Science.gov

Sample records for a-posteriori error estimation

  1. A posteriori error estimates for Maxwell equations

    NASA Astrophysics Data System (ADS)

    Schoeberl, Joachim

    2008-06-01

    Maxwell equations are posed as variational boundary value problems in the function space H(curl) and are discretized by Nedelec finite elements. In Beck et al., 2000, a residual-type a posteriori error estimator was proposed and analyzed under certain conditions on the domain. In the present paper, we prove the reliability of that error estimator on Lipschitz domains. The key is to establish new error estimates for the commuting quasi-interpolation operators recently introduced in J. Schoeberl, Commuting quasi-interpolation operators for mixed finite elements. Similar estimates are required for additive Schwarz preconditioning. To incorporate boundary conditions, we establish a new extension result.

  2. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two-dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  3. Implicit a posteriori error estimates for the Maxwell equations

    NASA Astrophysics Data System (ADS)

    Izsak, Ferenc; Harutyunyan, Davit; van der Vegt, Jaap J. W.

    2008-09-01

    An implicit a posteriori error estimation technique is presented and analyzed for the numerical solution of the time-harmonic Maxwell equations using Nedelec edge elements. For this purpose we define a weak formulation for the error on each element and provide an efficient and accurate numerical solution technique to solve the error equations locally. We investigate the well-posedness of the error equations and also consider the related eigenvalue problem for cubic elements. Numerical results for both smooth and non-smooth problems, including a problem with reentrant corners, show that an accurate prediction is obtained for the local error, and in particular the error distribution, which provides essential information to control an adaptation process. The error estimation technique is also compared with existing methods and provides significantly sharper estimates for a number of reported test cases.

  4. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The framework combines a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected from several available choices (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina), giving a powerful computational tool. The main aim is to capture solution discontinuities (in this case, shocks) using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio (stretched) elements. To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed number of elements.

  5. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

  6. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

  8. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.
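    As a rough illustration of the duality argument in the linear case only (the article also derives exact representations for nonlinear functionals and forms, which are not reproduced here), the error representation can be sketched as follows, where the bilinear form a, load functional \ell, output functional J and dual solution \varphi are generic placeholders:

```latex
% u: exact solution, u_h: discrete solution satisfying a(u_h, v_h) = \ell(v_h),
% J: linear output functional, \varphi: dual solution defined by a(v, \varphi) = J(v) for all v.
% Galerkin orthogonality allows subtracting any discrete interpolant \pi_h\varphi:
J(u) - J(u_h) = a(u - u_h, \varphi)
              = \ell(\varphi - \pi_h\varphi) - a(u_h, \varphi - \pi_h\varphi)
% i.e. the error in the output equals the residual of u_h weighted by the dual solution,
% which is the quantity that weighted residual approximations of the representation estimate.
```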

  9. A posteriori error estimation for hp -adaptivity for fourth-order equations

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.; Rangelova, Marina

    2010-04-01

    A posteriori error estimates developed to drive hp-adaptivity for second-order reaction-diffusion equations are extended to fourth-order equations. A C^1 hierarchical finite element basis is constructed from Hermite-Lobatto polynomials. A priori estimates of the error in several norms for both the interpolant and finite element solution are derived. In the latter case this requires a generalization of the well-known Aubin-Nitsche technique to time-dependent fourth-order equations. We show that the finite element solution and corresponding Hermite-Lobatto interpolant are asymptotically equivalent. A posteriori error estimators based on this equivalence for solutions at two orders are presented. Both are shown to be asymptotically exact on grids of uniform order. These estimators can be used to control various adaptive strategies. Computational results for linear steady-state and time-dependent equations corroborate the theory and demonstrate the effectiveness of the estimators in adaptive settings.

  10. Explicit a posteriori error estimates for eigenvalue analysis of heterogeneous elastic structures.

    SciTech Connect

    Walsh, Timothy Francis; Reese, Garth M.; Hetmaniuk, Ulrich L.

    2005-07-01

    An a posteriori error estimator is developed for the eigenvalue analysis of three-dimensional heterogeneous elastic structures. It constitutes an extension of a well-known explicit estimator to heterogeneous structures. We prove that our estimates are independent of the variations in material properties and independent of the polynomial degree of finite elements. Finally, we study numerically the effectivity of this estimator on several model problems.

  11. A posteriori error estimation of H1 mixed finite element method for the Benjamin-Bona-Mahony equation

    NASA Astrophysics Data System (ADS)

    Shafie, Sabarina; Tran, Thanh

    2017-08-01

    Error estimation of the H1 mixed finite element method for the Benjamin-Bona-Mahony equation is considered. The problem is reformulated into a system of first-order partial differential equations, which allows an approximation of the unknown function and its derivative. Local parabolic error estimates are introduced to approximate the true errors from the computed solutions, the so-called a posteriori error estimates. Numerical experiments show that the a posteriori error estimates converge to the true errors of the problem.

  12. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  13. Superconvergence and recovery type a posteriori error estimation for hybrid stress finite element method

    NASA Astrophysics Data System (ADS)

    Bai, YanHong; Wu, YongKe; Xie, XiaoPing

    2016-09-01

    Superconvergence and a posteriori error estimators of recovery type are analyzed for the 4-node hybrid stress quadrilateral finite element method proposed by Pian and Sumihara (Int. J. Numer. Meth. Engrg., 1984, 20: 1685-1695) for linear elasticity problems. Uniform superconvergence of order $O(h^{1+\min\{\alpha,1\}})$ with respect to the Lamé constant $\lambda$ is established for both the recovered gradients of the displacement vector and the stress tensor under a mesh assumption, where $\alpha>0$ is a parameter characterizing the distortion of meshes from parallelograms to quadrilaterals. A posteriori error estimators based on the recovered quantities are shown to be asymptotically exact. Numerical experiments confirm the theoretical results.

  14. A posteriori error estimates for the Johnson–Nédélec FEM–BEM coupling

    PubMed Central

    Aurada, M.; Feischl, M.; Karkulik, M.; Praetorius, D.

    2012-01-01

    Only very recently, Sayas [The validity of Johnson–Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM J Numer Anal 2009;47:3451–63] proved that the Johnson–Nédélec one-equation approach from [On the coupling of boundary integral and finite element methods. Math Comput 1980;35:1063–79] provides a stable coupling of finite element method (FEM) and boundary element method (BEM). In our work, we now adapt the analytical results for different a posteriori error estimates developed for the symmetric FEM–BEM coupling to the Johnson–Nédélec coupling. More precisely, we analyze the weighted-residual error estimator, the two-level error estimator, and different versions of (h−h/2)-based error estimators. In numerical experiments, we use these estimators to steer h-adaptive algorithms, and compare the effectivity of the different approaches. PMID:22347772

  15. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
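    The inexact solve of the global error problem mentioned above can be pictured with a few symmetric Gauss-Seidel sweeps; the following minimal sketch assumes a generic symmetric positive-definite matrix A and residual vector r standing in for the hierarchical error problem (the names and the toy system are hypothetical):

```python
import numpy as np

def sym_gauss_seidel(A, r, sweeps=3):
    """Approximate the solution of A e = r with a few symmetric Gauss-Seidel sweeps."""
    n = len(r)
    e = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):                        # forward sweep
            e[i] = (r[i] - A[i, :i] @ e[:i] - A[i, i + 1:] @ e[i + 1:]) / A[i, i]
        for i in reversed(range(n)):              # backward sweep
            e[i] = (r[i] - A[i, :i] @ e[:i] - A[i, i + 1:] @ e[i + 1:]) / A[i, i]
    return e

# Toy usage: a 1D Laplacian stands in for the matrix of the global error problem.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
r = np.ones(n)
e_approx = sym_gauss_seidel(A, r, sweeps=3)       # rough error field, e.g. for metric construction
```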

  16. Local a posteriori estimates for pointwise gradient errors in finite element methods for elliptic problems

    NASA Astrophysics Data System (ADS)

    Demlow, Alan

    2007-03-01

    We prove local a posteriori error estimates for pointwise gradient errors in finite element methods for a second-order linear elliptic model problem. First we split the local gradient error into a computable local residual term and a weaker global norm of the finite element error (the "pollution term"). Using a mesh-dependent weight, the residual term is bounded in a sharply localized fashion. In specific situations the pollution term may also be bounded by computable residual estimators. On nonconvex polygonal and polyhedral domains in two and three space dimensions, we may choose estimators for the pollution term which do not employ specific knowledge of corner singularities and which are valid on domains with cracks. The finite element mesh is only required to be simplicial and shape-regular, so that highly graded and unstructured meshes are allowed.

  17. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
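    The thesis computes element indicators by solving local element problems with equilibrated boundary fluxes; as a much simpler stand-in, the sketch below evaluates a classical explicit residual-type indicator for a 1D steady heat-conduction problem -u'' = f with piecewise-linear elements (the problem, mesh and solution are hypothetical, and the equilibrated-flux machinery is deliberately omitted):

```python
import numpy as np

def residual_indicators(x, u, f):
    """Explicit residual-type element error indicators for -u'' = f in 1D.

    eta_K^2 ~ h_K^2 * ||f||_{L2(K)}^2 + flux-jump terms at the element ends;
    the global energy-norm estimate collects the element contributions.
    """
    h = np.diff(x)                               # element sizes
    du = np.diff(u) / h                          # elementwise gradient of u_h
    f_mid = f(0.5 * (x[:-1] + x[1:]))            # midpoint value of the source term
    interior = h**2 * (h * f_mid**2)             # h_K^2 * ||f||_K^2 (midpoint rule)
    jumps = np.zeros_like(h)
    j = (du[1:] - du[:-1])**2                    # squared flux jumps at interior nodes
    jumps[:-1] += 0.5 * h[:-1] * j
    jumps[1:] += 0.5 * h[1:] * j
    eta_K = np.sqrt(interior + jumps)
    return eta_K, np.sqrt(np.sum(eta_K**2))      # element indicators, global estimate

# Hypothetical usage with a manufactured source term:
x = np.linspace(0.0, 1.0, 21)
u_h = np.sin(np.pi * x) / np.pi**2               # stand-in for a computed FE solution
eta_K, eta = residual_indicators(x, u_h, lambda s: np.sin(np.pi * s))
```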

  18. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  19. Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new method of metric recovery for minimization of $L_p$-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

  20. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  1. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  2. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  3. A functional-type a posteriori error estimate of approximate solutions for Reissner-Mindlin plates and its implementation

    NASA Astrophysics Data System (ADS)

    Frolov, Maxim; Chistiakova, Olga

    2017-06-01

    This paper is devoted to a numerical justification of the recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.

  4. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  5. An a-posteriori error estimator for linear elastic fracture mechanics using the stable generalized/extended finite element method

    NASA Astrophysics Data System (ADS)

    Lins, R. M.; Ferreira, M. D. C.; Proença, S. P. B.; Duarte, C. A.

    2015-12-01

    In this study, a recovery-based a-posteriori error estimator originally proposed for the Corrected XFEM is investigated in the framework of the stable generalized FEM (SGFEM). Both Heaviside and branch functions are adopted to enrich the approximations in the SGFEM. Some necessary adjustments to adapt the expressions defining the enhanced stresses in the original error estimator to the SGFEM framework are discussed. Relevant aspects such as effectivity indexes, error distribution, convergence rates and accuracy of the recovered stresses are used to highlight the main findings and the effectiveness of the error estimator. Two benchmark problems of 2-D fracture mechanics are selected to assess the robustness of the error estimator investigated here. The main findings of this investigation are that the SGFEM shows higher accuracy than the G/XFEM and a reduced sensitivity to blending element issues, and that the error estimator accurately captures these features for both methods.

  6. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    Szymczak, W. G.; Babuška, I. Report prepared under ONR contract N00014-77-0623. (Scanned DTIC record; the abstract text is not legible.)

  7. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve☆

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  8. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    NASA Astrophysics Data System (ADS)

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-06-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence.

  9. A posteriori error estimates for continuous/discontinuous Galerkin approximations of the Kirchhoff-Love buckling problem

    NASA Astrophysics Data System (ADS)

    Hansbo, Peter; Larson, Mats G.

    2015-11-01

    Second order buckling theory involves a one-way coupled problem where the stress tensor from a plane stress problem appears in an eigenvalue problem for the fourth order Kirchhoff plate. In this paper we present an a posteriori error estimate for the critical buckling load and mode corresponding to the smallest eigenvalue and associated eigenvector. A particular feature of the analysis is that we take into account the effect of the approximate computation of the stress tensor and also provide an error indicator for the plane stress problem. The Kirchhoff plate is discretized using a continuous/discontinuous finite element method based on standard continuous piecewise polynomial finite element spaces. The same finite element spaces can be used to solve the plane stress problem.

  10. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    SciTech Connect

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  12. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  13. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    SciTech Connect

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those that use adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities include: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first and second order IMEX schemes for advection-diffusion-reaction problems; postprocessing of the finite element solution; and a Bayesian framework for uncertainty quantification of porous media flows.

  14. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  15. A posteriori error estimators for the discrete ordinates approximation of the one-speed neutron transport equation

    SciTech Connect

    O'Brien, S.; Azmy, Y. Y.

    2013-07-01

    When calculating numerical solutions of the neutron transport equation it is important to have a measure of the accuracy of the solution. As the true solution is generally not known, a suitable estimation of the error must be made. The steady state transport equation possesses discretization errors in all its independent variables: angle, energy and space. In this work only spatial discretization errors are considered. An exact transport solution, in which the degree of regularity of the exact flux across the singular characteristic is controlled, is manufactured to determine the numerical solution's true discretization error. This solution is then projected onto a Legendre polynomial space in order to form an exact solution on the same basis space as the Discontinuous Galerkin Finite Element Method (DGFEM) numerical solution, enabling computation of the true error. Over a series of test problems the true error is compared to the error estimated by: Ragusa and Wang (RW), residual source (LER) and cell discontinuity (JD) estimators. The validity and accuracy of the considered estimators are primarily assessed by considering the effectivity index and the global L2 norm of the error. In general, RW excels at approximating the true error distribution but usually under-estimates its magnitude; the LER estimator emulates the true error distribution but frequently over-estimates the magnitude of the true error; the JD estimator poorly captures the true error distribution and generally under-estimates the error about singular characteristics but over-estimates it elsewhere. (authors)
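    The two assessment quantities named above are easy to state concretely; the sketch below computes an effectivity index (estimated over true error norm) and global L2 norms from per-cell error values (the inputs are hypothetical placeholders for whatever the transport solver provides):

```python
import numpy as np

def effectivity_and_norms(estimated, true, volumes):
    """Effectivity index and global L2 norms of estimated and true per-cell errors.

    An effectivity index close to 1 means the estimator matches the true error magnitude.
    """
    est_l2 = np.sqrt(np.sum(volumes * estimated**2))
    true_l2 = np.sqrt(np.sum(volumes * true**2))
    return est_l2 / true_l2, est_l2, true_l2

# Toy usage: an estimator that under-estimates the true error by about 20%.
rng = np.random.default_rng(0)
true_err = rng.normal(scale=1e-3, size=100)
est_err = 0.8 * true_err + rng.normal(scale=1e-5, size=100)
vols = np.full(100, 1.0 / 100)                   # cell volumes of a uniform mesh
index, est_norm, true_norm = effectivity_and_norms(est_err, true_err, vols)
```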

  16. An asymptotically exact, pointwise, a posteriori error estimator for the finite element method with super convergence properties

    SciTech Connect

    Hugger, J.

    1995-12-31

    When the finite element solution of a variational problem possesses certain superconvergence properties, it is possible, very inexpensively, to obtain a correction term providing an additional order of approximation of the solution. The correction can be used for error estimation locally or globally in whatever norm is preferred, or if no error estimation is wanted it can be used for postprocessing of the solution to improve the quality. In this paper such a correction term is described for the general case of n-dimensional, linear or nonlinear problems. Computational evidence of the performance in one space dimension is given with special attention to the effects of the appearance of singularities and zeros of derivatives in the exact solution.

  17. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  18. A comparative estimation of the errors in the sunspot coordinate catalog compiled at Cuba and the methods of their a posteriori decrease.

    NASA Astrophysics Data System (ADS)

    Nagovitsyn, Yu. A.; Nikonov, O. V.; Perez Doval, J.

    1992-06-01

    A comparison of the accuracy of the Cuba, Greenwich and Debrecen catalogs of sunspot coordinates has been made. A new method for a posteriori decrease of coordinate errors is given. The following conclusions have been made: 1. The accuracy of absolute heliographic coordinates is 0.26 heliographic degrees for the Cuban catalog and 0.32 heliographic degrees for the Greenwich catalog. 2. Reduction to smoothed coordinate values improves the accuracy by a factor of 1.5. 3. Reduction, within the framework of the proposed technique, to "pseudorelative" coordinates enables an improvement of the initial accuracy of sunspot coordinate measurements by a factor of 5-7.

  19. Pollution Error in the h-Version of the Finite Element Method and the Local Quality of A-Posteriori Error Estimators

    DTIC Science & Technology

    1994-02-01

    ...stop; otherwise proceed to the next step. Here N denotes the number of elements in the mesh T. 4. Compute the target error e_target for the optimal mesh, using the principle of equidistribution of error (formula (4.2) of the report). 5. For each element τ, predict the optimal local mesh-size from formula (4.3) of the report, where h_opt is the predicted optimal mesh-size for the subdomain within the element τ and r is an exponent which…
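    Since the report's formulas (4.2)-(4.3) are not legible in this record, the following is only a generic sketch of the equidistribution-of-error idea they refer to: give every element the same share of a global error target and predict the local mesh size from an assumed convergence rate (the names, the rate and the tolerance are illustrative assumptions):

```python
import numpy as np

def predict_mesh_sizes(h, eta, global_tol, rate=2.0):
    """Predict optimal local mesh sizes by equidistributing a global error target.

    Assumes the local error behaves like eta_K ~ C * h_K**rate, so each of the
    N elements should carry e_target = global_tol / sqrt(N) of the error.
    """
    e_target = global_tol / np.sqrt(len(h))       # per-element target error
    return h * (e_target / eta) ** (1.0 / rate)   # predicted optimal element sizes

# Hypothetical usage with made-up element sizes and error indicators:
h = np.full(10, 0.1)
eta = np.linspace(1e-3, 1e-1, 10)
h_new = predict_mesh_sizes(h, eta, global_tol=1e-2)
```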

  20. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    SciTech Connect

    Estep, Donald

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  1. Extracting volatility signal using maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and hence heavy-tailed marginal distributions of the log-returns. We consider two routes to choose the regularization, and we compare our MAP estimate to the realized volatility measure for three exchange rates.

  2. Cost functions to estimate a posteriori probabilities in multiclass problems.

    PubMed

    Cid-Sueiro, J; Arribas, J I; Urbán-Muñoz, S; Figueiras-Vidal, A R

    1999-01-01

    The problem of designing cost functions to estimate a posteriori probabilities in multiclass problems is addressed in this paper. We establish necessary and sufficient conditions that these costs must satisfy in one-class one-output networks whose outputs are consistent with probability laws. We focus our attention on a particular subset of the corresponding cost functions; those which verify two usually interesting properties: symmetry and separability (well-known cost functions, such as the quadratic cost or the cross entropy are particular cases in this subset). Finally, we present a universal stochastic gradient learning rule for single-layer networks, in the sense of minimizing a general version of these cost functions for a wide family of nonlinear activation functions.
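    The cross entropy mentioned above is the most familiar member of this family of cost functions; the minimal sketch below (a single-layer softmax network trained by gradient descent on synthetic data) illustrates the well-known property that minimizing it drives the network outputs toward the a posteriori class probabilities. It is not the paper's general cost family, and all names and parameters are illustrative:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)          # numerical stabilization
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_posterior_estimator(X, y, n_classes, lr=0.1, epochs=200, seed=0):
    """Single-layer network trained with the cross-entropy cost.

    With this cost, the outputs approximate P(class | x) as training proceeds.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                      # one-hot targets
    for _ in range(epochs):
        P = softmax(X @ W + b)                    # current posterior estimates
        W -= lr * X.T @ (P - Y) / len(X)          # gradient of the mean cross-entropy
        b -= lr * (P - Y).mean(axis=0)
    return W, b

# Toy usage: two overlapping Gaussian classes in one dimension.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-1, 1, 500), rng.normal(1, 1, 500)])[:, None]
labels = np.array([0] * 500 + [1] * 500)
W, b = train_posterior_estimator(x, labels, n_classes=2)
```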

  3. A Posteriori Correction of Forecast and Observation Error Variances

    NASA Technical Reports Server (NTRS)

    Rukhovets, Leonid

    2005-01-01

    The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) $a_3 = \mu_3/\sigma^3$ and the kurtosis $a_4 = \mu_4/\sigma^4 - 3$, where $\mu_i$ is the $i$-th order moment and $\sigma$ is the standard deviation. It is well known that for a normal distribution $a_3 = a_4 = 0$.
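    The two normality measures quoted above can be computed directly from a sample of O-F residuals; a minimal sketch (the synthetic residuals are only a placeholder for real innovation statistics):

```python
import numpy as np

def normality_measures(omf):
    """Skewness a3 = mu3/sigma^3 and excess kurtosis a4 = mu4/sigma^4 - 3 of O-F residuals.

    Both are zero for a normal distribution, which is the check described in the abstract.
    """
    d = np.asarray(omf, dtype=float)
    d = d - d.mean()
    sigma = np.sqrt(np.mean(d**2))
    return np.mean(d**3) / sigma**3, np.mean(d**4) / sigma**4 - 3.0

# Hypothetical usage with synthetic residuals; both values should be near zero here.
rng = np.random.default_rng(2)
a3, a4 = normality_measures(rng.normal(0.0, 1.5, size=10_000))
```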

  4. A posteriori compensation of the systematic error due to polynomial interpolation in digital image correlation

    NASA Astrophysics Data System (ADS)

    Baldi, Antonio; Bertolino, Filippo

    2013-10-01

    It is well known that displacement components estimated using digital image correlation are affected by a systematic error due to the polynomial interpolation required by the numerical algorithm. The magnitude of the bias depends on the characteristics of the speckle pattern (i.e., the frequency content of the image), on the fractional part of the displacements and on the type of polynomial used for intensity interpolation. In the literature, B-spline polynomials are pointed out as introducing the smallest errors, whereas bilinear and cubic interpolants generally give the worst results. However, the small bias of B-spline polynomials is partially counterbalanced by a somewhat larger execution time. We will try to improve the accuracy of lower order polynomials by a posteriori correcting their results so as to obtain a faster and more accurate analysis.

  5. Consistent robust a posteriori error majorants for approximate solutions of diffusion-reaction equations

    NASA Astrophysics Data System (ADS)

    Korneev, V. G.

    2016-11-01

    The efficiency of the error control of numerical solutions of partial differential equations depends entirely on two factors: the accuracy of an a posteriori error majorant, and the computational cost of its evaluation for some test function/vector-function plus the cost of obtaining the latter. In this paper, consistency of an a posteriori bound means that it is of the same order as the respective unimprovable a priori bound; it is therefore the basic characteristic related to the first factor. The paper is dedicated to elliptic diffusion-reaction equations. We present a guaranteed robust a posteriori error majorant that is effective for any nonnegative constant reaction coefficient (r.c.). For a wide range of finite element solutions on quasiuniform meshes the majorant is consistent. For large values of the r.c. the majorant coincides with the majorant of Aubin (1972), which, as is known, is inconsistent for relatively small r.c. ($< ch^{-2}$) and loses its meaning as the r.c. approaches zero. Our majorant also improves on some other majorants derived for the Poisson and reaction-diffusion equations.

  6. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  7. Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong

    2012-01-01

    For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…

  9. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  11. Image Bit-depth Enhancement via Maximum-A-Posteriori Estimation of AC Signal.

    PubMed

    Wan, Pengfei; Cheung, Gene; Florencio, Dinei; Zhang, Cha; Au, Oscar C

    2016-04-13

    When images at low bit-depth are rendered on high bit-depth displays, the missing least significant bits need to be estimated. We study the image bit-depth enhancement problem, estimating an original image from its quantized version, from a minimum mean squared error (MMSE) perspective. We first argue that a graph-signal smoothness prior, one defined on a graph embedding the image structure, is an appropriate prior for the bit-depth enhancement problem. We next show that solving for the MMSE solution directly is in general too computationally expensive to be practical. We then propose an efficient approximation strategy. Specifically, we first estimate the AC component of the desired signal in a maximum a posteriori (MAP) formulation, efficiently computed via convex programming. We then compute the DC component with an MMSE criterion in closed form given the computed AC component. Experiments show that our proposed two-step approach has improved performance over conventional bit-depth enhancement schemes in both objective and subjective comparisons.

  12. Conjugate quasilinear Dirichlet and Neumann problems and a posteriori error bounds

    NASA Technical Reports Server (NTRS)

    Lavery, J. E.

    1976-01-01

    Quasilinear Dirichlet and Neumann problems on a rectangle D with boundary D′ are considered. Using these concepts, conjugate problems (a pair of one Dirichlet and one Neumann problem whose energy minima add to zero) are introduced. From the concept of conjugate problems, two-sided bounds for the energy of the exact solution of any given Dirichlet or Neumann problem are constructed. These two-sided bounds for the energy at the exact solution are in turn used to obtain a posteriori error bounds for the norm of the difference of the approximate and exact solutions of the problem. These bounds do not involve the unknown exact solution and are easily constructed numerically.

  13. Item exposure control for multidimensional computer adaptive testing under maximum likelihood and expected a posteriori estimation.

    PubMed

    Huebner, Alan R; Wang, Chun; Quinlan, Kari; Seubert, Lauren

    2016-12-01

    Item bank stratification has been shown to be an effective method for combating item overexposure in both uni- and multidimensional computer adaptive testing. However, item bank stratification cannot guarantee that items will not be overexposed, that is, exposed at a rate exceeding some prespecified threshold. In this article, we propose enhancing stratification for multidimensional computer adaptive tests by combining it with the item eligibility method, a technique for controlling the maximum exposure rate in computerized tests. The performance of the method was examined via a simulation study and compared to existing methods of item selection and exposure control. Also, for the first time, maximum likelihood (MLE) and expected a posteriori (EAP) estimation of examinee ability were compared side by side in a multidimensional computer adaptive test. The simulation suggested that the proposed method is effective in suppressing the maximum item exposure rate with very little loss of measurement accuracy and precision. As compared to MLE, EAP generates smaller mean squared errors of the ability estimates in all simulation conditions.
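    For readers unfamiliar with the two ability estimators being compared, the sketch below contrasts MLE and EAP for a one-dimensional 2PL item response model (the article works with multidimensional tests; the one-dimensional reduction, the item parameters and the response pattern here are purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p_2pl(theta, a, b):
    """Probability of a correct response under a two-parameter logistic (2PL) model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def eap_estimate(resp, a, b, grid=np.linspace(-4, 4, 81)):
    """Expected a posteriori ability estimate with a standard-normal prior (grid quadrature)."""
    p = p_2pl(grid[:, None], a, b)                          # grid points x items
    like = np.prod(np.where(resp, p, 1.0 - p), axis=1)      # likelihood on the grid
    post = like * np.exp(-0.5 * grid**2)                    # unnormalized posterior
    return np.sum(grid * post) / np.sum(post)

def mle_estimate(resp, a, b):
    """Maximum likelihood ability estimate (no prior), by bounded 1D optimization."""
    def neg_loglik(theta):
        p = p_2pl(theta, a, b)
        return -np.sum(np.where(resp, np.log(p), np.log(1.0 - p)))
    return minimize_scalar(neg_loglik, bounds=(-4, 4), method="bounded").x

# Hypothetical item bank and response pattern:
a_par = np.array([1.0, 1.2, 0.8, 1.5])
b_par = np.array([-0.5, 0.0, 0.5, 1.0])
resp = np.array([True, True, False, False])
theta_eap = eap_estimate(resp, a_par, b_par)
theta_mle = mle_estimate(resp, a_par, b_par)
```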

  14. Evaluation of a Maximum A-Posteriori Slope Estimator for a Hartmann Wavefront Sensor

    DTIC Science & Technology

    1997-12-01

    Master's thesis in electrical engineering by Troy B. Van…, presented to the School of Engineering of the Air Force Institute of Technology, Air University. The thesis evaluates a maximum a-posteriori slope estimator for a Hartmann wavefront sensor and discusses other post-processing techniques such as inverse filtering and blind deconvolution [1, 20], as well as related research by the Air Force Maui… (remainder of the scanned record is not legible).

  15. Object detection and amplitude estimation based on maximum a posteriori reconstructions

    SciTech Connect

    Hanson, K.M.

    1990-01-01

    We report on the behavior of the linear maximum a posteriori (MAP) tomographic reconstruction technique as a function of the assumed rms noise $\sigma_n$ in the measurements, which specifies the degree of confidence in the measurement data. The unconstrained MAP reconstructions are evaluated on the basis of the performance of two related tasks: object detection and amplitude estimation. It is found that the detectability of medium-sized discs remains constant up to relatively large $\sigma_n$ before slowly diminishing. However, the amplitudes of the discs estimated from the MAP reconstructions increasingly deviate from their actual values as $\sigma_n$ increases.

  16. Variance Difference between Maximum Likelihood Estimation Method and Expected A Posteriori Estimation Method Viewed from Number of Test Items

    ERIC Educational Resources Information Center

    Mahmud, Jumailiyah; Sutikno, Muzayanah; Naga, Dali S.

    2016-01-01

    The aim of this study is to determine the variance difference between maximum likelihood and expected a posteriori estimation methods viewed from the number of test items in an aptitude test. The variance indicates the accuracy achieved by both the maximum likelihood and Bayes estimation methods. The test consists of three subtests, each with 40 multiple-choice…

  17. A maximum a posteriori probability time-delay estimation for seismic signals

    NASA Astrophysics Data System (ADS)

    Carrier, A.; Got, J.-L.

    2014-09-01

    Cross-correlation and cross-spectral time delays often exhibit strong outliers due to ambiguities or cycle jumps in the correlation function. Their number increases when signal-to-noise, signal similarity or spectral bandwidth decreases. Such outliers heavily determine the time-delay probability density function and the results of further computations (e.g. double-difference location and tomography) using these time delays. In the present research we expressed cross-correlation as a function of the squared difference between signal amplitudes and show that they are closely related. We used this difference as a cost function whose minimum is reached when signals are aligned. Ambiguities may be removed in this function by using a priori information. We propose using the traveltime difference as a priori time-delay information. By modelling the probability density function of the traveltime difference by a Cauchy distribution and the probability density function of the data (differences of seismic signal amplitudes) by a Laplace distribution we were able to find explicitly the time-delay a posteriori probability density function. The location of the maximum of this a posteriori probability density function is the maximum a posteriori time-delay estimation for earthquake signals. Using this estimation to calculate time delays for earthquakes on the south flank of Kilauea statistically improved the cross-correlation time-delay estimation for these data and resulted in successful double-difference relocation for an increased number of earthquakes. This robust time-delay estimation improves the spatiotemporal resolution of seismicity rates in the south flank of Kilauea.
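    A minimal grid-search sketch of the estimator described above, combining an L1 (Laplace) misfit on amplitude differences with a Cauchy prior centred at the a priori traveltime difference; the scale parameters, the integer-lag search and the synthetic traces are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def map_time_delay(s1, s2, dt, tt_diff, b=1.0, gamma=0.05, max_lag=None):
    """Maximum a posteriori time-delay estimate between two seismic traces.

    Laplace data distribution (L1 misfit on amplitude differences) and Cauchy
    prior on the delay centred at the a priori traveltime difference tt_diff.
    """
    max_lag = max_lag if max_lag is not None else len(s1) // 4
    lags = np.arange(-max_lag, max_lag + 1)
    cost = np.empty(len(lags))
    for k, lag in enumerate(lags):
        aligned = np.roll(s2, -lag)                 # undo a delay of `lag` samples (wraps: toy only)
        misfit = np.sum(np.abs(s1 - aligned)) / b   # negative log Laplace likelihood (up to a constant)
        tau = lag * dt
        prior = np.log(1.0 + ((tau - tt_diff) / gamma) ** 2)  # negative log Cauchy prior
        cost[k] = misfit + prior
    return lags[np.argmin(cost)] * dt               # MAP delay in seconds

# Toy usage: trace2 is trace1 delayed by 10 samples (0.1 s) plus noise.
rng = np.random.default_rng(3)
t = np.arange(0.0, 2.0, 0.01)
trace1 = np.exp(-((t - 1.0) / 0.1) ** 2) + 0.05 * rng.normal(size=t.size)
trace2 = np.roll(trace1, 10) + 0.05 * rng.normal(size=t.size)
delay = map_time_delay(trace1, trace2, dt=0.01, tt_diff=0.1)
```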

  18. Three-dimensional super-resolution structured illumination microscopy with maximum a posteriori probability image estimation.

    PubMed

    Lukeš, Tomáš; Křížek, Pavel; Švindrych, Zdeněk; Benda, Jakub; Ovesný, Martin; Fliegel, Karel; Klíma, Miloš; Hagen, Guy M

    2014-12-01

    We introduce and demonstrate a new high performance image reconstruction method for super-resolution structured illumination microscopy based on maximum a posteriori probability estimation (MAP-SIM). Imaging performance is demonstrated on a variety of fluorescent samples of different thickness, labeling density and noise levels. The method provides good suppression of out of focus light, improves spatial resolution, and allows reconstruction of both 2D and 3D images of cells even in the case of weak signals. The method can be used to process both optical sectioning and super-resolution structured illumination microscopy data to create high quality super-resolution images.

  19. Maximum a posteriori probability estimation for localizing damage using ultrasonic guided waves

    NASA Astrophysics Data System (ADS)

    Flynn, Eric B.; Todd, Michael D.; Wilcox, Paul D.; Drinkwater, Bruce W.; Croxford, Anthony J.

    2011-04-01

    Presented is an approach to damage localization for guided wave structural health monitoring (GWSHM) in plate-like structures. In this mode of SHM, transducers excite and sense guided waves in order to detect and characterize the presence of damage. The premise of the presented localization approach is simple: use as the estimated damage location the point on the structure with the maximum a posteriori probability (MAP) of being the location of damage (i.e., the most probable location given a set of sensor measurements). This is accomplished by constructing a minimally-informed statistical model of the GWSHM process. Parameters of the model which are unknown, such as scattered wave amplitude, are assigned non-informative Bayesian prior distributions and averaged out of the a posteriori probability calculation. Using an ensemble of measurements from an instrumented plate with stiffening stringers, the performance of the MAP estimate is compared to that of what were found to be the two most effective previously reported algorithms. The MAP estimate proved superior in nearly all test cases and was particularly effective in localizing damage using very sparse arrays of as few as three transducers.

  20. Phylogenetic assignment of Mycobacterium tuberculosis Beijing clinical isolates in Japan by maximum a posteriori estimation.

    PubMed

    Seto, Junji; Wada, Takayuki; Iwamoto, Tomotada; Tamaru, Aki; Maeda, Shinji; Yamamoto, Kaori; Hase, Atsushi; Murakami, Koichi; Maeda, Eriko; Oishi, Akira; Migita, Yuji; Yamamoto, Taro; Ahiko, Tadayuki

    2015-10-01

    Intra-species phylogeny of Mycobacterium tuberculosis has been regarded as a clue to estimate its potential risk to develop drug-resistance and various epidemiological tendencies. Genotypic characterization of variable number of tandem repeats (VNTR), a standard tool to ascertain transmission routes, has been improving as a public health effort, but determining phylogenetic information from those efforts alone is difficult. We present a platform based on maximum a posteriori (MAP) estimation to estimate phylogenetic information for M. tuberculosis clinical isolates from individual profiles of VNTR types. This study used 1245 M. tuberculosis clinical isolates obtained throughout Japan for construction of an MAP estimation formula. Two MAP estimation formulae, classification of Beijing family and other lineages, and classification of five Beijing sublineages (ST11/26, STK, ST3, and ST25/19 belonging to the ancient Beijing subfamily and modern Beijing subfamily), were created based on 24 loci VNTR (24Beijing-VNTR) profiles and phylogenetic information of the isolates. Recursive estimation based on the formulae showed high concordance with their authentic phylogeny by multi-locus sequence typing (MLST) of the isolates. The formulae might further support phylogenetic estimation of the Beijing lineage M. tuberculosis from the VNTR genotype with various geographic backgrounds. These results suggest that MAP estimation can function as a reliable probabilistic process to append phylogenetic information to VNTR genotypes of M. tuberculosis independently, which might improve the usage of genotyping data for control, understanding, prevention, and treatment of TB.
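    As a toy illustration of MAP assignment from VNTR profiles, the sketch below scores candidate lineages by a prior times per-locus repeat-count frequencies; the lineages, loci, and frequency tables are invented, and the per-locus independence assumption is a simplification rather than the study's actual formula.

```python
import numpy as np

def map_lineage(profile, priors, locus_tables, pseudo=1e-3):
    """Return the lineage with maximum posterior probability for a VNTR profile.

    priors:       dict lineage -> prior probability
    locus_tables: dict lineage -> list (one entry per locus) of dicts repeat_count -> frequency
    Per-locus independence is an illustrative simplification, not the study's formula."""
    log_post = {}
    for lineage, prior in priors.items():
        lp = np.log(prior)
        for locus, count in enumerate(profile):
            lp += np.log(locus_tables[lineage][locus].get(count, pseudo))
        log_post[lineage] = lp
    return max(log_post, key=log_post.get), log_post

# Invented two-locus example with two candidate groups.
priors = {"Beijing": 0.4, "non-Beijing": 0.6}
locus_tables = {
    "Beijing":     [{2: 0.7, 3: 0.3}, {4: 0.2, 5: 0.8}],
    "non-Beijing": [{2: 0.2, 3: 0.8}, {4: 0.9, 5: 0.1}],
}
print(map_lineage((2, 5), priors, locus_tables)[0])   # -> "Beijing"
```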

  1. Noise stochastic corrected maximum a posteriori estimator for birefringence imaging using polarization-sensitive optical coherence tomography

    PubMed Central

    Kasaragod, Deepa; Makita, Shuichi; Hong, Young-Joo; Yasuno, Yoshiaki

    2017-01-01

    This paper presents a noise-stochastic corrected maximum a posteriori estimator for birefringence imaging using Jones matrix optical coherence tomography. The estimator described in this paper is based on the relationship between the probability distribution functions of the measured birefringence and effective signal-to-noise ratio (ESNR) and those of the true birefringence and true ESNR. The Monte Carlo method is used to numerically describe this relationship, and adaptive 2D kernel density estimation provides the likelihood for a posteriori estimation of the true birefringence. Improved estimation is shown for the new estimator, which incorporates a stochastic model of the ESNR, in comparison to the previous estimator; both are based on the Jones matrix noise model. A comparison with the mean estimator is also made. Numerical simulation validates the superiority of the new estimator, and its superior performance was also shown by in vivo measurement of the optic nerve head. PMID:28270974

  2. Maximum A Posteriori Bayesian Estimation of Chromatographic Parameters by Limited Number of Experiments.

    PubMed

    Wiczling, Paweł; Kubik, Łukasz; Kaliszan, Roman

    2015-07-21

    The aim of this work was to develop a nonlinear mixed-effect chromatographic model able to describe the retention times of weak acids and bases in all possible combinations of organic modifier content and mobile-phase pH. Further, we aimed to identify the influence of basic covariates, like lipophilicity (log P), dissociation constant (pK(a)), and polar surface area (PSA), on the intercompound variability of chromatographic parameters. Lastly, we aimed to propose an optimal, limited experimental design for estimating the parameters through a maximum a posteriori (MAP) Bayesian method, to facilitate the method development process. The data set comprised retention times for two series of organic modifier content collected at different pH for a large series of acids and bases. The obtained typical parameters and their distribution were subsequently used as priors to improve the estimation process from a reduced design with a variable number of preliminary experiments. The MAP Bayesian estimator was validated using two external-validation data sets. The common literature model was used to relate analyte retention time with mobile-phase pH and organic modifier content. A set of QSRR-based covariate relationships was established. It turned out that four preliminary experiments and prior information that includes analyte pK(a), log P, acid/base type, and PSA are sufficient to accurately predict analyte retention in virtually all combined changes of pH and organic modifier content. The MAP Bayesian estimator of all important chromatographic parameters controlling retention in a pH/organic modifier gradient was developed. It can be used to improve parameter estimation using a limited experimental design.
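    A generic sketch of MAP parameter estimation from a handful of experiments with population priors is given below; the retention model, prior means, and variances are placeholders, not the nonlinear mixed-effect chromatographic model of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def map_fit(t_obs, x_design, model, theta_prior, prior_sd, sigma=0.1):
    """MAP estimate: Gaussian residual likelihood plus independent Gaussian priors on
    the parameters. With only a few observations the prior keeps the fit identifiable
    (generic illustration of the approach described above)."""
    def neg_log_post(theta):
        resid = t_obs - model(theta, x_design)
        return 0.5 * np.sum(resid ** 2) / sigma ** 2 \
             + 0.5 * np.sum(((theta - theta_prior) / prior_sd) ** 2)
    return minimize(neg_log_post, theta_prior, method="Nelder-Mead").x

# Placeholder retention model: retention time falls linearly with modifier content.
model = lambda th, x: th[0] - th[1] * x
x_design = np.array([0.2, 0.4, 0.6, 0.8])      # four preliminary experiments
t_obs = np.array([3.1, 2.2, 1.6, 1.1])         # invented retention times
theta_hat = map_fit(t_obs, x_design, model,
                    theta_prior=np.array([3.5, 2.5]), prior_sd=np.array([1.0, 1.0]))
print(theta_hat)
```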

  3. Maximum a posteriori estimation of crystallographic phases in X-ray diffraction tomography

    PubMed Central

    Gürsoy, Doĝa; Biçer, Tekin; Almer, Jonathan D.; Kettimuthu, Raj; Stock, Stuart R.; De Carlo, Francesco

    2015-01-01

    A maximum a posteriori approach is proposed for X-ray diffraction tomography for reconstructing three-dimensional spatial distribution of crystallographic phases and orientations of polycrystalline materials. The approach maximizes the a posteriori density which includes a Poisson log-likelihood and an a priori term that reinforces expected solution properties such as smoothness or local continuity. The reconstruction method is validated with experimental data acquired from a section of the spinous process of a porcine vertebra collected at the 1-ID-C beamline of the Advanced Photon Source, at Argonne National Laboratory. The reconstruction results show significant improvement in the reduction of aliasing and streaking artefacts, and improved robustness to noise and undersampling compared to conventional analytical inversion approaches. The approach has the potential to reduce data acquisition times, and significantly improve beamtime efficiency. PMID:25939627
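    The sketch below writes out a schematic form of such an objective, combining a Poisson log-likelihood with a quadratic local-smoothness penalty on a toy 1D problem; the forward operator, data, and prior weight are invented and do not correspond to the diffraction-tomography setup of the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(x, A, counts, beta):
    """Negative a posteriori objective: Poisson log-likelihood plus a quadratic
    local-smoothness prior (schematic form; the paper's operator, data, and prior
    weight are experiment-specific)."""
    lam = A @ x + 1e-12                          # expected counts, kept positive
    poisson_nll = np.sum(lam - counts * np.log(lam))
    smoothness = beta * np.sum(np.diff(x) ** 2)  # penalizes jumps between neighbours
    return poisson_nll + smoothness

rng = np.random.default_rng(2)
A = 50.0 * rng.uniform(size=(40, 20))            # assumed forward (projection) operator
x_true = np.r_[np.zeros(5), np.ones(10), np.zeros(5)]
counts = rng.poisson(A @ x_true)                 # simulated photon counts
res = minimize(neg_log_posterior, np.ones(20), args=(A, counts, 0.5),
               bounds=[(0.0, None)] * 20)        # non-negativity constraint
print(np.round(res.x, 2))                        # roughly recovers the piecewise-constant profile
```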

  4. A model selection algorithm for a posteriori probability estimation with neural networks.

    PubMed

    Arribas, Juan Ignacio; Cid-Sueiro, Jesús

    2005-07-01

    This paper proposes a novel algorithm to jointly determine the structure and the parameters of a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, so called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP) whose outputs can be understood as probabilities although results shown can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes.

  5. Blind deconvolution of images with model discrepancies using maximum a posteriori estimation with heavy-tailed priors

    NASA Astrophysics Data System (ADS)

    Kotera, Jan; Šroubek, Filip

    2015-02-01

    Single-image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. Such a task is severely ill-posed, and typical approaches involve heuristic or other steps without clear mathematical explanation to arrive at an acceptable solution. We show that a straightforward maximum a posteriori estimation incorporating sparse priors and a mechanism to deal with boundary artifacts, combined with an efficient numerical method, can produce results which compete with or outperform much more complicated state-of-the-art methods. Our method extends naturally to deal with overexposure in low-light photography, where the linear blurring model is violated.

  6. A Posteriori Analysis for Hydrodynamic Simulations Using Adjoint Methodologies

    SciTech Connect

    Woodward, C S; Estep, D; Sandelin, J; Wang, H

    2009-02-26

    This report contains results of analysis done during an FY08 feasibility study investigating the use of adjoint methodologies for a posteriori error estimation for hydrodynamics simulations. We developed an approach to adjoint analysis for these systems through the use of modified equations and viscosity solutions. Targeting first the 1D Burgers equation, we include a verification of the adjoint operator for the modified equation for the Lax-Friedrichs scheme, then derivations of an a posteriori error analysis for a finite difference scheme and a discontinuous Galerkin scheme applied to this problem. We include some numerical results showing the use of the error estimate. Lastly, we develop a computable a posteriori error estimate for the MAC scheme applied to the stationary Navier-Stokes equations.

  7. A Maximum a Posteriori Estimation Framework for Robust High Dynamic Range Video Synthesis

    NASA Astrophysics Data System (ADS)

    Li, Yuelong; Lee, Chul; Monga, Vishal

    2017-03-01

    High dynamic range (HDR) image synthesis from multiple low dynamic range (LDR) exposures continues to be actively researched. The extension to HDR video synthesis is a topic of significant current interest due to potential cost benefits. For HDR video, a stiff practical challenge presents itself in the form of accurate correspondence estimation of objects between video frames. In particular, loss of data resulting from poor exposures and varying intensity makes conventional optical flow methods highly inaccurate. We avoid exact correspondence estimation by proposing a statistical approach via maximum a posteriori (MAP) estimation, and under appropriate statistical assumptions and choice of priors and models, we reduce it to an optimization problem of solving for the foreground and background of the target frame. We obtain the background through rank minimization and estimate the foreground via a novel multiscale adaptive kernel regression technique, which implicitly captures local structure and temporal motion by solving an unconstrained optimization problem. Extensive experimental results on both real and synthetic datasets demonstrate that our algorithm is more capable of delivering high-quality HDR videos than current state-of-the-art methods, under both subjective and objective assessments. Furthermore, a thorough complexity analysis reveals that our algorithm achieves a better complexity-performance trade-off than conventional methods.

  8. Noise-bias and polarization-artifact corrected optical coherence tomography by maximum a-posteriori intensity estimation

    PubMed Central

    Chan, Aaron C.; Hong, Young-Joo; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2017-01-01

    We propose using maximum a-posteriori (MAP) estimation to improve the image signal-to-noise ratio (SNR) in polarization diversity (PD) optical coherence tomography. PD-detection removes polarization artifacts, which are common when imaging highly birefringent tissue or when using a flexible fiber catheter. However, dividing the probe power to two polarization detection channels inevitably reduces the SNR. Applying MAP estimation to PD-OCT allows for the removal of polarization artifacts while maintaining and improving image SNR. The effectiveness of the MAP-PD method is evaluated by comparing it with MAP-non-PD, intensity averaged PD, and intensity averaged non-PD methods. Evaluation was conducted in vivo with human eyes. The MAP-PD method is found to be optimal, demonstrating high SNR and artifact suppression, especially for highly birefringent tissue, such as the peripapillary sclera. The MAP-PD based attenuation coefficient image also shows better differentiation of attenuation levels than non-MAP attenuation images. PMID:28736656

  9. A Kernel Density Estimator-Based Maximum A Posteriori Image Reconstruction Method for Dynamic Emission Tomography Imaging.

    PubMed

    Ihsani, Alvin; Farncombe, Troy H

    2016-05-01

    A novel maximum a posteriori (MAP) method for dynamic single-photon emission computed tomography image reconstruction is proposed. The prior probability is modeled as a multivariate kernel density estimator (KDE), effectively modeling the prior probability non-parametrically, with the aim of reducing the effects of artifacts arising from inconsistencies in projection measurements in low-count regimes where projections are dominated by noise. The proposed prior spatially and temporally limits the variation of time-activity functions (TAFs) and attracts similar TAFs together. The similarity between TAFs is determined by the spatial and range scaling parameters of the KDE-like prior. The resulting iterative image reconstruction method is evaluated using two simulated phantoms, namely the extended cardiac-torso (XCAT) heart phantom and a simulated Mini-Deluxe Phantom. The phantoms were chosen to observe the effects of the proposed prior on the TAFs based on the vicinity and abutments of regions with different activities. Our results show the effectiveness of the proposed iterative reconstruction method, especially in low-count regimes, which provides better uniformity within each region of activity, significant reduction of spatiotemporal variations caused by noise, and sharper separation between different regions of activity than expectation maximization and an MAP method employing a more traditional Gibbs prior.

  10. A Kernel Density Estimator-Based Maximum A Posteriori Image Reconstruction Method for Dynamic Emission Tomography Imaging.

    PubMed

    Ihsani, Alvin; Farncombe, Troy

    2016-03-25

    A novel maximum a posteriori (MAP) method for dynamic SPECT image reconstruction is proposed. The prior probability is modelled as a multivariate kernel density estimator (KDE), effectively modelling the prior probability nonparametrically, with the aim of reducing the effects of artifacts arising from inconsistencies in projection measurements in low-count regimes where projections are dominated by noise. The proposed prior spatially and temporally limits the variation of time-activity functions (TAFs) and "attracts" similar TAFs together. The similarity between TAFs is determined by the spatial and range scaling parameters of the KDE-like prior. The resulting iterative image reconstruction method is evaluated using two simulated phantoms, namely the XCAT heart phantom and a simulated Mini-Deluxe Phantom™. The phantoms were chosen to observe the effects of the proposed prior on the TAFs based on the vicinity and abutments of regions with different activities. Our results show the effectiveness of the proposed iterative reconstruction method, especially in low-count regimes, which provides better uniformity within each region of activity, significant reduction of spatio-temporal variations caused by noise, and sharper separation between different regions of activity than expectation maximization and a MAP method employing a more "traditional" Gibbs prior.

  11. A posteriori error estimation of h-p finite element approximations of frictional contact problems

    NASA Astrophysics Data System (ADS)

    Lee, C. Y.; Oden, J. T.

    1994-03-01

    Dynamic and static frictional contact problems are described using the normal compliance law on the contact boundary. Dynamic problems are recast into quasistatic problems by time discretization. An a posteriori error estimator is developed for the nonlinear elliptic equations of the corresponding static or quasistatic problems. The a posteriori error estimator is applied to a frictionless case and extended to frictional contact problems. An adaptive strategy is introduced, and h-p finite element meshes are obtained through a procedure based on a priori and a posteriori error estimations. Numerical examples are given to support the theoretical results.

  12. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
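    A generic solve-estimate-adapt loop of the kind described above can be sketched as follows; the solver, error estimator, and refinement routine are problem-specific placeholders.

```python
def hp_adaptive_solve(mesh, solve, estimate_error, adapt, tol, max_iter=10):
    """Generic solve -> estimate -> adapt loop.

    solve(mesh)               -> approximate solution u_h
    estimate_error(u_h, mesh) -> (global a posteriori estimate, per-element indicators)
    adapt(mesh, indicators)   -> new mesh with h-refinement and/or p-enrichment
    All three callables are problem-specific placeholders."""
    for _ in range(max_iter):
        u_h = solve(mesh)
        eta, indicators = estimate_error(u_h, mesh)
        if eta <= tol:                      # a posteriori estimate certifies the accuracy
            break
        mesh = adapt(mesh, indicators)      # a priori estimate guides where/how to enrich
    return u_h, mesh, eta
```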

  13. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
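    The core adjoint-weighted-residual identity can be illustrated in a plain linear-algebra setting, where it is exact; the sketch below is only an analogue of the finite volume and adjoint machinery described in the abstract, with a random matrix standing in for the discretized operator.

```python
import numpy as np

# Adjoint error estimation in a linear-algebra setting: for A u = f and a functional
# J(u) = g^T u, the error in J satisfies J(u) - J(u_h) = w^T (f - A u_h), where the
# adjoint problem is A^T w = g. This mirrors the separation, described above, between
# the forward discretization and an independently computed adjoint.
rng = np.random.default_rng(3)
n = 30
A = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # stand-in for a discretized operator
f = rng.normal(size=n)
g = rng.normal(size=n)

u = np.linalg.solve(A, f)                       # reference forward solution
u_h = u + 0.01 * rng.normal(size=n)             # stand-in for an approximate solution
w = np.linalg.solve(A.T, g)                     # adjoint solution

true_err = g @ (u - u_h)
est_err = w @ (f - A @ u_h)                     # computable adjoint-weighted residual
print(true_err, est_err)                        # identical (up to round-off) in the linear case
```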

  14. A Feedback Finite Element Method with a Posteriori Error Estimation. Part 1. The Finite Element Method and Some Basic Properties of the A Posteriori Error Estimator.

    DTIC Science & Technology

    1984-10-01

    Mesztenyi, W. Szymczak, FEARS User’s Manual for Univac 1100. Tech. Note BN-991, Institute for Physical Science and Technology, University of Maryland...W. Szymczak, FEARS Details of Mathematical Formulation, Tech. Note BN-994, Institute for Physical Science and Technology, University of Maryland

  15. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-07-06

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in

  16. Analysis of the Efficiency of an A-Posteriori Error Estimator for Linear Triangular Finite Elements

    DTIC Science & Technology

    1991-06-01

    Release 1.0, NOETIC Tech. Corp., St. Louis, Missouri, 1985. [28] R. VERFURTH, FEMFLOW-user guide. Version 1, Report, Universität Zürich, 1989. [29] R... study and research for foreign students in numerical mathematics who are supported by foreign governments or exchange agencies (Fulbright, etc

  17. A Posteriori Error Estimation of Adaptive Finite Difference Schemes for Hyperbolic Systems

    DTIC Science & Technology

    1988-06-01

    scheme have been studied by Ciment (ref 24), Fritts (ref 25), Hoffman (ref 26), Osher and Sanders (ref 27), Sanders (ref 28), and Mastin (ref 29...Methods for Partial Differential Equations, SIAM, Philadelphia, 1983. 24. Ciment, M., "Stable Difference Schemes With Uneven Mesh Spacings," Math. Comp

  18. Efficient computation of the maximum a posteriori path and parameter estimation in integrate-and-fire and more general state-space models.

    PubMed

    Koyama, Shinsuke; Paninski, Liam

    2010-08-01

    A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton-Raphson methods, because the Hessian of the loglikelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
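    The sketch below illustrates the scalar-state special case of this idea with a random-walk prior and Poisson count observations standing in for the integrate-and-fire observation model; because the Hessian is tridiagonal, each Newton step is a sparse banded solve whose cost grows linearly with the number of time bins. The model and parameters are placeholders, not the paper's.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

def map_path(y, dt, q=0.05, n_iter=20):
    """Newton search for the MAP path of a scalar log-rate state with a random-walk
    prior and Poisson count observations (a simplified stand-in for the state-space
    models in the abstract). The Hessian is tridiagonal, so each step costs O(T)."""
    T = len(y)
    main = np.full(T, 2.0 / q)
    main[0] = main[-1] = 1.0 / q
    off = np.full(T - 1, -1.0 / q)
    Q = diags([off, main, off], [-1, 0, 1], format="csc")   # random-walk prior precision
    x = np.zeros(T)
    for _ in range(n_iter):
        lam = np.exp(x) * dt
        grad = (lam - y) + Q @ x                 # gradient of the negative log posterior
        hess = diags(lam, 0, format="csc") + Q   # tridiagonal Hessian
        x = x - spsolve(hess, grad)              # Newton-Raphson step
    return x

rng = np.random.default_rng(4)
x_true = np.cumsum(0.1 * rng.normal(size=200))   # latent log-rate path
y = rng.poisson(np.exp(x_true) * 0.5)            # toy spike counts per bin
x_map = map_path(y, dt=0.5)
print(np.corrcoef(x_true, x_map)[0, 1])          # MAP path tracks the latent path
```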

  19. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Problem benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
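    A minimal plug-in version of the first (averaging-based) idea is sketched below: ensemble posterior estimates are averaged and the Bayes error is estimated as the mean of one minus the largest averaged posterior. The two-class Gaussian toy data and noise model are invented, and this is not the article's exact combiner.

```python
import numpy as np

def plug_in_bayes_error(posteriors_per_classifier):
    """Plug-in Bayes-error estimate from ensemble-averaged posterior estimates.

    posteriors_per_classifier: array (n_classifiers, n_samples, n_classes) holding each
    classifier's estimated a posteriori class probabilities on a common sample drawn
    from the input distribution. Averaging first reduces the variance of the individual
    estimates; the plug-in error is then the mean of (1 - max_k p_k)."""
    p_bar = posteriors_per_classifier.mean(axis=0)        # ensemble-averaged posteriors
    return np.mean(1.0 - p_bar.max(axis=1))

# Toy check on a two-class Gaussian problem whose true Bayes error is about 0.159.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-1, 1, 5000), rng.normal(1, 1, 5000)])
true_p1 = 1.0 / (1.0 + np.exp(-2.0 * x))                  # exact P(class 1 | x)
noisy = [np.clip(true_p1 + 0.05 * rng.normal(size=x.size), 0.0, 1.0) for _ in range(10)]
posteriors = np.stack([np.stack([1.0 - p, p], axis=1) for p in noisy])
print(plug_in_bayes_error(posteriors))
```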

  20. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  1. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  2. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  3. Bayesian Error Estimation Functionals

    NASA Astrophysics Data System (ADS)

    Jacobsen, Karsten W.

    The challenge of approximating the exchange-correlation functional in Density Functional Theory (DFT) has led to the development of numerous different approximations of varying accuracy on different calculated properties. There is therefore a need for reliable estimation of prediction errors within the different approximation schemes to DFT. The Bayesian Error Estimation Functionals (BEEF) have been developed with this in mind. The functionals are constructed by fitting to experimental and high-quality computational databases for molecules and solids including chemisorption and van der Waals systems. This leads to reasonably accurate general-purpose functionals with particular focus on surface science. The fitting procedure involves considerations on how to combine different types of data, and applies Tikhonov regularization and bootstrap cross validation. The methodology has been applied to construct GGA and metaGGA functionals with and without inclusion of long-ranged van der Waals contributions. The error estimation is made possible by constructing not just a single functional but a probability distribution of functionals, represented by a functional ensemble. The use of the functional ensemble is illustrated on compound heats of formation and by investigations of the reliability of calculated catalytic ammonia synthesis rates.

  4. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  5. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  6. Stress Recovery and Error Estimation for Shell Structures

    NASA Technical Reports Server (NTRS)

    Yazdani, A. A.; Riggs, H. R.; Tessler, A.

    2000-01-01

    The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

  8. A posteriori correction for source decay in 3D bioluminescent source localization using multiview measured data

    NASA Astrophysics Data System (ADS)

    Sun, Li; Wang, Pu; Tian, Jie; Liu, Dan; Wang, Ruifang

    2009-02-01

    As a novel optical molecular imaging technique, bioluminescence tomography (BLT) can be used to monitor biological activities non-invasively at the cellular and molecular levels. In most known BLT studies, however, the time variation of the bioluminescent source is neglected. This gives rise to inconsistent views during the multiview continuous-wave measurement; in other words, the measured data from different views effectively come from 'different' bioluminescent sources, which can introduce large errors into the bioluminescence reconstruction. In this paper, an a posteriori correction strategy for adaptive FEM-based reconstruction is proposed and developed. The method helps to improve the source localization by accounting for the variation of the bioluminescent energy during the multiview measurement. In the method, the boundary signals are corrected by means of an a posteriori correction strategy, which adopts the energy ratio of the measured data in the overlapping domains between adjacent measurements as the correcting factor, thereby eliminating the effect of the inconsistent views. Adaptive mesh refinement with a posteriori error estimation then helps to improve the precision and efficiency of the BLT reconstruction. In addition, a priori selection of a permissible source region based on the surface measured data further reduces the ill-posedness of BLT and enhances numerical stability. Finally, three-dimensional numerical simulations using a heterogeneous phantom are performed. The numerically measured data are generated by the Monte Carlo (MC) method, which is regarded as the gold standard and avoids the inverse crime. The reconstructed result with correction is more accurate than that without correction.

  9. Error Estimates for Mixed Methods.

    DTIC Science & Technology

    1979-03-01

    This paper presents abstract error estimates for mixed methods for the approximate solution of elliptic boundary value problems. These estimates are...then applied to obtain quasi-optimal error estimates in the usual Sobolev norms for four examples: three mixed methods for the biharmonic problem and a mixed method for 2nd order elliptic problems. (Author)

  10. Mars gravitational field estimation error

    NASA Technical Reports Server (NTRS)

    Compton, H. R.; Daniels, E. F.

    1972-01-01

    The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G_M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.

  11. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We

  12. Improved bit error rate estimation over experimental optical wireless channels

    NASA Astrophysics Data System (ADS)

    El Tabach, Mamdouh; Saoudi, Samir; Tortelier, Patrick; Bouchet, Olivier; Pyndiah, Ramesh

    2009-02-01

    As a part of the EU-FP7 R&D programme, the OMEGA project (hOME Gigabit Access) aims at bridging the gap between wireless terminals and the wired backbone network in homes, providing high bit rate connectivity to users. Besides radio frequencies, the wireless links will use Optical Wireless (OW) communications. To guarantee high performance and quality of service in real time, our system needs techniques to approximate the Bit Error Probability (BEP) with a reasonable training sequence. Traditionally, the BEP is approximated by the Bit Error Rate (BER) measured by counting the number of errors within a given sequence of bits. For small BERs, the required sequences are huge and may prevent real-time estimation. In this paper, methods to estimate the BER using Probability Density Function (PDF) estimation are presented. Two a posteriori techniques, based on the Parzen estimator or a constrained Gram-Charlier series expansion, are adapted and applied to OW communications. Aided by simulations, a comparison is made over experimental optical channels. We show that, for different scenarios, such as optical multipath distortion or a well-designed Code Division Multiple Access (CDMA) system, this approach outperforms the counting method and yields better results with a relatively small training sequence.
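    The sketch below illustrates the PDF-based idea with a Parzen (Gaussian-kernel) estimator: the conditional densities of a toy soft receiver output are estimated for transmitted zeros and ones, and the error probability is the mass of each density on the wrong side of the decision threshold. The signal model, noise level, and threshold are placeholders, not the OMEGA system.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(6)
n_train = 2000                                    # modest training sequence
bits = rng.integers(0, 2, n_train)
soft = 2.0 * bits - 1.0 + 0.6 * rng.normal(size=n_train)   # toy soft receiver output

# Parzen (Gaussian-kernel) estimates of p(soft output | transmitted bit).
kde0 = gaussian_kde(soft[bits == 0])
kde1 = gaussian_kde(soft[bits == 1])

# BEP estimate: mass of each conditional PDF on the wrong side of the 0 threshold.
p0_wrong = kde0.integrate_box_1d(0.0, np.inf)
p1_wrong = kde1.integrate_box_1d(-np.inf, 0.0)
print(0.5 * (p0_wrong + p1_wrong), norm.sf(1.0 / 0.6))   # estimate vs. exact value for this toy channel
```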

  13. Rigorous a posteriori assessment of accuracy in EMG decomposition.

    PubMed

    McGill, Kevin C; Marateb, Hamid R

    2011-02-01

    If electromyography (EMG) decomposition is to be a useful tool for scientific investigation, it is essential to know that the results are accurate. Because of background noise, waveform variability, motor-unit action potential (MUAP) indistinguishability, and perplexing superpositions, accuracy assessment is not straightforward. This paper presents a rigorous statistical method for assessing decomposition accuracy based only on evidence from the signal itself. The method uses statistical decision theory in a Bayesian framework to integrate all the shape- and firing-time-related information in the signal to compute an objective a posteriori measure of confidence in the accuracy of each discharge in the decomposition. The assessment is based on the estimated statistical properties of the MUAPs and noise and takes into account the relative likelihood of every other possible decomposition. The method was tested on 3 pairs of real EMG signals containing 4-7 active MUAP trains per signal that had been decomposed by a human expert. It rated 97% of the identified MUAP discharges as accurate to within ± 0.5 ms with a confidence level of 99%, and detected six decomposition errors. Cross-checking between signal pairs verified all but two of these assertions. These results demonstrate that the approach is reliable and practical for real EMG signals.

  14. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  15. Least Absolute Relative Error Estimation.

    PubMed

    Chen, Kani; Guo, Shaojun; Lin, Yuanyuan; Ying, Zhiliang

    2010-01-01

    The multiplicative regression model, or accelerated failure time model, which becomes a linear regression model after logarithmic transformation, is useful in analyzing data with positive responses, such as stock prices or lifetimes, that are particularly common in economic/financial or biomedical studies. Least squares and least absolute deviation are among the most widely used criteria in statistical estimation for the linear regression model. However, in many practical applications, for example when treating stock price data, the size of the relative error, rather than that of the error itself, is the central concern of practitioners. This paper offers an alternative to the traditional estimation methods by minimizing the least absolute relative errors for multiplicative regression models. We prove consistency and asymptotic normality and provide an inference approach via random weighting. We also specify the error distribution under which the proposed least absolute relative error estimation is efficient. Supportive evidence is shown in simulation studies. Application is illustrated in an analysis of stock returns on the Hong Kong Stock Exchange.
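    A schematic fit under a symmetric least-absolute-relative-error criterion for a multiplicative model is sketched below; the exact criterion, its asymptotics, and the random-weighting inference of the paper are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

def lare_fit(X, y, beta0):
    """Fit a multiplicative model y = exp(X beta) * error by minimizing a symmetric
    least-absolute-relative-error criterion (schematic form; see the paper for the
    exact criterion and its asymptotic theory)."""
    def loss(beta):
        yhat = np.exp(X @ beta)
        return np.sum(np.abs((y - yhat) / y) + np.abs((y - yhat) / yhat))
    return minimize(loss, beta0, method="Nelder-Mead").x

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(300), rng.normal(size=300)])
beta_true = np.array([0.5, 1.2])
y = np.exp(X @ beta_true) * np.exp(0.2 * rng.normal(size=300))   # positive responses
print(lare_fit(X, y, beta0=np.zeros(2)))                          # close to (0.5, 1.2)
```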

  16. Time Required to Compute A Posteriori Probabilities,

    DTIC Science & Technology

    The paper discusses the time required to compute a posteriori probabilities using Bayes’ Theorem. In a two-hypothesis example it is shown that, to... Bayes’ Theorem as the group operation. Winograd’s results concerning the lower bound on the time required to perform a group operation on a finite group using logical circuitry are therefore applicable. (Author)

  17. [ETHICAL PRINCIPLES AND A POSTERIORI JUSTIFICATIONS].

    PubMed

    Heintz, Monica

    2015-12-01

    It is difficult to conceive that the human being, while being the same everywhere, could be cared for in such different ways in other societies. Anthropologists acknowledge that the diversity of cultures implies a diversity of moral values, and thus that in a multicultural society the individual can draw upon different moral frames to justify the peculiarities of his/her demand for care. But how can we determine which moral frame catalyzes behaviour when all we can record are a posteriori justifications of actions? In most multicultural societies where several moral frames coexist, there is an implicit hierarchy between ethical systems, derived from a hierarchy of power, which falsifies these a posteriori justifications. Moreover, anthropologists often fail to acknowledge that individual behaviour does not always reflect individual values, but is more often the result of negotiations between the moral frames available in society and the individual's own desires and personal experience. This is certainly due to the difficulty of accounting for a dynamic and complex interplay of moral values that cannot be analysed as a system. The impact of individual experience on the way individuals give or receive care may also be only weakly linked to a moral system, even when this reference comes up explicitly in the a posteriori justifications.

  18. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  19. An a posteriori-driven adaptive Mixed High-Order method with application to electrostatics

    NASA Astrophysics Data System (ADS)

    Di Pietro, Daniele A.; Specogna, Ruben

    2016-12-01

    In this work we propose an adaptive version of the recently introduced Mixed High-Order method and showcase its performance on a comprehensive set of academic and industrial problems in computational electromagnetism. The latter include, in particular, the numerical modeling of comb-drive and MEMS devices. Mesh adaptation is driven by newly derived, residual-based error estimators. The resulting method has several advantageous features: It supports fairly general meshes, it enables arbitrary approximation orders, and has a moderate computational cost thanks to hybridization and static condensation. The a posteriori-driven mesh refinement is shown to significantly enhance the performance on problems featuring singular solutions, allowing to fully exploit the high-order of approximation.

  20. A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Dambach, M.

    1998-01-01

    A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, obtained by explicitly enforcing a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.

  1. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.

  2. Extensions of Goal-Oriented Error Estimation Methods to Simulations of Highly-Nonlinear Response of Shock-Loaded Elastomer-Reinforced Structures

    DTIC Science & Technology

    2005-06-27

    admissible solutions, here a Banach space with norm ‖·‖_V. Of interest is the value of a functional Q : V → R at solutions u to (1); the quantity of...Texas at Austin, Austin, Texas 78712. Abstract: This paper describes extensions of goal-oriented methods for a posteriori error estimation and control of...dynamic, large-deformation response of structures composed of strain-rate-sensitive elastomers and elastoplastic materials is developed. To apply the

  3. Frequentist Standard Errors of Bayes Estimators.

    PubMed

    Lee, DongHyuk; Carroll, Raymond J; Sinha, Samiran

    2017-09-01

    Frequentist standard errors are a measure of uncertainty of an estimator, and the basis for statistical inferences. Frequentist standard errors can also be derived for Bayes estimators. However, except in special cases, the computation of the standard error of Bayesian estimators requires bootstrapping, which in combination with Markov chain Monte Carlo (MCMC) can be highly time consuming. We discuss an alternative approach for computing frequentist standard errors of Bayesian estimators, including importance sampling. Through several numerical examples we show that our approach can be much more computationally efficient than the standard bootstrap.

  4. Maximum a posteriori CMB lensing reconstruction

    NASA Astrophysics Data System (ADS)

    Carron, Julien; Lewis, Antony

    2017-09-01

    Gravitational lensing of the cosmic microwave background (CMB) is a valuable cosmological signal that correlates with tracers of large-scale structure and acts as an important source of confusion for primordial B-mode polarization. State-of-the-art lensing reconstruction analyses use quadratic estimators, which are easily applicable to data. However, these estimators are known to be suboptimal, in particular for polarization, and large improvements are expected to be possible for high signal-to-noise polarization experiments. We develop a method and numerical code, lensit, that is able to find efficiently the most probable lensing map, introducing no significant approximations to the lensed CMB likelihood, and applicable to beamed and masked data with inhomogeneous noise. It works by iteratively reconstructing the primordial unlensed CMB using a deflection estimate and its inverse, and removing residual lensing from these maps with quadratic estimator techniques. Roughly linear computational cost is maintained due to fast convergence of iterative searches, combined with the local nature of lensing. The method achieves the maximal improvement in signal to noise expected from analytical considerations on the unmasked parts of the sky. Delensing with this optimal map leads to forecast tensor-to-scalar ratio parameter errors improved by a factor ≃2 compared to the quadratic estimator in a CMB stage IV configuration.

  5. A-Posteriori Error Estimates for Mixed Finite Element and Finite Volume Methods for Problems Coupled Through a Boundary with Non-Matching Grids

    DTIC Science & Technology

    2013-08-01

    ...(∆_{x,i}) ⊗ M^0_{-1}(∆_{y,i})] × [M^0_{-1}(∆_{x,i}) ⊗ M^1_0(∆_{y,i})], i = L, R, Λ^h = M^1_{-1}(∆_{Γ_I}). The mixed finite element (mortar) method reads: compute p^h_i ∈ W^h_i, u^h_i ∈ V^h_i ... (2.7), where we abuse notation to let u^h_i, p^h_i, and ξ^h denote the vectors of nodal values for the finite element ... p^h_i ∈ W^h_i, u^h_i ∈ V^h_i, ξ^h ∈ Λ^h, i = L, R, satisfying (a^{-1}_L u^h_L, v_L)_{M,T} − (p^h_L, ∇·v_L) + ⟨P_{R→L}(p^h_R), n·v_L⟩_{Γ_I} = −⟨g_L, n·v_L⟩_{Γ_{L,M}}, (∇·u^h_L, w_L) = (f_L

  6. Comparison of minimum-norm maximum likelihood and maximum a posteriori wavefront reconstructions for large adaptive optics systems.

    PubMed

    Béchet, Clémentine; Tallon, Michel; Thiébaut, Eric

    2009-03-01

    The performance of various estimators for wavefront sensing applications such as adaptive optics (AO) is compared. Analytical expressions for the bias and variance terms in the mean squared error (MSE) are derived for the minimum-norm maximum likelihood (MNML) and the maximum a posteriori (MAP) reconstructors. The MAP estimator is analytically demonstrated to yield an optimal trade-off that reduces the MSE, hence leading to a better Strehl ratio. The implications for AO applications are quantified by simulations on 8-m- and 42-m-class telescopes. We show that the MAP estimator can achieve an MSE twice as low as that of MNML methods. Large AO systems can thus benefit from the high quality of MAP reconstruction in O(n) operations, thanks to the fast fractal iterative method (FrIM) algorithm (Thiébaut and Tallon, submitted to J. Opt. Soc. Am. A).

  7. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
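
    The propagation idea can be illustrated with a short Monte Carlo sketch: perturb measured wind speeds by an assumed 10% error and push both the nominal and perturbed speeds through a turbine power curve. The power curve, Weibull wind statistics, and error model below are placeholders, not the paper's 28 Lagrange-fitted curves.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def power_curve(v, rated_power=2000.0, cut_in=3.0, rated_speed=12.0, cut_out=25.0):
        """Illustrative turbine power curve in kW (not one of the paper's fitted curves)."""
        p = np.where((v >= cut_in) & (v < rated_speed),
                     rated_power * ((v - cut_in) / (rated_speed - cut_in)) ** 3, 0.0)
        return np.where((v >= rated_speed) & (v < cut_out), rated_power, p)

    v_measured = rng.weibull(2.0, size=10_000) * 8.0   # synthetic measured wind speeds (m/s)
    speed_error = 0.10                                 # assumed 10% wind speed measurement error

    # Monte Carlo propagation: perturb the speeds and push them through the curve
    v_perturbed = np.clip(v_measured * (1.0 + speed_error * rng.standard_normal(v_measured.size)),
                          0.0, None)
    p_nominal = power_curve(v_measured).mean()
    p_perturbed = power_curve(v_perturbed).mean()
    print(f"mean power {p_nominal:.0f} kW, relative change under a 10% speed error: "
          f"{abs(p_perturbed - p_nominal) / p_nominal:.1%}")
    ```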

  8. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  9. Systematic Error Modeling and Bias Estimation

    PubMed Central

    Zhang, Feihu; Knoll, Alois

    2016-01-01

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386

  10. Systematic Error Modeling and Bias Estimation.

    PubMed

    Zhang, Feihu; Knoll, Alois

    2016-05-19

    This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation.
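
    A minimal sketch of the weighted nonlinear least squares step is given below. The range/bearing bias model, noise levels, and data are hypothetical placeholders rather than the paper's transformation-process models; scipy's least_squares performs the weighted fit.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)

    # Hypothetical measurements: range and bearing corrupted by constant biases plus noise
    true_range = rng.uniform(50.0, 500.0, size=200)
    true_bearing = rng.uniform(-np.pi, np.pi, size=200)
    sigma_r, sigma_b = 2.0, 0.01
    obs_range = true_range + 3.0 + sigma_r * rng.standard_normal(200)       # bias +3 m
    obs_bearing = true_bearing - 0.02 + sigma_b * rng.standard_normal(200)  # bias -0.02 rad

    def residuals(biases):
        bias_r, bias_b = biases
        # Each residual is weighted by the inverse of its measurement standard deviation
        return np.concatenate([(obs_range - true_range - bias_r) / sigma_r,
                               (obs_bearing - true_bearing - bias_b) / sigma_b])

    fit = least_squares(residuals, x0=[0.0, 0.0])
    print("estimated range/bearing biases:", fit.x)
    ```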

  11. Error Estimates for Numerical Integration Rules

    ERIC Educational Resources Information Center

    Mercer, Peter R.

    2005-01-01

    The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.
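
    The record above concerns sharper bounds for integrals arising in Fourier series; as generic background, the sketch below shows the common a posteriori error estimate for a second-order integration rule obtained by comparing two refinement levels (a Richardson-style estimate), not the article's specific bound.

    ```python
    import numpy as np

    def trapezoid(f, a, b, n):
        """Composite trapezoidal rule with n subintervals."""
        x = np.linspace(a, b, n + 1)
        y = f(x)
        h = (b - a) / n
        return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

    coarse = trapezoid(np.sin, 0.0, np.pi, 64)
    fine = trapezoid(np.sin, 0.0, np.pi, 128)

    # For a second-order rule, halving h cuts the error by ~4, so the level
    # difference divided by 3 estimates the error of the finer result.
    error_estimate = abs(fine - coarse) / 3.0
    true_error = abs(fine - 2.0)   # the exact integral of sin on [0, pi] is 2
    print(f"estimated error {error_estimate:.2e}, true error {true_error:.2e}")
    ```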

  12. Comparative assessment of four a-posteriori uncertainty quantification methods for PIV data

    NASA Astrophysics Data System (ADS)

    Vlachos, Pavlos; Sciacchitano, Andrea; Neal, Douglas; Smith, Barton; Warner, Scott

    2014-11-01

    Particle Image Velocimetry (PIV) is a well-established technique for the measurement of the flow velocity in a two- or three-dimensional domain. As in any other technique, PIV data are affected by measurement errors, defined as the difference between the measured velocity and its actual value, which is unknown. The objective of uncertainty quantification is to estimate an interval that contains the (unknown) actual error magnitude with a certain probability. In the present work, four methods for the a-posteriori uncertainty quantification of PIV data are assessed. The methods are: the uncertainty surface method (Timmins et al., 2012), the particle disparity approach (Sciacchitano et al., 2013), the peak ratio approach (Charonko and Vlachos, 2013) and the correlation statistics method (Wieneke 2014). For the assessment, a dedicated experimental database of a rectangular jet flow has been produced (Neal et al., 2014) where a reference velocity is known with a high degree of confidence. The comparative assessment has shown strengths and weaknesses of the four uncertainty quantification methods under different flow fields and imaging conditions.

  13. Ability Estimation for Conventional Tests.

    ERIC Educational Resources Information Center

    Kim, Jwa K.; Nicewander, W. Alan

    1993-01-01

    Bias, standard error, and reliability of five ability estimators were evaluated using Monte Carlo estimates of the unknown conditional means and variances of the estimators. Results indicate that estimates based on Bayesian modal, expected a posteriori, and weighted likelihood estimators were reasonably unbiased with relatively small standard…

  14. Error estimation in the direct state tomography

    NASA Astrophysics Data System (ADS)

    Sainz, I.; Klimov, A. B.

    2016-10-01

    We show that reformulating the Direct State Tomography (DST) protocol in terms of projections into a set of non-orthogonal bases one can perform an accuracy analysis of DST in a similar way as in the standard projection-based reconstruction schemes, i.e., in terms of the Hilbert-Schmidt distance between estimated and true states. This allows us to determine the estimation error for any measurement strength, including the weak measurement case, and to obtain an explicit analytic form for the average minimum square errors.

  15. On GPS Water Vapour estimation and related errors

    NASA Astrophysics Data System (ADS)

    Antonini, Andrea; Ortolani, Alberto; Rovai, Luca; Benedetti, Riccardo; Melani, Samantha

    2010-05-01

    … costless and practically continuous (every second) with respect to the atmospheric dynamics. The spatial resolution is related to the number and spacing (i.e. density) of fixed ground stations and can in principle be very high (and is certainly increasing). Problems can arise from the errors made in decoupling the various delay components and from the approximations assumed when computing the IWV from the wet delay component. Such errors are often "masked" by the use of the available software packages for GPS data processing; as a consequence, the errors associated with the final WV products are more often obtained from a posteriori validation processes than derived from rigorous error propagation analyses. In this work we present a technique to compute the different components necessary to retrieve WV measurements from the GPS signal, with a critical analysis of all approximations and errors made in the processing procedure, also in view of the great opportunity that the European GALILEO system will bring to this field.

  16. Pointwise error estimates in localization microscopy

    NASA Astrophysics Data System (ADS)

    Lindén, Martin; Ćurić, Vladimir; Amselem, Elias; Elf, Johan

    2017-05-01

    Pointwise localization of individual fluorophores is a critical step in super-resolution localization microscopy and single particle tracking. Although the methods are limited by the localization errors of individual fluorophores, the pointwise localization precision has so far been estimated using theoretical best case approximations that disregard, for example, motion blur, defocus effects and variations in fluorescence intensity. Here, we show that pointwise localization precision can be accurately estimated directly from imaging data using the Bayesian posterior density constrained by simple microscope properties. We further demonstrate that the estimated localization precision can be used to improve downstream quantitative analysis, such as estimation of diffusion constants and detection of changes in molecular motion patterns. Finally, the quality of actual point localizations in live cell super-resolution microscopy can be improved beyond the information theoretic lower bound for localization errors in individual images, by modelling the movement of fluorophores and accounting for their pointwise localization uncertainty.

  17. Coping with dating errors in causality estimation

    NASA Astrophysics Data System (ADS)

    Smirnov, D. A.; Marwan, N.; Breitenbach, S. F. M.; Lechleitner, F.; Kurths, J.

    2017-01-01

    We consider the problem of estimating causal influences between observed processes from time series possibly corrupted by errors in the time variable (dating errors), which are typical in palaeoclimatology, planetary science and astrophysics. A "causality ratio" based on Wiener-Granger causality is proposed and studied for a paradigmatic class of model systems to reveal conditions under which it correctly indicates the directionality of unidirectional coupling. It is argued that in the case of a priori known directionality, the causality ratio allows a characterization of dating errors and observational noise. Finally, we apply the developed approach to palaeoclimatic data and quantify the influence of solar activity on tropical Atlantic climate dynamics over the last two millennia. A stronger solar influence in the first millennium A.D. is inferred. The results also suggest a dating error of about 20 years in the solar proxy time series over the same period.

  18. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    NASA Astrophysics Data System (ADS)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.

  19. An estimation error bound for pixelated sensing

    NASA Astrophysics Data System (ADS)

    Kreucher, Chris; Bell, Kristine

    2016-05-01

    This paper considers the ubiquitous problem of estimating the state (e.g., position) of an object based on a series of noisy measurements. The standard approach is to formulate this problem as one of measuring the state (or a function of the state) corrupted by additive Gaussian noise. This model assumes both (i) the sensor provides a measurement of the true target (or, alternatively, a separate signal processing step has eliminated false alarms), and (ii) the error source in the measurement is accurately described by a Gaussian model. In reality, however, sensor measurements are often formed on a grid of pixels - e.g., Ground Moving Target Indication (GMTI) measurements are formed for a discrete set of (angle, range, velocity) voxels, and EO imagery is made on (x, y) grids. When a target is present in a pixel, therefore, uncertainty is not Gaussian (instead it is a boxcar function) and unbiased estimation is not generally possible, as the location of the target within the pixel defines the bias of the estimator. It turns out that this small modification to the measurement model makes traditional bounding approaches not applicable. This paper discusses pixelated sensing in more detail and derives the minimum mean squared error (MMSE) bound for estimation in the pixelated scenario. We then use this error calculation to investigate the utility of using non-thresholded measurements.
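
    The boxcar point can be checked numerically: if only the pixel index is observed and the target is equally likely to be anywhere within the pixel, the pixel-centre estimate has mean squared error w^2/12. The Monte Carlo sketch below verifies that single fact; it is not the paper's bound derivation.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    pixel_width = 1.0

    # True positions are uniform; the sensor reports only the pixel index
    true_pos = rng.uniform(0.0, 100.0, size=100_000)
    pixel_index = np.floor(true_pos / pixel_width)

    # With no other information, estimate each position by its pixel centre
    estimate = (pixel_index + 0.5) * pixel_width
    mse = np.mean((estimate - true_pos) ** 2)
    print(f"empirical MSE {mse:.4f}, boxcar variance w^2/12 = {pixel_width**2 / 12:.4f}")
    ```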

  20. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.

  1. Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection

    PubMed Central

    Lian, Feng; Zhang, Guang-Hua; Duan, Zhan-Sheng; Han, Chong-Zhao

    2016-01-01

    The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities. PMID:26828499

  2. Multi-Target Joint Detection and Estimation Error Bound for the Sensor with Clutter and Missed Detection.

    PubMed

    Lian, Feng; Zhang, Guang-Hua; Duan, Zhan-Sheng; Han, Chong-Zhao

    2016-01-28

    The error bound is a typical measure of the limiting performance of all filters for the given sensor measurement setting. This is of practical importance in guiding the design and management of sensors to improve target tracking performance. Within the random finite set (RFS) framework, an error bound for joint detection and estimation (JDE) of multiple targets using a single sensor with clutter and missed detection is developed by using multi-Bernoulli or Poisson approximation to multi-target Bayes recursion. Here, JDE refers to jointly estimating the number and states of targets from a sequence of sensor measurements. In order to obtain the results of this paper, all detectors and estimators are restricted to maximum a posteriori (MAP) detectors and unbiased estimators, and the second-order optimal sub-pattern assignment (OSPA) distance is used to measure the error metric between the true and estimated state sets. The simulation results show that clutter density and detection probability have significant impact on the error bound, and the effectiveness of the proposed bound is verified by indicating the performance limitations of the single-sensor probability hypothesis density (PHD) and cardinalized PHD (CPHD) filters for various clutter densities and detection probabilities.
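
    The error metric used in these two records, the second-order optimal sub-pattern assignment (OSPA) distance, can be computed directly from its standard definition (Schuhmacher et al.). The sketch below follows that definition using scipy's assignment solver; the cut-off c, order p, and example sets are arbitrary illustrative choices.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def ospa(X, Y, c=10.0, p=2):
        """OSPA distance of order p with cut-off c between two finite sets of state vectors."""
        m, n = len(X), len(Y)
        if m == 0 and n == 0:
            return 0.0
        if m == 0 or n == 0:
            return c
        if m > n:                              # ensure m <= n
            X, Y, m, n = Y, X, n, m
        d = np.minimum(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1), c)
        row, col = linear_sum_assignment(d ** p)             # optimal sub-pattern assignment
        cost = (d[row, col] ** p).sum() + (n - m) * c ** p   # localisation + cardinality terms
        return (cost / n) ** (1.0 / p)

    truth = np.array([[0.0, 0.0], [5.0, 5.0]])
    estimate = np.array([[0.2, -0.1], [4.5, 5.3], [20.0, 20.0]])   # one spurious estimate
    print(f"OSPA distance: {ospa(truth, estimate):.3f}")
    ```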

  3. Hierarchical Boltzmann simulations and model error estimation

    NASA Astrophysics Data System (ADS)

    Torrilhon, Manuel; Sarna, Neeraj

    2017-08-01

    A hierarchical simulation approach for Boltzmann's equation should provide a single numerical framework in which a coarse representation can be used to compute gas flows as accurately and efficiently as in computational fluid dynamics, but a subsequent refinement allows the result to be successively improved towards the complete Boltzmann result. We use Hermite discretization, or moment equations, for the steady linearized Boltzmann equation for a proof-of-concept of such a framework. All representations of the hierarchy are rotationally invariant and the numerical method is formulated on fully unstructured triangular and quadrilateral meshes using an implicit discontinuous Galerkin formulation. We demonstrate the performance of the numerical method on model problems which in particular highlight the relevance of stability of boundary conditions on curved domains. The hierarchical nature of the method also allows model error estimates to be provided by comparing subsequent representations. We present various model errors for a flow through a curved channel with obstacles.

  4. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and a demonstration is made with measurements from the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  5. Factoring Algebraic Error for Relative Pose Estimation

    SciTech Connect

    Lindstrom, P; Duchaineau, M

    2009-03-09

    We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can be determined directly, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.

  6. Error estimation and structural shape optimization

    NASA Astrophysics Data System (ADS)

    Song, Xiaoguang

    This work is concerned with three topics: error estimation, data smoothing process and the structural shape optimization design and analysis. In particular, the superconvergent stress recovery technique, the dual kriging B-spline curve and surface fittings, the development and the implementation of a novel node-based numerical shape optimization package are addressed. Concept and new technique of accurate stress recovery are developed and applied in finding the lateral buckling parameters of plate structures. Some useful conclusions are made for the finite element Reissner-Mindlin plate solutions. The powerful dual kriging B-spline fitting technique is reviewed and a set of new compact formulations are developed. This data smoothing method is then applied in accurately recovering curves and surfaces. The new node-based shape optimization method is based on the consideration that the critical stress and displacement constraints are generally located along or near the structural boundary. The method puts the maximum weights on the selected boundary nodes, referred to as the design points, so that the time-consuming sensitivity analysis is related to the perturbation of only these nodes. The method also allows large shape changes to achieve the optimal shape. The design variables are specified as the moving magnitudes for the prescribed design points that are always located at the structural boundary. Theories, implementations and applications are presented for various modules by which the package is constructed. Especially, techniques involving finite element error estimation, adaptive mesh generation, design sensitivity analysis, and data smoothing are emphasized.

  7. Target parameter and error estimation using magnetometry

    NASA Astrophysics Data System (ADS)

    Norton, S. J.; Witten, A. J.; Won, I. J.; Taylor, D.

    The problem of locating and identifying buried unexploded ordnance from magnetometry measurements is addressed within the context of maximum likelihood estimation. In this approach, the magnetostatic theory is used to develop data templates, which represent the modeled magnetic response of a buried ferrous object of arbitrary location, iron content, size, shape, and orientation. It is assumed that these objects are characterized both by a magnetic susceptibility representing their passive response to the earth's magnetic field and by a three-dimensional magnetization vector representing a permanent dipole magnetization. Analytical models were derived for four types of targets: spheres, spherical shells, ellipsoids, and ellipsoidal shells. The models can be used to quantify the Cramer-Rao (error) bounds on the parameter estimates. These bounds give the minimum variance in the estimated parameters as a function of measurement signal-to-noise ratio, spatial sampling, and target characteristics. For cases where analytic expressions for the Cramer-Rao bounds can be derived, these expressions prove quite useful in establishing optimal sampling strategies. Analytic expressions for various Cramer-Rao bounds have been developed for spherical- and spherical shell-type objects. A maximum likelihood estimation algorithm has been developed and tested on data acquired at the Magnetic Test Range at the Naval Explosive Ordnance Disposal Tech Center in Indian Head, Maryland. This algorithm estimates seven target parameters. These parameters are the three Cartesian coordinates (x, y, z) identifying the buried ordnance's location, the three Cartesian components of the permanent dipole magnetization vector, and the equivalent radius of the ordnance assuming it is a passive solid iron sphere.

  8. Rigorous Error Estimates for Reynolds' Lubrication Approximation

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon

    2006-11-01

    Reynolds' lubrication equation is used extensively in engineering calculations to study flows between moving machine parts, e.g. in journal bearings or computer disk drives. It is also used extensively in micro- and bio-fluid mechanics to model creeping flows through narrow channels and in thin films. To date, the only rigorous justification of this equation (due to Bayada and Chambat in 1986 and to Nazarov in 1987) states that the solution of the Navier-Stokes equations converges to the solution of Reynolds' equation in the limit as the aspect ratio ɛ approaches zero. In this talk, I will show how the constants in these error bounds depend on the geometry. More specifically, I will show how to compute expansion solutions of the Stokes equations in a 2-d periodic geometry to arbitrary order and exhibit error estimates with constants which are either (1) given in the problem statement or easily computable from h(x), or (2) difficult to compute but universal (independent of h(x)). Studying the constants in the latter category, we find that the effective radius of convergence actually increases through 10th order, but then begins to decrease as the inverse of the order, indicating that the expansion solution is probably an asymptotic series rather than a convergent series.

  9. Nonmarket valuation of water quality in a rural transition economy in Turkey applying an a posteriori bid design

    NASA Astrophysics Data System (ADS)

    Bederli Tümay, Aylin; Brouwer, Roy

    2007-05-01

    In this paper, we investigate the economic benefits associated with public investments in wastewater treatment in one of the special protected areas along Turkey's touristic Mediterranean coast, the Köyceǧiz-Dalyan watershed. The benefits, measured in terms of boatable, fishable, swimmable and drinkable water quality, are estimated using a public survey format following the contingent valuation (CV) method. The study presented here is the first of its kind in Turkey. The study's main objective is to assess public perception, understanding, and valuation of improved wastewater treatment facilities in the two largest population centers in the watershed, facing the same water pollution problems as a result of lack of appropriate wastewater treatment. We test the validity and reliability of the application of the CV methodology to this specific environmental problem in a rural transition economy and evaluate the transferability of the results within the watershed. In order to facilitate willingness to pay (WTP) value elicitation we apply a novel dichotomous choice procedure where bid design takes place a posteriori instead of a priori. The statistical efficiency of different bid vectors is evaluated in terms of the estimated welfare measures' mean square errors using Monte Carlo simulation. The robustness of bid function specification is analyzed through average WTP and standard deviation estimated using parametric and nonparametric methods.

  10. Error estimates of numerical solutions for a cyclic plasticity problem

    NASA Astrophysics Data System (ADS)

    Han, W.

    A cyclic plasticity problem is numerically analyzed in [13], where a sub-optimal order error estimate is shown for a spatially discrete scheme. In this note, we prove an optimal order error estimate for the spatially discrete scheme under the same solution regularity condition. We also derive an error estimate for a fully discrete scheme for solving the plasticity problem.

  11. Maximum a Posteriori (MAP) Estimates for Hyperspectral Image Enhancement

    DTIC Science & Technology

    2004-09-01

    … and s N high-resolution multispectral pixels. Figure 2.3: Multispectral Data Cube. … point processor with an attached vector processor unit (VPU). Each … response matrix s (if provided at all) should be provided in the spectral domain. If PCA analysis is to be performed (i.e. pca_mode > 0), s will be … subject to ∑_{j=1}^{P} s_j = 1 and s_j ≥ 0 for 1 ≤ j ≤ P, in order to determine a normalized spectral response vector s. The normalized spectral response …

  12. Adjusting for radiotelemetry error to improve estimates of habitat use.

    Treesearch

    Scott L. Findholt; Bruce K. Johnson; Lyman L. McDonald; John W. Kern; Alan Ager; Rosemary J. Stussy; Larry D. Bryant

    2002-01-01

    Animal locations estimated from radiotelemetry have traditionally been treated as error-free when analyzed in relation to habitat variables. Location error lowers the power of statistical tests of habitat selection. We describe a method that incorporates the error surrounding point estimates into measures of environmental variables determined from a geographic...

  13. A hardware error estimate for floating-point computations

    NASA Astrophysics Data System (ADS)

    Lang, Tomás; Bruguera, Javier D.

    2008-08-01

    We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot be achieved always with an inexact estimate, we aim at assuring the first property always, and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However
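
    A software analogue of the idea, in which each value carries an error estimate made of a propagated part (the sum of the operands' estimates) and a generated part (the roundoff of the operation itself), is sketched below. Measuring the generated roundoff against a float64 reference is a simplification for illustration, not the authors' hardware mechanism.

    ```python
    import numpy as np

    class Tracked:
        """A float32 value paired with a signed estimate of its accumulated roundoff error."""

        def __init__(self, value, err=0.0):
            self.value = np.float32(value)
            # Include the representation error of converting the input to float32
            self.err = float(err) + (float(value) - float(self.value))

        def _combine(self, other, op):
            exact = op(float(self.value), float(other.value))   # float64 reference result
            rounded = np.float32(op(self.value, other.value))   # what float32 hardware returns
            generated = exact - float(rounded)                  # roundoff of this operation
            propagated = self.err + other.err                   # first-order error propagation
            return Tracked(rounded, propagated + generated)

        def __add__(self, other): return self._combine(other, lambda a, b: a + b)
        def __sub__(self, other): return self._combine(other, lambda a, b: a - b)

    # Summing many small terms in float32: the tracked estimate follows the true error
    acc = Tracked(0.0)
    for _ in range(10_000):
        acc = acc + Tracked(0.1)
    true_error = 10_000 * 0.1 - float(acc.value)
    print(f"estimated error {acc.err:.4e}, true error {true_error:.4e}")
    ```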

  14. Estimating IMU heading error from SAR images.

    SciTech Connect

    Doerry, Armin Walter

    2009-03-01

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  15. Prediction and simulation errors in parameter estimation for nonlinear systems

    NASA Astrophysics Data System (ADS)

    Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.

    2010-11-01

    This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results—obtained using different model classes—show that, in general, the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of errors-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
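
    The distinction between the two cost functions can be made concrete for a first-order autoregressive model: the one-step prediction error restarts every prediction from measured data, while the simulation error iterates the model on its own output. The sketch below contrasts the two on synthetic AR(1) data using a simple grid search; the evolutionary optimisation used in the article is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Data from a true AR(1) process y_k = 0.8 y_{k-1} + noise
    N, a_true = 400, 0.8
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = a_true * y[k - 1] + 0.1 * rng.standard_normal()

    def prediction_error(a):
        """One-step-ahead prediction error: each prediction starts from the measured value."""
        return np.mean((y[1:] - a * y[:-1]) ** 2)

    def simulation_error(a):
        """Free-run simulation error: the model is iterated on its own previous output."""
        y_sim = np.zeros(N)
        y_sim[0] = y[0]
        for k in range(1, N):
            y_sim[k] = a * y_sim[k - 1]
        return np.mean((y - y_sim) ** 2)

    a_grid = np.linspace(0.5, 0.99, 50)
    a_pe = a_grid[np.argmin([prediction_error(a) for a in a_grid])]
    a_se = a_grid[np.argmin([simulation_error(a) for a in a_grid])]
    print(f"parameter minimising prediction error: {a_pe:.3f}, simulation error: {a_se:.3f}")
    ```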

  16. Standard Error of Empirical Bayes Estimate in NONMEM® VI

    PubMed Central

    Kang, Dongwoo; Houk, Brett E.; Savic, Radojka M.; Karlsson, Mats O.

    2012-01-01

    The pharmacokinetics/pharmacodynamics analysis software NONMEM® output provides model parameter estimates and associated standard errors. However, the standard error of empirical Bayes estimates of inter-subject variability is not available. A simple and direct method for estimating standard error of the empirical Bayes estimates of inter-subject variability using the NONMEM® VI internal matrix POSTV is developed and applied to several pharmacokinetic models using intensively or sparsely sampled data for demonstration and to evaluate performance. The computed standard error is in general similar to the results from other post-processing methods and the degree of difference, if any, depends on the employed estimation options. PMID:22563254

  17. A Note on Confidence Interval Estimation and Margin of Error

    ERIC Educational Resources Information Center

    Gilliland, Dennis; Melfi, Vince

    2010-01-01

    Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…
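
    As a minimal worked example of the quantities discussed above, the snippet below computes the usual normal-approximation margin of error for a sample proportion at 95% confidence; the sample values are made up for illustration.

    ```python
    import math

    # 95% confidence interval for a sample proportion
    n, successes = 1000, 530
    p_hat = successes / n
    z = 1.96   # approximately the 97.5th percentile of the standard normal
    margin_of_error = z * math.sqrt(p_hat * (1 - p_hat) / n)
    print(f"estimate {p_hat:.3f} +/- {margin_of_error:.3f} "
          f"(95% CI: {p_hat - margin_of_error:.3f} to {p_hat + margin_of_error:.3f})")
    ```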

  18. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    NASA Astrophysics Data System (ADS)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  19. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data

  1. A posteriori operation detection in evolving software models

    PubMed Central

    Langer, Philip; Wimmer, Manuel; Brosch, Petra; Herrmannsdörfer, Markus; Seidl, Martina; Wieland, Konrad; Kappel, Gerti

    2013-01-01

    As every software artifact, also software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations, but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches for detecting also composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is the reuse of specifications available for executing composite operations also for detecting applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment. PMID:23471366

  2. Anatomical labeling of the circle of willis using maximum a posteriori graph matching.

    PubMed

    Robben, David; Sunaert, Stefan; Thijs, Vincent; Wilms, Guy; Maes, Frederik; Suetens, Paul

    2013-01-01

    A new method for anatomically labeling the vasculature is presented and applied to the Circle of Willis. Our method converts the segmented vasculature into a graph that is matched with an annotated graph atlas in a maximum a posteriori (MAP) way. The MAP matching is formulated as a quadratic binary programming problem which can be solved efficiently. Unlike previous methods, our approach can handle non-tree-like vasculature and large topological differences. The method is evaluated in a leave-one-out test on MRA of 30 subjects, where it achieves a sensitivity of 93% and a specificity of 85% with an average error of 1.5 mm on matching bifurcations in the vascular graph.

  3. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and |t| ≤ T, these solutions agree with exact solutions up to errors whose norms are bounded by C exp(−γ/ħ) for some C and γ > 0. Under more restrictive hypotheses, we prove that for sufficiently small T′, the condition |t| ≤ T′ |log(ħ)| implies that the norms of the errors are bounded by C′ exp(−γ′/ħ^σ) for some C′, γ′ > 0, and σ > 0.

  4. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
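
    For the linear case, the standard propagation of random data errors into the fitted parameters and into the fitted function can be sketched in a few lines: the parameter covariance is sigma^2 (A^T A)^{-1} and the standard error of the fit at x is sqrt(a(x)^T C a(x)). The quadratic model and noise level below are arbitrary, and the formulas are the textbook ones rather than necessarily the exact expressions derived in the report.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Noisy data generated from a quadratic model
    x = np.linspace(0.0, 10.0, 40)
    sigma = 0.5
    y = 1.0 + 0.5 * x - 0.05 * x**2 + sigma * rng.standard_normal(x.size)

    # Design matrix for an ordinary degree-2 polynomial fit (columns 1, x, x^2)
    A = np.vander(x, 3, increasing=True)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Parameter covariance and standard errors for known, constant noise sigma
    cov = sigma**2 * np.linalg.inv(A.T @ A)
    param_se = np.sqrt(np.diag(cov))

    # Standard error of the fitted function at each x: sqrt(a(x)^T C a(x))
    fit_se = np.sqrt(np.einsum('ij,jk,ik->i', A, cov, A))
    print("parameter estimates:", np.round(coef, 3))
    print("parameter standard errors:", np.round(param_se, 3))
    print("standard error of the fit at x=0 and x=10:",
          round(float(fit_se[0]), 3), round(float(fit_se[-1]), 3))
    ```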

  5. Deconvolution Estimation in Measurement Error Models: The R Package decon

    PubMed Central

    Wang, Xiao-Feng; Wang, Bin

    2011-01-01

    Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139

  6. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
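
    A brute-force version of the two ingredients named in the abstract, Fisher's linear discriminant and a leave-one-out estimate of its probability of error, is sketched below. It refits the classifier for every left-out sample rather than using the paper's computationally efficient expressions, and the two-class Gaussian data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    def fisher_direction_and_threshold(X, labels):
        """Fisher's direction w = Sw^{-1}(m1 - m0) and a midpoint threshold on the projection."""
        X0, X1 = X[labels == 0], X[labels == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
        w = np.linalg.solve(Sw, m1 - m0)
        threshold = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
        return w, threshold

    # Two Gaussian classes in two dimensions
    X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(100, 2)),
                   rng.normal([2.0, 1.0], 1.0, size=(100, 2))])
    labels = np.repeat([0, 1], 100)

    # Leave-one-out estimate of the probability of error (refit for each left-out sample)
    errors = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        w, t = fisher_direction_and_threshold(X[mask], labels[mask])
        predicted = int(X[i] @ w > t)
        errors += int(predicted != labels[i])
    print(f"leave-one-out error estimate: {errors / len(X):.3f}")
    ```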

  7. Effects of using a posteriori methods for the conservation of integral invariants. [for weather forecasting

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.

    1988-01-01

    The nature and effect of using a posteriori adjustments to nonconservative finite-difference schemes to enforce integral invariants of the corresponding analytic system are examined. The method of a posteriori integral constraint restoration is analyzed for the case of linear advection, and the harmonic response associated with the a posteriori adjustments is examined in detail. The conservative properties of the shallow water system are reviewed, and the constraint restoration algorithm applied to the shallow water equations are described. A comparison is made between forecasts obtained using implicit and a posteriori methods for the conservation of mass, energy, and potential enstrophy in the complete nonlinear shallow-water system.
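
    The flavour of an a posteriori constraint restoration can be conveyed with a toy one-dimensional advection example: after a finite-difference step written in a non-conservative (advective) form, a uniform additive correction restores the tracer integral exactly. This is only a schematic of the general idea; the paper's restoration algorithm and its shallow-water setting are not reproduced.

    ```python
    import numpy as np

    nx, dx, dt = 200, 1.0, 0.4
    x = np.arange(nx) * dx
    c = 1.0 + 0.5 * np.sin(2.0 * np.pi * x / (nx * dx))   # spatially varying advection speed
    q = np.exp(-0.01 * (x - 50.0) ** 2)                    # initial tracer field
    mass0 = q.sum() * dx                                   # integral invariant to preserve

    for _ in range(200):
        # Upwind step in advective (non-flux) form: does not conserve the integral of q
        q_new = q - c * dt / dx * (q - np.roll(q, 1))
        drift = abs(q_new.sum() * dx - mass0) / mass0
        # A posteriori restoration: spread a uniform correction so the total matches mass0
        q = q_new + (mass0 - q_new.sum() * dx) / (nx * dx)

    print(f"relative mass drift in the last unconstrained step: {drift:.2e}")
    print(f"relative mass error after restoration: {abs(q.sum() * dx - mass0) / mass0:.2e}")
    ```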

  9. An Ensemble-type Approach to Numerical Error Estimation

    NASA Astrophysics Data System (ADS)

    Ackmann, J.; Marotzke, J.; Korn, P.

    2015-12-01

    The estimation of the numerical error in a specific physical quantity of interest (goal) is of key importance in geophysical modelling. Towards this aim, we have formulated an algorithm that combines elements of the classical dual-weighted error estimation with stochastic methods. Our algorithm is based on the Dual-weighted Residual method, in which the residual of the model solution is weighted by the adjoint solution, i.e. by the sensitivities of the goal towards the residual. We extend this method by modelling the residual as a stochastic process. Parameterizing the residual by a stochastic process was motivated by the Mori-Zwanzig formalism from statistical mechanics. Here, we apply our approach to two-dimensional shallow-water flows with lateral boundaries and an eddy viscosity parameterization. We employ different parameters of the stochastic process for different dynamical regimes in different regions. We find that for each region the temporal fluctuations of local truncation errors (discrete residuals) can be interpreted stochastically by a Laplace-distributed random variable. Assuming that these random variables are fully correlated in time leads to a stochastic process that parameterizes a problem-dependent temporal evolution of local truncation errors. The parameters of this stochastic process are estimated from short, near-initial, high-resolution simulations. Under the assumption that the estimated parameters can be extrapolated to the full time window of the error estimation, the estimated stochastic process is proven to be a valid surrogate for the local truncation errors. Replacing the local truncation errors by a stochastic process puts our method within the class of ensemble methods and makes the resulting error estimator a random variable. The result of our error estimator is thus a confidence interval on the error in the respective goal. We will show error estimates for two 2D ocean-type experiments and provide an outlook for the 3D case.

  10. Error Estimation for Reduced Order Models of Dynamical Systems

    SciTech Connect

    Homescu, C; Petzold, L; Serban, R

    2004-01-22

    The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of small sample statistical condition estimation and error estimation using the adjoint method. Most importantly, the proposed approach allows the assessment of regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.
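
    The paper combines adjoint-based estimates with small-sample statistical condition estimation; as a much simpler, related illustration, the sketch below uses the standard identity that the squared projection error of a POD basis onto the snapshot data equals the sum of the neglected squared singular values, and checks it numerically. The synthetic snapshot matrix is an assumption made purely for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Snapshot matrix of a synthetic system: 50 states, 200 snapshots built from 3 modes
    t = np.linspace(0.0, 10.0, 200)
    temporal = np.array([np.sin(0.5 * t), np.cos(1.3 * t), 0.1 * np.sin(3.1 * t)])
    snapshots = rng.standard_normal((50, 3)) @ temporal

    # POD basis from the SVD of the snapshot matrix
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    r = 2                                   # reduced dimension
    Ur = U[:, :r]

    # Estimate: total squared projection error equals the sum of neglected squared singular values
    estimated_sq_error = np.sum(s[r:] ** 2)
    actual_sq_error = np.linalg.norm(snapshots - Ur @ (Ur.T @ snapshots), 'fro') ** 2
    print(f"estimated squared error {estimated_sq_error:.4e}, actual {actual_sq_error:.4e}")
    ```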

  11. Error latency estimation using functional fault modeling

    NASA Technical Reports Server (NTRS)

    Manthani, S. R.; Saxena, N. R.; Robinson, J. P.

    1983-01-01

    A complete modeling of faults at gate level for a fault tolerant computer is both infeasible and uneconomical. Functional fault modeling is an approach where units are characterized at an intermediate level and then combined to determine fault behavior. The applicability of functional fault modeling to the FTMP is studied. Using this model a forecast of error latency is made for some functional blocks. This approach is useful in representing larger sections of the hardware and aids in uncovering system level deficiencies.

  12. Bias in parameter estimation of form errors

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min

    2014-09-01

    The surface form qualities of precision components are critical to their functionalities. In precision instruments algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces will be over-weighted, making the fitted results biased and unstable. In this paper the orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and the orthogonal assessment is analytically calculated and represented as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e. each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally, numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.

  13. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
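
    For context, the sketch below sets up a conventional weighted batch least-squares solution with its formal covariance (H^T W H)^{-1}, then applies a common residual-based rescaling when the assumed measurement weights are wrong. This residual-scaling device is only a generic illustration of why an empirical covariance is useful; it is not the paper's specific reinterpretation.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Linear batch problem z = H x + noise, with weights based on an assumed noise level
    x_true = np.array([1.0, -2.0])
    H = rng.standard_normal((60, 2))
    sigma_assumed = 0.1
    sigma_actual = 0.3                         # extra, unmodelled error
    z = H @ x_true + sigma_actual * rng.standard_normal(60)

    W = np.eye(60) / sigma_assumed**2
    P_formal = np.linalg.inv(H.T @ W @ H)      # formal (theoretical) covariance
    x_hat = P_formal @ H.T @ W @ z             # weighted batch least-squares estimate

    # Simple empirical check: scale the formal covariance by the normalised residuals
    residuals = z - H @ x_hat
    scale = (residuals @ W @ residuals) / (len(z) - len(x_true))
    P_empirical = scale * P_formal
    print("formal 1-sigma uncertainties:   ", np.sqrt(np.diag(P_formal)))
    print("empirical 1-sigma uncertainties:", np.sqrt(np.diag(P_empirical)))
    ```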

  14. Preliminary estimates of radiosonde thermistor errors

    NASA Technical Reports Server (NTRS)

    Schmidlin, F. J.; Luers, J. K.; Huffman, P. D.

    1986-01-01

    Radiosonde temperature measurements are subject to errors, not the least of which is the effect of long- and short-wave radiation. Methods of adjusting the daytime temperatures to a nighttime equivalent are used by some analysis centers. Other than providing consistent observations for analysis, this procedure does not provide a true correction. The literature discusses the problem of radiosonde temperature errors, but it is not apparent what effort, if any, has been taken to quantify these errors. To accomplish the latter, radiosondes containing multiple thermistors with different coatings were flown at Goddard Space Flight Center/Wallops Flight Facility. The coatings employed had different spectral characteristics and, therefore, different absorption and emissivity properties. Discrimination of the recorded temperatures enabled day and night correction values to be determined for the US standard white-coated rod thermistor. The correction magnitudes are given, and US measured temperatures before and after correction are compared with temperatures measured with the Vaisala radiosonde. The corrections are in the proper direction, day and night, and reduce day-night temperature differences to less than 0.5 C between the surface and 30 hPa. The present uncorrected temperatures used with the Viz radiosonde have day-night differences that exceed 1 C at levels below 90 hPa. Additional measurements are planned to confirm these preliminary results and to determine the solar elevation angle effect on the corrections. The technique used to obtain the corrections may also be used to recover a true absolute value and might be considered a valuable contribution to the meteorological community for use as a reference instrument.

  15. MPDATA error estimator for mesh adaptivity

    NASA Astrophysics Data System (ADS)

    Szmelter, Joanna; Smolarkiewicz, Piotr K.

    2006-04-01

    In the multidimensional positive definite advection transport algorithm (MPDATA), the leading error as well as the first- and second-order solutions are known explicitly by design. This property is employed to construct refinement indicators for mesh adaptivity. Recent progress with the edge-based formulation of MPDATA facilitates the use of the method in an unstructured-mesh environment. In particular, the edge-based data structure allows for flow solvers to operate on arbitrary hybrid meshes, thereby lending itself to implementations of various mesh adaptivity techniques. A novel unstructured-mesh nonoscillatory forward-in-time (NFT) solver for compressible Euler equations is used to illustrate the benefits of adaptive remeshing as well as mesh movement and enrichment for the efficacy of MPDATA-based flow solvers. Validation against benchmark test cases demonstrates robustness and accuracy of the approach.

  16. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  18. Maximum a posteriori video super-resolution using a new multichannel image prior.

    PubMed

    Belekos, Stefanos P; Galatsanos, Nikolaos P; Katsaggelos, Aggelos K

    2010-06-01

    Super-resolution (SR) is the term used to define the process of estimating a high-resolution (HR) image or a set of HR images from a set of low-resolution (LR) observations. In this paper we propose a class of SR algorithms based on the maximum a posteriori (MAP) framework. These algorithms utilize a new multichannel image prior model, along with the state-of-the-art single channel image prior and observation models. A hierarchical (two-level) Gaussian nonstationary version of the multichannel prior is also defined and utilized within the same framework. Numerical experiments comparing the proposed algorithms among themselves and with other algorithms in the literature, demonstrate the advantages of the adopted multichannel approach.

  19. Unbiased bootstrap error estimation for linear discriminant analysis.

    PubMed

    Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

    2014-12-01

    Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
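
    For orientation, the sketch below assembles the classical convex bootstrap estimator with the fixed 0.632 weight for an LDA classifier on synthetic Gaussian data (scikit-learn's LinearDiscriminantAnalysis). The sample size, class separation and number of bootstrap replicates are arbitrary choices, and the exact finite-sample weight derived in the paper is not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Synthetic two-class Gaussian data (illustrative sample size and separation).
n, p = 40, 2
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, p)),
               rng.normal(1.0, 1.0, (n // 2, p))])
y = np.repeat([0, 1], n // 2)

clf = LinearDiscriminantAnalysis().fit(X, y)
err_resub = np.mean(clf.predict(X) != y)          # resubstitution (optimistic)

# Zero bootstrap: average error on out-of-bag points over B resamples.
B, oob_errs = 200, []
for _ in range(B):
    idx = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), idx)
    if oob.size == 0:
        continue
    m = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
    oob_errs.append(np.mean(m.predict(X[oob]) != y[oob]))
err_boot0 = np.mean(oob_errs)                     # pessimistic

w = 0.632                                         # classical fixed weight
err_632 = (1.0 - w) * err_resub + w * err_boot0
print(err_resub, err_boot0, err_632)
```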

  20. Electron transport in magnetrons by a posteriori Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Costin, C.; Minea, T. M.; Popa, G.

    2014-02-01

    Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization operated in argon at 0.67 Pa and for very short pulses (few µs) using the a posteriori Monte Carlo simulation technique. For this type of discharge electron transport is characterized by strong temporal and spatial dependence. Both drift velocity and diffusion coefficient depend on the releasing position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases from 2 to 4 times when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.

  1. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.

  2. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

  4. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  5. Accurate absolute GPS positioning through satellite clock error estimation

    NASA Astrophysics Data System (ADS)

    Han, S.-C.; Kwon, J. H.; Jekeli, C.

    2001-05-01

    An algorithm for very accurate absolute positioning through Global Positioning System (GPS) satellite clock estimation has been developed. Using International GPS Service (IGS) precise orbits and measurements, GPS clock errors were estimated at 30-s intervals. Compared to values determined by the Jet Propulsion Laboratory, the agreement was at the level of about 0.1 ns (3 cm). The clock error estimates were then applied to an absolute positioning algorithm in both static and kinematic modes. For the static case, an IGS station was selected and the coordinates were estimated every 30 s. The estimated absolute position coordinates and the known values had a mean difference of up to 18 cm with standard deviation less than 2 cm. For the kinematic case, data obtained every second from a GPS buoy were tested and the result from the absolute positioning was compared to a differential GPS (DGPS) solution. The mean differences between the coordinates estimated by the two methods are less than 40 cm and the standard deviations are less than 25 cm. It was verified that this poorer standard deviation on 1-s position results is due to the clock error interpolation from 30-s estimates with Selective Availability (SA). After SA was turned off, higher-rate clock error estimates (such as 1 s) could be obtained by a simple interpolation with negligible corruption. Therefore, the proposed absolute positioning technique can be used to within a few centimeters' precision at any rate by estimating 30-s satellite clock errors and interpolating them.

  6. Estimating the sources of motor errors for adaptation and generalization

    PubMed Central

    Berniker, Max; Kording, Konrad

    2009-01-01

    Motor adaptation is usually defined as the process by which our nervous system produces accurate movements, while the properties of our bodies and our environment continuously change. Numerous experimental and theoretical studies have characterized this process by assuming that the nervous system uses internal models to compensate for motor errors. Here we extend these approaches and construct a probabilistic model that not only compensates for motor errors but estimates the sources of these errors. These estimates dictate how the nervous system should generalize. For example, estimated changes of limb properties will affect movements across the workspace but not movements with the other limb. We provide evidence that many movement generalization phenomena emerge from a strategy by which the nervous system estimates the sources of our motor errors. PMID:19011624

  7. Using doppler radar images to estimate aircraft navigational heading error

    DOEpatents

    Doerry, Armin W [Albuquerque, NM; Jordan, Jay D [Albuquerque, NM; Kim, Theodore J [Albuquerque, NM

    2012-07-03

    A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

  8. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The problems of monitoring, prediction and prevention of extraordinary natural and technogenic events are among the priority problems of today. Such events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of stockpiled ammunition, and the numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage the inverse problems associated with determining the geographic location and time of the registered event are solved. Therefore, improving the accuracy of parameter estimation from the original records under high noise is an important problem. As is known, the main measurement errors arise from the influence of external noise, the difference between the real and model structures of the medium, imprecision in defining the time at the event epicenter, and instrumental errors. Therefore a posteriori algorithms that are more accurate than known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for joint detection and estimation of arrival times in quasi-periodic waveform sequences in problems of geophysical monitoring with improved accuracy. Existing alternative approaches to solving these problems do not provide the required accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth during lunar and solar tides, and for the problem of monitoring the location of a borehole seismic source in commercial drilling.

  9. Laser Doppler anemometer measurements using nonorthogonal velocity components: error estimates.

    PubMed

    Orloff, K L; Snyder, P K

    1982-01-15

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.
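
    The error amplification caused by coupled (nonorthogonal) channels can be seen from a small propagation exercise: map the measured components to orthogonal ones through the inverse geometry matrix T and propagate the measurement covariance as T C Tᵀ. The 15° half-angle and noise level below are invented numbers for illustration, not those of the paper.

```python
import numpy as np

# Two LDA channels measure velocity components along unit vectors e1 and e2
# that are separated by only 2*phi (a nonorthogonal geometry).
phi = np.deg2rad(15.0)                       # illustrative half-angle
e1 = np.array([np.cos(phi),  np.sin(phi)])
e2 = np.array([np.cos(phi), -np.sin(phi)])
A = np.vstack([e1, e2])                      # measured = A @ (u, v)
T = np.linalg.inv(A)                         # (u, v) = T @ measured

# Independent statistical errors on the two measured (coupled) components.
sigma_meas = 0.05
C_meas = sigma_meas**2 * np.eye(2)

# First-order propagation to the orthogonal components u and v.
C_uv = T @ C_meas @ T.T
print("std(u), std(v):", np.sqrt(np.diag(C_uv)))
# The v component, resolved across the narrow angle, is strongly amplified.
```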

  10. Laser Doppler anemometer measurements using nonorthogonal velocity components - Error estimates

    NASA Technical Reports Server (NTRS)

    Orloff, K. L.; Snyder, P. K.

    1982-01-01

    Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.

  11. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  12. PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG

    SciTech Connect

    Mighell, Kenneth J.; Plavchan, Peter

    2013-06-15

    The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ −5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
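
    The quoted period-error model is straightforward to evaluate; the small function below implements it, assuming base-10 logarithms and applying the ~0.0144 day value quoted for periods of 62.5 days or longer.

```python
import numpy as np

def kebc_period_error(P_days):
    """Period-error model quoted above (P in days, base-10 logarithms assumed)."""
    P = np.asarray(P_days, dtype=float)
    sigma = 10.0 ** (-5.8908 + 1.4425 * (1.0 + np.log10(P)))
    return np.where(P >= 62.5, 0.0144, sigma)

print(kebc_period_error([1.0, 10.0, 62.5, 100.0]))   # period errors in days
```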

  13. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate.

  14. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835

  15. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  17. Visual field test simulation and error in threshold estimation.

    PubMed Central

    Spenceley, S E; Henson, D B

    1996-01-01

    AIM: To establish, via computer simulation, the effects of patient response variability and staircase starting level upon the accuracy and repeatability of static full threshold visual field tests. METHOD: Patient response variability, defined by the standard deviation of the frequency of seeing versus stimulus intensity curve, is varied from 0.5 to 20 dB (in steps of 0.5 dB) with staircase starting levels ranging from 30 dB below to 30 dB above the patient's threshold (in steps of 10 dB). Fifty two threshold estimates are derived for each condition and the error of each estimate calculated (difference between the true threshold and the threshold estimate derived from the staircase procedure). The mean and standard deviation of the errors are then determined for each condition. The results from a simulated quadrantic defect (response variability set to typical values for a patient with glaucoma) are presented using two different algorithms. The first corresponds with that normally used when performing a full threshold examination while the second uses results from an earlier simulated full threshold examination for the staircase starting values. RESULTS: The mean error in threshold estimates was found to be biased towards the staircase starting level. The extent of the bias was dependent upon patient response variability. The standard deviation of the error increased both with response variability and staircase starting level. With the routinely used full threshold strategy the quadrantic defect was found to have a large mean error in estimated threshold values and an increase in the standard deviation of the error along the edge of the defect. When results from an earlier full threshold test are used as staircase starting values this error and increased standard deviation largely disappeared. CONCLUSION: The staircase procedure widely used in threshold perimetry increased the error and the variability of threshold estimates along the edges of defects. Using results from an earlier full threshold test as staircase starting values largely removed this error and variability.
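
    A toy version of such a simulation is sketched below: a cumulative-Gaussian frequency-of-seeing curve stands in for the patient, and a generic 4-2 dB staircase with two reversals stands in for the full threshold algorithm. The staircase rules, step sizes, stopping criterion and parameter values are assumptions for illustration, not necessarily those simulated in the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def staircase_estimate(true_thr, start, response_sd, rng):
    """One 4-2 dB staircase run; stimulus levels in dB (higher = dimmer)."""
    level, step, reversals, last_seen = start, 4.0, 0, None
    prev_seen = None
    while reversals < 2:
        p_seen = norm.cdf((true_thr - level) / response_sd)  # frequency-of-seeing curve
        seen = rng.random() < p_seen
        if seen:
            last_seen = level
        if prev_seen is not None and seen != prev_seen:
            reversals += 1
            step = 2.0                                        # halve step after a reversal
        prev_seen = seen
        level += step if seen else -step                      # dimmer if seen, brighter if not
    return last_seen if last_seen is not None else level

true_thr, response_sd = 30.0, 3.0
for start in (true_thr - 10, true_thr, true_thr + 10):
    est = np.array([staircase_estimate(true_thr, start, response_sd, rng)
                    for _ in range(52)])
    err = est - true_thr
    print(f"start {start:5.1f} dB: mean error {err.mean():+5.2f} dB, sd {err.std():4.2f} dB")
```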

  18. Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Sass, Daniel A.

    2010-01-01

    Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

  20. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  2. Sensitivity analysis of DOA estimation algorithms to sensor errors

    NASA Astrophysics Data System (ADS)

    Li, Fu; Vaccaro, Richard J.

    1992-07-01

    A unified statistical performance analysis using subspace perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in the presence of sensor errors. In particular, the multiple signal classification (MUSIC), min-norm, state-space realization (TAM and DDA) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms are analyzed. This analysis assumes that only a finite amount of data is available. An analytical expression for the mean-squared error of the DOA estimates is developed for theoretical comparison in a simple and self-contained fashion. The tractable formulas provide insight into the algorithms. Simulation results verify the analysis.

  3. 4D maximum a posteriori reconstruction in dynamic SPECT using a compartmental model-based prior.

    PubMed

    Kadrmas, D J; Gullberg, G T

    2001-05-01

    A 4D ordered-subsets maximum a posteriori (OSMAP) algorithm for dynamic SPECT is described which uses a temporal prior that constrains each voxel's behaviour in time to conform to a compartmental model. No a priori limitations on kinetic parameters are applied; rather, the parameter estimates evolve as the algorithm iterates to a solution. The estimated parameters and time-activity curves are used within the reconstruction algorithm to model changes in the activity distribution as the camera rotates, avoiding artefacts due to inconsistencies of data between projection views. This potentially allows for fewer, longer-duration scans to be used and may have implications for noise reduction. The algorithm was evaluated qualitatively using dynamic 99mTc-teboroxime SPECT scans in two patients, and quantitatively using a series of simulated phantom experiments. The OSMAP algorithm resulted in images with better myocardial uniformity and definition, gave time-activity curves with reduced noise variations, and provided wash-in parameter estimates with better accuracy and lower statistical uncertainty than those obtained from conventional ordered-subsets expectation-maximization (OSEM) processing followed by compartmental modelling. The new algorithm effectively removed the bias in k21 estimates due to inconsistent projections for sampling schedules as slow as 60 s per timeframe, but no improvement in wash-out parameter estimates was observed in this work. The proposed dynamic OSMAP algorithm provides a flexible framework which may benefit a variety of dynamic tomographic imaging applications.

  4. Errors in NAPL volume estimates due to systematic measurement errors during partitioning tracer tests.

    PubMed

    Brooks, Michael C; Wise, William R

    2005-09-15

    During moment-based analyses of partitioning tracer tests, systematic errors in volume and concentration measurements propagate to yield errors in the saturation and volume estimates for nonaqueous phase liquid (NAPL). Derived expressions could be applied to help practitioners bracket their estimates of NAPL saturation and volume obtained from such tests. In practice, many of these effects may be overshadowed by other complications experienced in the field. Errors are propagated for systematic constant (offset) volume, proportional volume, and constant (offset) concentration errors. Previous efforts to quantify the impact of these errors were predicated upon the specific assumption that nonpartitioning and partitioning masses were equal. The current work relaxes that assumption and is therefore more general in scope. Through the use of nondimensional concentration, systematic proportional concentration errors do not affect the accuracy of the method. Specific consideration needs to be given to accurate flow measurements and minimizing baseline concentration errors when performing partitioning tracer tests in order to prevent the propagation of systematic errors.

  5. MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

    SciTech Connect

    R. ESTEP; ET AL

    2000-06-01

    Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
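
    The replicate-randomization idea can be sketched in a few lines: draw N Poisson-randomized copies of the measured counts, push each copy through the analysis, and take the spread of the results as the propagated error. The toy "analysis" below (a weighted sum) merely stands in for a complex algorithm such as a tomographic reconstruction; the counts and calibration factors are made-up numbers.

```python
import numpy as np

rng = np.random.default_rng(3)

def analysis(counts, weights):
    """Stand-in for a complex assay algorithm (e.g. a reconstruction)."""
    return float(np.dot(weights, counts))

measured = np.array([120.0, 85.0, 240.0, 60.0])   # hypothetical detector counts
weights = np.array([0.8, 1.1, 0.5, 1.4])          # hypothetical calibration factors

nominal = analysis(measured, weights)

# Monte Carlo randomization: N replicate data sets drawn from Poisson statistics.
N = 1000
replicates = rng.poisson(measured, size=(N, measured.size)).astype(float)
results = np.array([analysis(r, weights) for r in replicates])

print(f"assay = {nominal:.1f} +/- {results.std(ddof=1):.1f} (Monte Carlo std. dev.)")
```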

  6. Error Estimation for Reduced Order Models of Dynamical systems

    SciTech Connect

    Homescu, C; Petzold, L R; Serban, R

    2003-12-16

    The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of the small sample statistical condition estimation method and of error estimation using the adjoint method. More importantly, the proposed approach allows the assessment of so-called regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. This question is particularly important for applications in which reduced models are used not just to approximate the solution to the system that provided the data used in constructing the reduced model, but rather to approximate the solution of systems perturbed from the original one. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.

  7. Sampling errors in satellite estimates of tropical rain

    NASA Technical Reports Server (NTRS)

    Mcconnell, Alan; North, Gerald R.

    1987-01-01

    The GATE rainfall data set is used in a statistical study to estimate the sampling errors that might be expected for the type of snapshot sampling that a low earth-orbiting satellite makes. For averages over the entire 400-km square and for the duration of several weeks, strong evidence is found that sampling errors less than 10 percent can be expected in contributions from each of four rain rate categories which individually account for about one quarter of the total rain.

  8. Estimation of rod scale errors in geodetic leveling

    USGS Publications Warehouse

    Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

    1995-01-01

    Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

  10. CME Velocity and Acceleration Error Estimates Using the Bootstrap Method

    NASA Astrophysics Data System (ADS)

    Michalek, Grzegorz; Gopalswamy, Nat; Yashiro, Seiji

    2017-08-01

    The bootstrap method is used to determine errors of basic attributes of coronal mass ejections (CMEs) visually identified in images obtained by the Solar and Heliospheric Observatory (SOHO) mission's Large Angle and Spectrometric Coronagraph (LASCO) instruments. The basic parameters of CMEs are stored, among others, in a database known as the SOHO/LASCO CME catalog and are widely employed for many research studies. The basic attributes of CMEs (e.g. velocity and acceleration) are obtained from manually generated height-time plots. The subjective nature of manual measurements introduces random errors that are difficult to quantify. In many studies the impact of such measurement errors is overlooked. In this study we present a new possibility to estimate measurement errors in the basic attributes of CMEs. This approach is a computer-intensive method because it requires repeating the original data analysis procedure several times using replicate datasets. This is also commonly called the bootstrap method in the literature. We show that the bootstrap approach can be used to estimate the errors of the basic attributes of CMEs having moderately large numbers of height-time measurements. The velocity errors are in the vast majority of cases small and depend mostly on the number of height-time points measured for a particular event. In the case of acceleration, the errors are significant, and for more than half of all CMEs, they are larger than the acceleration itself.
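
    A minimal sketch of this kind of bootstrap is shown below: resample the height-time points of one event with replacement, refit a second-order polynomial each time, and take the spread of the fitted velocity and acceleration as their errors. The synthetic height-time profile, noise level and number of replicates are illustrative assumptions, not catalog data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical height-time measurements for one CME (time in s, height in km).
t = np.linspace(0.0, 6000.0, 15)
h = 2.0e5 + 450.0 * t + 0.5 * 2.0e-3 * t**2 + rng.normal(0.0, 5.0e3, t.size)

def fit_kinematics(t, h):
    c2, c1, _ = np.polyfit(t, h, 2)   # h = c2*t^2 + c1*t + c0
    return c1, 2.0 * c2               # velocity at t=0 (km/s), acceleration (km/s^2)

v0, a0 = fit_kinematics(t, h)

# Bootstrap: resample the height-time points with replacement and refit.
B = 2000
samples = np.array([fit_kinematics(t[idx], h[idx])
                    for idx in rng.integers(0, t.size, (B, t.size))])
v_err, a_err = samples.std(axis=0, ddof=1)

print(f"velocity     = {v0:7.1f} +/- {v_err:5.1f} km/s")
print(f"acceleration = {a0 * 1e3:7.3f} +/- {a_err * 1e3:5.3f} m/s^2")
```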

  11. Verification of unfold error estimates in the unfold operator code

    SciTech Connect

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.

  12. Verification of unfold error estimates in the unfold operator code

    NASA Astrophysics Data System (ADS)

    Fehl, D. L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
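
    The comparison of a built-in, error-matrix estimate against a Monte Carlo estimate can be mimicked with a linear toy unfold: propagate a 5% data covariance through the pseudo-inverse of a response matrix, and compare with the spread of unfolds of Gaussian-perturbed replicate data sets. The response matrix and stand-in spectrum below are invented for illustration; this is not the UFO code or its blackbody test problem.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy linear unfold problem: data = R @ spectrum, with overlapping responses.
n_spec, n_data = 6, 8
edges = np.linspace(0.0, 1.0, n_spec + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
R = np.exp(-((np.linspace(0.05, 0.95, n_data)[:, None] - centers[None, :]) / 0.25) ** 2)

spectrum_true = np.exp(-centers / 0.3)        # stand-in source spectrum
data_true = R @ spectrum_true
sigma_d = 0.05 * data_true                    # 5% imprecision, as in the test problem

G = np.linalg.pinv(R)                         # unfold operator (pseudo-inverse)

# Built-in estimate: propagate the data covariance through the unfold operator.
cov_builtin = G @ np.diag(sigma_d**2) @ G.T
err_builtin = np.sqrt(np.diag(cov_builtin))

# Monte Carlo estimate: unfold 100 Gaussian-perturbed replicate data sets.
replicates = data_true + rng.normal(0.0, sigma_d, size=(100, n_data))
unfolds = replicates @ G.T
err_mc = unfolds.std(axis=0, ddof=1)

print("error-matrix:", np.round(err_builtin, 4))
print("Monte Carlo :", np.round(err_mc, 4))
```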

  13. PDV Uncertainty Estimation & Methods Comparison

    SciTech Connect

    Machorro, E.

    2011-11-01

    Several methods are presented for estimating the rapidly changing instantaneous frequency of a time varying signal that is contaminated by measurement noise. Useful a posteriori error estimates for several methods are verified numerically through Monte Carlo simulation. However, given the sampling rates of modern digitizers, sub-nanosecond variations in velocity are shown to be reliably measurable in most (but not all) cases. Results support the hypothesis that in many PDV regimes of interest, sub-nanosecond resolution can be achieved.

  14. Error estimation and adaptivity in Navier-Stokes incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, J.; Zhu, J. Z.; Szmelter, J.; Zienkiewicz, O. C.

    1990-07-01

    An adaptive remeshing procedure for solving Navier-Stokes incompressible fluid flow problems is presented in this paper. This procedure has been implemented using the error estimator developed by Zienkiewicz and Zhu (1987, 1989) and a semi-implicit time-marching scheme for Navier-Stokes flow problems (Zienkiewicz et al. 1990). Numerical examples are presented, showing that the error estimation and adaptive procedure are capable of monitoring the flow field, updating the mesh when necessary, and providing nearly optimal meshes throughout the calculation, thus making the solution reliable and the computation economical and efficient.

  15. On Evaluation of Recharge Model Uncertainty: a Priori and a Posteriori

    SciTech Connect

    Ming Ye; Karl Pohlmann; Jenny Chapman; David Shafer

    2006-01-30

    Hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Hydrologic analyses typically rely on a single conceptual-mathematical model, which ignores conceptual model uncertainty and may result in bias in predictions and under-estimation of predictive uncertainty. This study assesses the conceptual model uncertainty residing in five recharge models developed to date by different researchers, based on different theories, for the Nevada and Death Valley area, CA. A recently developed statistical method, Maximum Likelihood Bayesian Model Averaging (MLBMA), is utilized for this analysis. In a Bayesian framework, the recharge model uncertainty is assessed, a priori, using expert judgments collected through an expert elicitation in the form of prior probabilities of the models. The uncertainty is then evaluated, a posteriori, by updating the prior probabilities to estimate posterior model probability. The updating is conducted through maximum likelihood inverse modeling by calibrating the Death Valley Regional Flow System (DVRFS) model corresponding to each recharge model against observations of head and flow. Calibration results of DVRFS for the five recharge models are used to estimate three information criteria (AIC, BIC, and KIC) used to rank and discriminate these models. Posterior probabilities of the five recharge models, evaluated using KIC, are used as weights to average head predictions, which gives the posterior mean and variance. The posterior quantities incorporate both parametric and conceptual model uncertainties.
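
    The averaging step of such an analysis can be sketched compactly: convert each model's information criterion (KIC here) together with its elicited prior into a posterior weight, then form the model-averaged mean and total (within-model plus between-model) variance of a prediction. The KIC values, priors and per-model predictions below are made-up numbers, not results from the DVRFS calibrations.

```python
import numpy as np

# Hypothetical calibration results for five recharge models.
kic = np.array([412.3, 415.1, 409.8, 420.6, 413.4])        # information criterion values
prior = np.array([0.25, 0.20, 0.20, 0.15, 0.20])           # elicited prior probabilities
pred_mean = np.array([730.0, 712.0, 745.0, 690.0, 725.0])  # per-model head prediction
pred_var = np.array([15.0, 18.0, 12.0, 25.0, 16.0]) ** 2   # per-model prediction variance

# Posterior model probability: p_k proportional to prior_k * exp(-0.5 * delta_KIC_k).
delta = kic - kic.min()
w = prior * np.exp(-0.5 * delta)
w /= w.sum()

# Model-averaged mean and total variance (within-model + between-model terms).
mean_avg = np.sum(w * pred_mean)
var_avg = np.sum(w * (pred_var + (pred_mean - mean_avg) ** 2))

print("posterior model probabilities:", np.round(w, 3))
print(f"averaged head prediction: {mean_avg:.1f} +/- {np.sqrt(var_avg):.1f}")
```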

  16. Application of variance components estimation to calibrate geoid error models.

    PubMed

    Guo, Dong-Mei; Xu, Hou-Ze

    2015-01-01

    The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem has been presented in an earlier work. This formulation allows one to directly employ errors-in-variables models which completely describe the covariance matrices of the observables. However, an important question, namely what accuracy level can be achieved, has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models used in the adjustment, which in turn motivates improving the stochastic models of the measurement noise. Therefore the issue of determining the stochastic model of the observables in the combined adjustment with heterogeneous height types is a main focus of this paper. Firstly, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal heights, orthometric heights and the gravimetric geoid. Specifically, the iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each of the heterogeneous observations. Secondly, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables as a priori covariance matrices, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating the geoid error model.

  17. Estimating errors in IceBridge freeboard at ICESat Scales

    NASA Astrophysics Data System (ADS)

    Prado, D. W.; Xie, H.; Ackley, S. F.; Wang, X.

    2014-12-01

    The Airborne Topographic Mapper (ATM) system flown on NASA Operation IceBridge allows for estimation of sea ice thickness from surface elevations in the Bellingshausen-Amundsen Seas. The estimation of total freeboard depends on the accuracy of local sea level estimates and on the footprint size. We used the high density of ATM L1B (~1 m footprint) observations at varying spatial resolutions to assess the errors associated with averaging over larger footprints and with the deviation of local sea level from the WGS-84 geoid over longer segment lengths. The ATM data sets allow for a comparison between IceBridge (2009-2014) and ICESat (2003-2009) derived freeboards by using the ATM L2 (~70 m footprint) data, which are similar in footprint to ICESat. While the average freeboard estimates for the L2 data in 2009 underestimate total freeboard by only 5 cm at 5 km segment lengths, the error increases to 49 cm at the 50 km segment lengths typical of ICESat analyses. Since the error in freeboard estimation greatly increases at the segment lengths used for ICESat analyses, some caution may be required in comparing ICESat thickness estimates with later IceBridge estimates over the same region.

  18. Analytical formula for three points sinusoidal signals amplitude estimation errors

    NASA Astrophysics Data System (ADS)

    Nicolae Vizireanu, Dragos; Viorica Halunga, Simona

    2012-01-01

    In this note, we show that the amplitude estimation of sinusoidal signals proposed in Wu and Hong [Wu, S.T., and Hong, J.L. (2010), 'Five-point Amplitude Estimation of Sinusoidal Signals: With Application to LVDT Signal Conditioning', IEEE Transactions on Instrumentation and Measurement, 59, 623-630] is a particular case of Vizireanu and Halunga [Vizireanu, D.N, and Halunga, S.V. (2011), 'Single Sine Wave Parameters Estimation Method Based on Four Equally Spaced Samples', International Journal of Electronics, 98(7), pp. 941-948]. An analytical formula for amplitude estimation errors as effects of sampling period deviation is obtained.

  19. Error propagation and scaling for tropical forest biomass estimates.

    PubMed Central

    Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando

    2004-01-01

    The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10⁴ m²) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093

  20. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
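
    The propagation step described above is a first-order Taylor approximation; a generic numerical version is sketched below, in which the covariance of a derived quantity is J Cov(x) J^T with J the Jacobian at the measured values. The function f is a placeholder standing in for the linearized trilateration equations of the paper.

      # First-order (Taylor) error propagation through a generic function f(x).
      # f below is a placeholder: the distance between two estimated 2-D points.
      import numpy as np

      def f(x):
          return np.array([np.hypot(x[2] - x[0], x[3] - x[1])])

      def propagate(f, x, cov_x, h=1e-6):
          y0 = f(x)
          J = np.empty((y0.size, x.size))
          for j in range(x.size):                 # numerical Jacobian, column by column
              dx = np.zeros_like(x)
              dx[j] = h
              J[:, j] = (f(x + dx) - y0) / h
          return y0, J @ cov_x @ J.T

      x = np.array([0.0, 0.0, 3.0, 4.0])          # measured coordinates
      cov_x = np.diag([0.05**2] * 4)              # 5 cm standard deviation each
      y, cov_y = propagate(f, x, cov_x)
      print(y, np.sqrt(np.diag(cov_y)))           # distance ~5 m, sigma ~0.07 m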

  1. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    PubMed

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.
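
    The core of the correction is that the reported error becomes a weighted mean of the cross-validated errors over all tuning parameter values rather than the smallest one; the weighting rule below is a plain illustration of that idea, not the authors' estimator.

      # Smooth alternative to "report the best tuning value only": a convex
      # combination of the cross-validated errors (illustrative weighting rule).
      import numpy as np

      def corrected_error(cv_errors, temperature=0.05):
          """cv_errors: cross-validated error rates, one per tuning parameter value."""
          cv_errors = np.asarray(cv_errors, dtype=float)
          w = np.exp(-cv_errors / temperature)     # better tuning values get more weight
          w /= w.sum()
          return float(np.dot(w, cv_errors))       # bounded by the minimum and the plain mean

      errs = [0.18, 0.21, 0.25, 0.31]              # errors for four tuning values
      print(min(errs), corrected_error(errs), float(np.mean(errs)))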

  2. Real-Time Estimation Of Aiming Error Of Spinning Antenna

    NASA Technical Reports Server (NTRS)

    Dolinsky, Shlomo

    1992-01-01

    Spinning-spacecraft dynamics and amplitude variations in communications links studied from received-signal fluctuations. Mathematical model and associated analysis procedure provide real-time estimates of aiming error of remote rotating transmitting antenna radiating constant power in narrow, pencillike beam from spinning platform, and current amplitude of received signal. Estimates useful in analyzing and enhancing calibration of communication system, and in analyzing complicated dynamic effects in spinning platform and antenna-aiming mechanism.

  3. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and
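
    The two-component error decomposition described above (measurement noise plus the effect of errors in unadjusted parameters) corresponds to a standard "consider covariance" analysis; the sketch below uses random placeholder partial-derivative matrices rather than an actual orbit-determination setup.

      # Least-squares covariance plus a consider-parameter contribution.
      import numpy as np

      rng = np.random.default_rng(1)
      m, n_adj, n_con = 50, 3, 2
      A = rng.standard_normal((m, n_adj))      # partials w.r.t. adjusted parameters
      B = rng.standard_normal((m, n_con))      # partials w.r.t. unadjusted parameters
      W = np.eye(m) / 0.02**2                  # inverse measurement-noise covariance
      C_con = np.diag([0.1**2, 0.05**2])       # assumed errors of unadjusted parameters

      P_noise = np.linalg.inv(A.T @ W @ A)     # component due to measurement noise
      S = P_noise @ A.T @ W @ B                # sensitivity to unadjusted parameters
      P_consider = S @ C_con @ S.T             # component due to their assumed errors

      print("noise-only sigmas   :", np.sqrt(np.diag(P_noise)))
      print("with consider sigmas:", np.sqrt(np.diag(P_noise + P_consider)))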

  5. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  6. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  7. Insights Into the Robustness of Minimum Error Entropy Estimation.

    PubMed

    Chen, Badong; Xing, Lei; Xu, Bin; Zhao, Haiquan; Principe, Jose C

    2016-12-22

    The minimum error entropy (MEE) is an important and highly effective optimization criterion in information theoretic learning (ITL). For regression problems, MEE aims at minimizing the entropy of the prediction error such that the estimated model preserves the information of the data generating system as much as possible. In many real world applications, the MEE estimator can significantly outperform the well-known minimum mean square error (MMSE) estimator and shows strong robustness to noise, especially when data are contaminated by non-Gaussian (multimodal, heavy-tailed, discrete-valued, and so on) noise. In this brief, we present some theoretical results on the robustness of MEE. For a one-parameter linear errors-in-variables (EIV) model and under some conditions, we derive a region that contains the MEE solution, which suggests that the MEE estimate can be very close to the true value of the unknown parameter even in the presence of arbitrarily large outliers in both input and output variables. Theoretical prediction is verified by an illustrative example.

  9. Uplink channel estimation error for large scale MIMO system

    NASA Astrophysics Data System (ADS)

    Albdran, Saleh; Alshammari, Ahmad; Matin, Mohammad

    2016-09-01

    The high demand on wireless networks and the need for higher data rates motivate the development of new technologies. Recently, the idea of using large-scale MIMO systems has attracted great attention from researchers due to its high spectral and energy efficiency. In this paper, we analyze the uplink (UL) channel estimation error using a large number of antennas at the base station, where the UL channel is estimated from a predefined pilot signal. By comparing the known UL pilot signal with the received UL signal, we obtain a realization of the channel. We consider a single-cell scenario, in which the effect of inter-cell interference is eliminated, for the sake of a simple approach. While the number of antennas at the base station is very large, the user terminal has a single antenna. Two models are used to generate the channel covariance matrix: the one-ring model and the exponential correlation model. Figures of channel estimation error are generated in which the mean square error (MSE) per antenna is presented as a function of the signal-to-noise ratio (SNR). The simulation results show that the higher the SNR, the better the performance. Furthermore, the effect of the pilot length on the channel estimation error is studied using the two covariance models. In both cases, increasing the pilot length improves the estimation accuracy.
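
    For the exponential correlation model mentioned above, the pilot-based LMMSE channel estimate has a closed-form error covariance; the sketch below evaluates the resulting MSE per antenna as a function of SNR and pilot length. It is a textbook LMMSE calculation under assumed parameter values, not the paper's simulation setup.

      # MSE per antenna of the LMMSE pilot-based channel estimate, single-antenna
      # user, M base-station antennas, exponential correlation covariance model.
      import numpy as np

      def mse_per_antenna(M, r, snr_db, tau):
          idx = np.arange(M)
          R = r ** np.abs(np.subtract.outer(idx, idx))    # exponential correlation model
          noise = 10 ** (-snr_db / 10) / tau              # effective noise after pilot averaging
          C_err = R - R @ np.linalg.solve(R + noise * np.eye(M), R)
          return np.trace(C_err) / M

      M, r = 64, 0.7
      for snr_db in (0, 10, 20):
          print(snr_db, "dB:", [round(float(mse_per_antenna(M, r, snr_db, tau)), 4)
                                for tau in (1, 4, 16)])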

  10. Condition and Error Estimates in Numerical Matrix Computations

    SciTech Connect

    Konstantinov, M. M.; Petkov, P. H.

    2008-10-30

    This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.
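
    The simplest estimate of this kind bounds the relative error of a computed solution of A x = b by the condition number of A times the unit round-off; the matrix below is an arbitrary nearly singular example.

      # First-order accuracy estimate from the condition number.
      import numpy as np

      A = np.array([[1.0, 1.0], [1.0, 1.0001]])
      b = np.array([2.0, 2.0001])
      x = np.linalg.solve(A, b)
      print("x =", x, " cond(A) =", np.linalg.cond(A))
      print("relative error bound ~ cond(A)*eps =", np.linalg.cond(A) * np.finfo(float).eps)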

  11. DEB: definite error bounded tangent estimator for digital curves.

    PubMed

    Prasad, Dilip K; Leung, Maylor K H; Quek, Chai; Brown, Michael S

    2014-10-01

    We propose a simple and fast method for tangent estimation of digital curves. This geometric-based method uses a small local region for tangent estimation and has a definite upper bound error for continuous as well as digital conics, i.e., circles, ellipses, parabolas, and hyperbolas. Explicit expressions of the upper bounds for continuous and digitized curves are derived, which can also be applied to nonconic curves. Our approach is benchmarked against 72 contemporary tangent estimation methods and demonstrates good performance for conic, nonconic, and noisy curves. In addition, we demonstrate a good multigrid and isotropic performance and low computational complexity of O(1) and better performance than most methods in terms of maximum and average errors in tangent computation for a large variety of digital curves.

  12. Error estimates and specification parameters for functional renormalization

    SciTech Connect

    Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof

    2013-07-15

    We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.

  13. Climate spectrum estimation in the presence of timescale errors

    NASA Astrophysics Data System (ADS)

    Mudelsee, M.; Scholz, D.; Röthlisberger, R.; Fleitmann, D.; Mangini, A.; Wolff, E. W.

    2009-02-01

    We introduce an algorithm (called REDFITmc2) for spectrum estimation in the presence of timescale errors. It is based on the Lomb-Scargle periodogram for unevenly spaced time series, in combination with the Welch's Overlapped Segment Averaging procedure, bootstrap bias correction and persistence estimation. The timescale errors are modelled parametrically and included in the simulations for determining (1) the upper levels of the spectrum of the red-noise AR(1) alternative and (2) the uncertainty of the frequency of a spectral peak. Application of REDFITmc2 to ice core and stalagmite records of palaeoclimate allowed a more realistic evaluation of spectral peaks than when ignoring this source of uncertainty. The results support qualitatively the intuition that stronger effects on the spectrum estimate (decreased detectability and increased frequency uncertainty) occur for higher frequencies. The surplus information brought by algorithm REDFITmc2 is that those effects are quantified. Regarding timescale construction, not only the fixpoints, dating errors and the functional form of the age-depth model play a role. Also the joint distribution of all time points (serial correlation, stratigraphic order) determines spectrum estimation.
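
    The timescale-error ingredient can be illustrated with a small Monte Carlo: the Lomb-Scargle periodogram of an unevenly sampled series is recomputed many times with the time axis perturbed by an assumed dating error, and the spread of the periodograms reflects the timescale-induced uncertainty. The series, dating error, and frequency grid below are illustrative, and the sketch omits the segment averaging, bias correction, and AR(1) testing of REDFITmc2.

      # Lomb-Scargle spectra under perturbed timescales (illustrative sketch).
      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(2)
      t = np.sort(rng.uniform(0, 1000, 300))              # uneven ages (years)
      y = np.sin(2 * np.pi * t / 100) + 0.5 * rng.standard_normal(t.size)
      freqs = 2 * np.pi * np.linspace(0.001, 0.05, 200)   # angular frequencies

      sigma_age = 5.0                                      # assumed dating error (years)
      spectra = np.array([lombscargle(t + sigma_age * rng.standard_normal(t.size),
                                      y - y.mean(), freqs)
                          for _ in range(200)])

      m = spectra.mean(axis=0)
      k = m.argmax()
      lo, hi = np.percentile(spectra[:, k], [5, 95])
      print(f"peak at ~{freqs[k] / (2 * np.pi):.4f} cycles/yr, power at peak: {lo:.1f} to {hi:.1f}")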

  14. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  15. GPS/DR Error Estimation for Autonomous Vehicle Localization

    PubMed Central

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-01-01

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997

  16. Divergent estimation error in portfolio optimization and in linear regression

    NASA Astrophysics Data System (ADS)

    Kondor, I.; Varga-Haszonits, I.

    2008-08-01

    The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges for a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition, it is accompanied by a number of critical phenomena, and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.

  17. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    PubMed

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  18. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  20. Error Covariance Estimation and Representation for Mesoscale Data Assimilation

    DTIC Science & Technology

    2003-09-30

    Error Covariance Estimation and Representation for Mesoscale Data Assimilation. Dr. Qin Xu, CIMMS, University of Oklahoma, 100 E. Boyd (Rm 1110), Norman, OK 73019. Calculations are performed by project-supported research scientists at CIMMS, the University of Oklahoma. The required innovation data are collected by project...

  1. Error Covariance Estimation and Representation for Mesoscale Data Assimilation

    DTIC Science & Technology

    2005-09-30

    Error Covariance Estimation and Representation for Mesoscale Data Assimilation. Dr. Qin Xu, CIMMS, University of Oklahoma, 100 E. Boyd (Rm 1110). Calculations are performed by project-supported research scientists at CIMMS, the University of Oklahoma. The required innovation data were collected by Drs. Edward Barker and...

  2. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons

    PubMed Central

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2012-01-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379

  3. An Analysis of Estimating Errors on Government Contracts.

    DTIC Science & Technology

    2014-09-26

    of the cost of each project, using the same plans and specifications as the bidders. This estimate is opened at the same time the rest of the bids are... significantly reduces the variability of estimating error. Time was measured by setting 1 January 1982 as zero time; thus, a data value of 1.2 represents 14 March... the data set. These projects are 20-40 times the magnitude of the mean project size and

  4. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  5. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model

    PubMed Central

    Zollanvari, Amin; Dougherty, Edward R.

    2014-01-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures. PMID:24729636

  6. Model error estimation and correction by solving an inverse problem

    NASA Astrophysics Data System (ADS)

    Xue, Haile

    2016-04-01

    Nowadays, weather forecasts and climate predictions rely increasingly on numerical models. Yet errors inevitably exist in models due to imperfect numerics and parameterizations. From a practical point of view, model correction is an efficient strategy. Despite the varying complexity of forecast error correction algorithms, the general idea is to estimate the forecast errors by treating the NWP as a direct problem. Chou (1974) suggested an alternative view by considering the NWP as an inverse problem. The model error tendency term (ME) due to model deficiency is treated as an unknown term in the NWP model, which can be discretized into short intervals (for example, 6 hours) and considered as a constant or linear form in each interval. Given past re-analyses and the NWP model, the discretized MEs in the past intervals can be solved iteratively as a constant or linearly increasing tendency term in each interval. These MEs can then be used as online corrections. In this study, an iterative method for obtaining the MEs in past intervals is presented, and its convergence is confirmed with sets of experiments in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August (JA) 2009 and January-February (JF) 2010. These MEs are then used to obtain online model corrections based on the systematic errors of GRAPES-GFS for July 2009 and January 2010. The data sets for the initial conditions and sea surface temperature (SST) used in this study are both based on NCEP final (FNL) data. According to the iterative numerical experiments, the following key conclusions can be drawn: (1) batches of iteration tests indicated that the 6-hour forecast errors were reduced to 10% of their original value after 20 steps of iteration; (2) by offline comparison of the error corrections estimated by MEs with the mean forecast errors, the patterns of estimated errors were considered to agree well with those

  7. Ontology based log content extraction engine for a posteriori security control.

    PubMed

    Azkia, Hanieh; Cuppens-Boulahia, Nora; Cuppens, Frédéric; Coatrieux, Gouenou

    2012-01-01

    In a posteriori access control, users are accountable for actions they performed and must provide evidence, when required by some legal authorities for instance, to prove that these actions were legitimate. Generally, log files contain the needed data to achieve this goal. This logged data can be recorded in several formats; we consider here IHE-ATNA (Integrating the healthcare enterprise-Audit Trail and Node Authentication) as log format. The difficulty lies in extracting useful information regardless of the log format. A posteriori access control frameworks often include a log filtering engine that provides this extraction function. In this paper we define and enforce this function by building an IHE-ATNA based ontology model, which we query using SPARQL, and show how the a posteriori security controls are made effective and easier based on this function.

  8. Augmented GNSS differential corrections minimum mean square error estimation sensitivity to spatial correlation modeling errors.

    PubMed

    Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

    2014-06-11

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field, towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems in view of lower railway track equipments and maintenance costs, that is a priority to sustain the investments for modernizing the local and regional lines most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements are simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied on this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.

  9. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field, towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems in view of lower railway track equipments and maintenance costs, that is a priority to sustain the investments for modernizing the local and regional lines most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements are simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied on this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454

  10. CADNA: a library for estimating round-off error propagation

    NASA Astrophysics Data System (ADS)

    Jézéquel, Fabienne; Chesneaux, Jean-Marie

    2008-06-01

    The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.

    Program summary:
    Program title: CADNA
    Catalogue identifier: AEAT_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 53 420
    No. of bytes in distributed program, including test data, etc.: 566 495
    Distribution format: tar.gz
    Programming language: Fortran
    Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
    Operating system: LINUX, UNIX
    Classification: 4.14, 6.5, 20
    Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
    Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4] which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode generating different results each

  11. A posteriori information effects on culpability judgments from a cross-cultural perspective.

    PubMed

    Wan, Wendy W N; Chiu, Chi-Yue; Luk, Chung-Leung

    2005-10-01

    A posteriori information about the moral attributes of the victim of a crime can affect an observer's judgment on the culpability of the actor of the crime so that negative moral attributes of the victim will lead to a lower judgment of culpability. The authors found this effect of a posteriori information among 118 American and 123 Chinese participants, but the underlying mechanisms were different between the two cultural groups. The Americans considered the psychological state of the actor during the crime, whereas the Chinese considered the morality of the actor during the crime. The authors discussed these results in light of the respondents' implicit theories of morality.

  12. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u sub epsilon(x,t) is the solution of an approximate viscosity regularization, where epsilon greater than 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u sub epsilon, pointwise values of u and its derivatives can be recovered with an error as close to epsilon as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W(exp 1,infinity) energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an epsilon-uniform estimate on moments of the error u(sub epsilon) - u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  13. Close-range radar rainfall estimation and error analysis

    NASA Astrophysics Data System (ADS)

    van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

    2016-08-01

    Quantitative precipitation estimation (QPE) using ground-based weather radar is affected by many sources of error. The most important of these are (1) radar calibration, (2) ground clutter, (3) wet-radome attenuation, (4) rain-induced attenuation, (5) vertical variability in rain drop size distribution (DSD), (6) non-uniform beam filling and (7) variations in DSD. This study presents an attempt to separate and quantify these sources of error in flat terrain very close to the radar (1-2 km), where (4), (5) and (6) only play a minor role. Other important errors exist, like beam blockage, WLAN interferences and hail contamination and are briefly mentioned, but not considered in the analysis. A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm of precipitation in De Bilt, the Netherlands, is analyzed using radar, rain gauge and disdrometer data. Without any correction, it is found that the radar severely underestimates the total rain amount (by more than 50 %). The calibration of the radar receiver is operationally monitored by analyzing the received power from the sun. This turns out to cause a 1 dB underestimation. The operational clutter filter applied by KNMI is found to incorrectly identify precipitation as clutter, especially at near-zero Doppler velocities. An alternative simple clutter removal scheme using a clear sky clutter map improves the rainfall estimation slightly. To investigate the effect of wet-radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to 4 dB. Finally, a disdrometer is used to derive event and intra-event specific Z-R relations due to variations in the observed DSDs. Such variations may result in errors when applying the operational Marshall-Palmer Z-R relation. Correcting for all of these effects has a large positive impact on the radar-derived precipitation estimates and yields a good match between radar QPE and gauge
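
    The last correction step above hinges on the Z-R power law; a minimal version of that conversion is shown below with the standard Marshall-Palmer coefficients (a = 200, b = 1.6), which an event-specific disdrometer fit would replace.

      # Convert radar reflectivity (dBZ) to rain rate via Z = a * R^b.
      import numpy as np

      def rain_rate_mm_per_h(dBZ, a=200.0, b=1.6):
          Z = 10 ** (np.asarray(dBZ, float) / 10.0)   # reflectivity in mm^6 m^-3
          return (Z / a) ** (1.0 / b)                 # invert the power law

      for dBZ in (20, 30, 40):
          print(dBZ, "dBZ ->", round(float(rain_rate_mm_per_h(dBZ)), 2), "mm/h")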

  14. Segmenting pectoralis muscle on digital mammograms by a Markov random field-maximum a posteriori model

    PubMed Central

    Ge, Mei; Mainprize, James G.; Mawdsley, Gordon E.; Yaffe, Martin J.

    2014-01-01

    Abstract. Accurate and automatic segmentation of the pectoralis muscle is essential in many breast image processing procedures, for example, in the computation of volumetric breast density from digital mammograms. Its segmentation is a difficult task due to the heterogeneity of the region, neighborhood complexities, and shape variability. The segmentation is achieved by pixel classification through a Markov random field (MRF) image model. Using the image intensity feature as observable data and local spatial information as a priori, the posterior distribution is estimated in a stochastic process. With a variable potential component in the energy function, by the maximum a posteriori (MAP) estimate of the labeling image, given the image intensity feature which is assumed to follow a Gaussian distribution, we achieved convergence properties in an appropriate sense by Metropolis sampling the posterior distribution of the selected energy function. By proposing an adjustable spatial constraint, the MRF-MAP model is able to embody the shape requirement and provide the required flexibility for the model parameter fitting process. We demonstrate that accurate and robust segmentation can be achieved for the curving-triangle-shaped pectoralis muscle in the medio-lateral-oblique (MLO) view, and the semielliptic-shaped muscle in cranio-caudal (CC) view digital mammograms. The applicable mammograms can be either “For Processing” or “For Presentation” image formats. The algorithm was developed using 56 MLO-view and 79 CC-view FFDM “For Processing” images, and quantitatively evaluated against a random selection of 122 MLO-view and 173 CC-view FFDM images of both presentation intent types. PMID:26158068

  15. Error estimation and adaptivity for transport problems with uncertain parameters

    NASA Astrophysics Data System (ADS)

    Sahni, Onkar; Li, Jason; Oberai, Assad

    2016-11-01

    Stochastic partial differential equations (PDEs) with uncertain parameters and source terms arise in many transport problems. In this study, we develop and apply an adaptive approach based on the variational multiscale (VMS) formulation for discretizing stochastic PDEs. In this approach we employ finite elements in the physical domain and generalize polynomial chaos based spectral basis in the stochastic domain. We demonstrate our approach on non-trivial transport problems where the uncertain parameters are such that the advective and diffusive regimes are spanned in the stochastic domain. We show that the proposed method is effective as a local error estimator in quantifying the element-wise error and in driving adaptivity in the physical and stochastic domains. We will also indicate how this approach may be extended to the Navier-Stokes equations. NSF Award 1350454 (CAREER).

  16. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  18. Improved Soundings and Error Estimates using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in the minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

  19. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area-factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme renders a reasonable forecasting skill. For example, when using September-October-November SST to predict the next season December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
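
    The optimal weighting mentioned above can be illustrated in its simplest form: if the member errors are uncorrelated, weighting each member forecast inversely to its estimated mean square error minimizes the MSE of the combination. The numbers below are placeholders, not output of the CCA scheme.

      # Inverse-MSE weighting of ensemble members (illustrative values).
      import numpy as np

      def ensemble_forecast(members, mse):
          members = np.asarray(members, float)
          mse = np.asarray(mse, float)
          w = (1.0 / mse) / np.sum(1.0 / mse)
          return float(np.dot(w, members)), w

      members = [1.8, 2.3, 2.0]        # individual member forecasts (e.g. mm/day anomaly)
      mse = [0.5, 1.2, 0.8]            # estimated mean square error of each member
      forecast, weights = ensemble_forecast(members, mse)
      print(forecast, weights)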

  20. Comparative analysis of a-priori and a-posteriori dietary patterns using state-of-the-art classification algorithms: a case/case-control study.

    PubMed

    Kastorini, Christina-Maria; Papadakis, George; Milionis, Haralampos J; Kalantzi, Kallirroi; Puddu, Paolo-Emilio; Nikolaou, Vassilios; Vemmos, Konstantinos N; Goudevenos, John A; Panagiotakos, Demosthenes B

    2013-11-01

    To compare the accuracy of a-priori and a-posteriori dietary patterns in the prediction of acute coronary syndrome (ACS) and ischemic stroke. This is actually the first study to employ state-of-the-art classification methods for this purpose. During 2009-2010, 1000 participants were enrolled; 250 consecutive patients with a first ACS and 250 controls (60±12 years, 83% males), as well as 250 consecutive patients with a first stroke and 250 controls (75±9 years, 56% males). The controls were population-based and age-sex matched to the patients. The a-priori dietary patterns were derived from the validated MedDietScore, whereas the a-posteriori ones were extracted from principal components analysis. Both approaches were modeled using six classification algorithms: multiple logistic regression (MLR), naïve Bayes, decision trees, repeated incremental pruning to produce error reduction (RIPPER), artificial neural networks and support vector machines. The classification accuracy of the resulting models was evaluated using the C-statistic. For the ACS prediction, the C-statistic varied from 0.587 (RIPPER) to 0.807 (MLR) for the a-priori analysis, while for the a-posteriori one, it fluctuated between 0.583 (RIPPER) and 0.827 (MLR). For the stroke prediction, the C-statistic varied from 0.637 (RIPPER) to 0.767 (MLR) for the a-priori analysis, and from 0.617 (decision tree) to 0.780 (MLR) for the a-posteriori. Both dietary pattern approaches achieved equivalent classification accuracy over most classification algorithms. The choice, therefore, depends on the application at hand. Copyright © 2013 Elsevier B.V. All rights reserved.
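
    The C-statistic used for model comparison above is the area under the ROC curve of each classifier's predicted probabilities; a minimal evaluation of that kind is sketched below on synthetic stand-ins for the dietary-pattern features.

      # Fit a logistic regression and report its C-statistic (ROC AUC) on held-out data.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(3)
      X = rng.standard_normal((500, 5))                  # stand-in dietary pattern scores
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print(f"C-statistic = {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.3f}")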

  1. Verification of unfold error estimates in the UFO code

    SciTech Connect

    Fehl, D.L.; Biggs, F.

    1996-07-01

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
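
    A toy version of the Monte Carlo uncertainty estimate described above is sketched below: the data are perturbed with 5% Gaussian noise many times, each replicate is unfolded, and the spread of the unfolded spectra gives the error. A ridge-regularized least-squares unfold and a synthetic Gaussian response matrix stand in for the UFO algorithm and the real responses.

      # Monte Carlo estimate of unfold uncertainty with a stand-in unfold step.
      import numpy as np

      rng = np.random.default_rng(4)
      n_chan, n_bins = 12, 20
      E = np.linspace(1, 30, n_bins)                      # photon energy grid (keV)
      centers = np.linspace(3, 27, n_chan)[:, None]
      R = np.exp(-0.5 * ((E[None, :] - centers) / 4.0) ** 2)   # synthetic responses
      true_spec = E ** 2 * np.exp(-E / 10.0)              # blackbody-like test spectrum
      data0 = R @ true_spec

      def unfold(data, lam=1e-2):
          # ridge-regularized least squares standing in for the UFO unfold
          return np.linalg.solve(R.T @ R + lam * np.eye(n_bins), R.T @ data)

      reps = np.array([unfold(data0 * (1 + 0.05 * rng.standard_normal(n_chan)))
                       for _ in range(100)])
      print("per-bin unfold sigma (first 5 bins):", np.round(reps.std(axis=0)[:5], 3))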

  2. Bootstrap Standard Error Estimates in Dynamic Factor Analysis.

    PubMed

    Zhang, Guangjian; Browne, Michael W

    2010-05-28

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the interdependence of successive observations. Bootstrap methods can fill this need, however. The standard bootstrap of individual timepoints is not appropriate because it destroys their order in time and consequently gives incorrect standard error estimates. Two bootstrap procedures that are appropriate for dynamic factor analysis are described. The moving block bootstrap breaks down the original time series into blocks and draws samples of blocks instead of individual timepoints. A parametric bootstrap is essentially a Monte Carlo study in which the population parameters are taken to be estimates obtained from the available sample. These bootstrap procedures are demonstrated using 103 days of affective mood self-ratings from a pregnant woman, 90 days of personality self-ratings from a psychology freshman, and a simulation study.
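
    A minimal moving block bootstrap of the kind described above is sketched below, applied to the standard error of a lag-1 autocorrelation as a stand-in for the dynamic factor analysis parameters; the series, block length, and replicate count are illustrative.

      # Moving block bootstrap standard error for a time-series statistic.
      import numpy as np

      rng = np.random.default_rng(5)
      x = np.zeros(103)
      for t in range(1, x.size):                          # AR(1) toy "daily rating" series
          x[t] = 0.6 * x[t - 1] + rng.standard_normal()

      def lag1_corr(series):
          return np.corrcoef(series[:-1], series[1:])[0, 1]

      def moving_block_bootstrap_se(series, stat, block_len=10, n_boot=1000):
          n = series.size
          starts = np.arange(n - block_len + 1)
          n_blocks = int(np.ceil(n / block_len))
          stats = np.empty(n_boot)
          for i in range(n_boot):
              picks = rng.choice(starts, size=n_blocks, replace=True)
              resampled = np.concatenate([series[s:s + block_len] for s in picks])[:n]
              stats[i] = stat(resampled)
          return stats.std()

      print("lag-1 estimate:", round(float(lag1_corr(x)), 3),
            " block-bootstrap SE:", round(float(moving_block_bootstrap_se(x, lag1_corr)), 3))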

  3. Erasing Errors due to Alignment Ambiguity When Estimating Positive Selection

    PubMed Central

    Redelings, Benjamin

    2014-01-01

    Current estimates of diversifying positive selection rely on first having an accurate multiple sequence alignment. Simulation studies have shown that under biologically plausible conditions, relying on a single estimate of the alignment from commonly used alignment software can lead to unacceptably high false-positive rates in detecting diversifying positive selection. We present a novel statistical method that eliminates excess false positives resulting from alignment error by jointly estimating the degree of positive selection and the alignment under an evolutionary model. Our model treats both substitutions and insertions/deletions as sequence changes on a tree and allows site heterogeneity in the substitution process. We conduct inference starting from unaligned sequence data by integrating over all alignments. This approach naturally accounts for ambiguous alignments without requiring ambiguously aligned sites to be identified and removed prior to analysis. We take a Bayesian approach and conduct inference using Markov chain Monte Carlo to integrate over all alignments on a fixed evolutionary tree topology. We introduce a Bayesian version of the branch-site test and assess the evidence for positive selection using Bayes factors. We compare two models of differing dimensionality using a simple alternative to reversible-jump methods. We also describe a more accurate method of estimating the Bayes factor using Rao-Blackwellization. We then show using simulated data that jointly estimating the alignment and the presence of positive selection solves the problem with excessive false positives from erroneous alignments and has nearly the same power to detect positive selection as when the true alignment is known. We also show that samples taken from the posterior alignment distribution using the software BAli-Phy have substantially lower alignment error compared with MUSCLE, MAFFT, PRANK, and FSA alignments. PMID:24866534

  4. Error Rate Estimation in Quantum Key Distribution with Finite Resources

    NASA Astrophysics Data System (ADS)

    Lu, Zhao; Shi, Jian-Hong; Li, Feng-Guang

    2017-04-01

    The goal of quantum key distribution (QKD) is to generate a secret key shared between two distant players, Alice and Bob. We present the connection between the sampling rate and the probability of an erroneous judgment when the error rate is estimated by random sampling, and propose a method to compute the optimal sampling rate, which maximizes the final secure key generation rate. These results can be applied to choose the optimal sampling rate and improve the performance of a QKD system with finite resources. Supported by the National Natural Science Foundation of China under Grant Nos. U1304613 and 11204379.

  5. Algorithm for correcting the keratometric estimation error in normal eyes.

    PubMed

    Camps, Vicente J; Pinero Llorens, David P; de Fez, Dolores; Coloma, Pilar; Caballero, Maria Teresa; Garcia, Celia; Miret, Juan J

    2012-02-01

    To obtain an accurate algorithm for calculating the keratometric index that minimizes the errors in the calculation of corneal power assuming only a single corneal surface in the range of corneal curvatures of the normal population. Corneal power was calculated by using the classical keratometric index and also by using the Gaussian equation. Differences between types of calculation of corneal power were determined and modeled by regression analysis. We proposed two options for the selection of the most appropriate keratometric index (n(k)) value for each specific case. First was the use of specific linear equations (depending on the ratio of the anterior to the posterior curvature, k ratio) according to the value of the central radius of curvature of the anterior corneal surface (r(1c)) in 0.1 mm steps and the theoretical eye model considered. The second was the use of a general simplified equation only requiring r(1c) (Gullstrand eye model, n(k) = -0.0064286r(1c) + 1.37688; Le Grand eye model, n(k) = -0.0063804r(1c) + 1.37806). The generalization of the keratometric index (n(k)) value is not an appropriate approximation for the estimation of the corneal power and it can lead to significant errors. We proposed a new algorithm depending on r(1c), with a maximal associated error in the calculation of the corneal power of 0.5 D and without requiring knowledge of the posterior corneal curvature.
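
    For a rough numerical illustration, the sketch below evaluates the two simplified n(k) equations quoted above and converts them to corneal power with the usual single-surface formula P = (n(k) - 1)/r(1c), with r(1c) expressed in metres; the radius value and the classical index of 1.3375 are only example inputs.

        # Corneal power from the anterior radius r1c using (a) the classical keratometric index
        # 1.3375 and (b) the adjusted n_k equations quoted in the abstract. Example values only.
        def corneal_power(r1c_mm, n_k):
            return (n_k - 1.0) / (r1c_mm * 1e-3)        # diopters

        def nk_gullstrand(r1c_mm):
            return -0.0064286 * r1c_mm + 1.37688        # Gullstrand eye model (from the abstract)

        def nk_legrand(r1c_mm):
            return -0.0063804 * r1c_mm + 1.37806        # Le Grand eye model (from the abstract)

        r1c = 7.8   # mm, a typical anterior corneal radius
        print("classical (n = 1.3375):", round(corneal_power(r1c, 1.3375), 2), "D")
        print("adjusted (Gullstrand): ", round(corneal_power(r1c, nk_gullstrand(r1c)), 2), "D")
        print("adjusted (Le Grand):   ", round(corneal_power(r1c, nk_legrand(r1c)), 2), "D")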

  6. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.

  7. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2013-06-24

    The report excerpt presents Algorithm 1, an iterative multirate Galerkin finite element method that advances the solution over time steps n = 1, ..., N with inner iterations m = 1, ..., M_n on each step; over a single time step, Lemma 4.2 is combined with Lemma 4.1 to obtain the resulting error representation.

  8. A Posteriori Error Bounds for Two-Point Boundary Value Problems.

    DTIC Science & Technology

    1978-01-01

    Numerical solutions were obtained by a collocation method; the subroutine LOBATO of deBoor and Weiss [4] was used to obtain them, and the solutions were evaluated at 101 equally spaced points.

  9. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    This work is nearing completion and will be submitted in Summer 2012 (with W. Newton): analysis of elliptic problems on domains.

  10. Models and error analyses in urban air quality estimation

    NASA Technical Reports Server (NTRS)

    Englar, T., Jr.; Diamante, J. M.; Jazwinski, A. H.

    1976-01-01

    Estimation theory has been applied to a wide range of aerospace problems. Application of this expertise outside the aerospace field has been extremely limited, however. This paper describes the use of covariance error analysis techniques in evaluating the accuracy of pollution estimates obtained from a variety of concentration measuring devices. It is shown how existing software developed for aerospace applications can be applied to the estimation of pollution through the processing of measurement types involving a range of spatial and temporal responses. The modeling of pollutant concentration by meandering Gaussian plumes is described in some detail. Time averaged measurements are associated with a model of the average plume, using some of the same state parameters and thus avoiding the problem of state memory. The covariance analysis has been implemented using existing batch estimation software. This usually involves problems in handling dynamic noise; however, the white dynamic noise has been replaced by a band-limited process which can be easily accommodated by the software.

  12. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid
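
    The following small Monte Carlo sketch illustrates the kind of analysis described above for a single three-point estimator under uncorrelated head-measurement error; the well layout, true gradient, and error level are invented, and the estimator simply fits a plane through the three measured heads.

        # Monte Carlo sketch of a three-point hydraulic gradient estimator with head-measurement error.
        import numpy as np

        rng = np.random.default_rng(2)
        wells = np.array([[0.0, 0.0], [100.0, 0.0], [40.0, 80.0]])   # (x, y) in metres
        true_grad = np.array([-0.002, -0.001])                        # dh/dx, dh/dy of the head surface
        h_true = 50.0 + wells @ true_grad                             # heads on a planar surface
        sigma_h = 0.01                                                # head-measurement error (m)

        def three_point_gradient(heads):
            # Fit the plane h = a + b*x + c*y exactly through the three (x, y, h) points
            A = np.column_stack([np.ones(3), wells])
            a, b, c = np.linalg.solve(A, heads)
            return np.hypot(b, c), np.degrees(np.arctan2(c, b))      # magnitude, orientation

        samples = np.array([three_point_gradient(h_true + sigma_h * rng.standard_normal(3))
                            for _ in range(5000)])
        print("magnitude:  mean %.5f, sd %.5f" % (samples[:, 0].mean(), samples[:, 0].std()))
        print("orientation: mean %.1f deg, sd %.1f deg" % (samples[:, 1].mean(), samples[:, 1].std()))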

  13. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
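
    A minimal equation-error sketch in this spirit is shown below: a measured state derivative (obtained by numerically differentiating a noisy signal) is regressed on states and controls by ordinary least squares. The first-order model stands in for the F-16 simulation, and the noise in the regressors illustrates the bias issue noted above.

        # Equation-error (least-squares) parameter estimation sketch with a simulated first-order model.
        import numpy as np

        rng = np.random.default_rng(3)
        dt, n = 0.02, 2000
        t = np.arange(n) * dt
        u = np.sin(0.5 * t)                         # elevator-like input

        # Simulate qdot = a*q + b*u with a = -2.0, b = 5.0
        a_true, b_true = -2.0, 5.0
        q = np.zeros(n)
        for k in range(n - 1):
            q[k + 1] = q[k] + dt * (a_true * q[k] + b_true * u[k])

        q_meas = q + 0.01 * rng.standard_normal(n)
        qdot_meas = np.gradient(q_meas, dt)         # numerically differentiated, hence noisy

        # Equation error: solve qdot = X @ [a, b] in the least-squares sense
        X = np.column_stack([q_meas, u])
        theta, *_ = np.linalg.lstsq(X, qdot_meas, rcond=None)
        resid = qdot_meas - X @ theta
        cov = np.linalg.inv(X.T @ X) * resid.var(ddof=2)   # nominal parameter covariance
        print("a, b estimates:", theta, "std errors:", np.sqrt(np.diag(cov)))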

  14. CO2 flux estimation errors associated with moist atmospheric processes

    NASA Astrophysics Data System (ADS)

    Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

    2012-04-01

    Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between moist transport, satellite CO2 retrievals, and source/sink inversion has not yet been established. Here we examine the effect of moist processes on (1) synoptic CO2 transport by Version-4 and Version-5 NASA Goddard Earth Observing System Data Assimilation System (NASA-DAS) meteorological analyses, and (2) source/sink inversion. We find that synoptic transport processes, such as fronts and dry/moist conveyors, feed off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to continental scale source/sink estimation errors of up to 0.25 PgC yr-1 in northern mid-latitudes. Second, moist processes are represented differently in GEOS-4 and GEOS-5, leading to differences in vertical CO2 gradients, moist poleward and dry equatorward CO2 transport, and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified, causing source/sink estimation errors of up to 0.55 PgC yr-1 in northern mid-latitudes. These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

  15. Error estimation for CFD aeroheating prediction under rarefied flow condition

    NASA Astrophysics Data System (ADS)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method for a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ɛ is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression in the Burnett equation. DSMC simulation for hypersonic flow over a cylinder in the transitional regime is performed to test the performance of parameter ɛ, compared with two other parameters, Knρ and Ma·Knρ.

  16. Transition State Theory: Variational Formulation, Dynamical Corrections, and Error Estimates

    NASA Astrophysics Data System (ADS)

    vanden-Eijnden, Eric

    2009-03-01

    Transition state theory (TST) is discussed from an original viewpoint: it is shown how to compute exactly the mean frequency of transition between two predefined sets which either partition phase space (as in TST) or are taken to be well-separated metastable sets corresponding to long-lived conformation states (as necessary to obtain the actual transition rate constants between these states). Exact and approximate criteria for the optimal TST dividing surface with minimum recrossing rate are derived. Some issues about the definition and meaning of the free energy in the context of TST are also discussed. Finally, precise error estimates for the numerical procedure to evaluate the transmission coefficient κS of the TST dividing surface are given, and it is shown that the relative error on κS scales as 1/√κS when κS is small. This implies that dynamical corrections to the TST rate constant can be computed efficiently if and only if the TST dividing surface has a transmission coefficient κS which is not too small. In particular, the TST dividing surface must be optimized upon (for otherwise κS is generally very small), but this may not be sufficient to make the procedure numerically efficient (because the optimal dividing surface has maximum κS, but this coefficient may still be very small).

  17. Sampling Errors in Rainfall Estimates by Multiple Satellites.

    NASA Astrophysics Data System (ADS)

    North, Gerald R.; Shen, Samuel S. P.; Upson, Robert

    1993-02-01

    This paper examines the sampling characteristics of combining data collected by several low-orbiting satellites attempting to estimate the space time average of rain rates. The several satellites can have different orbital and swath-width parameters. The satellite overpasses are allowed to make partial coverage snapshots of the grid box with each overpass. Such partial visits are considered in an approximate way, letting each intersection area fraction of the grid box by a particular satellite swath be a random variable with mean and variance parameters computed from exact orbit calculations. The derivation procedure is based upon the spectral minimum mean-square error formalism introduced by North and Nakamoto. By using a simple parametric form for the space time spectral density, simple formulas are derived for a large number of examples, including the combination of the Tropical Rainfall Measuring Mission with an operational sun-synchronous orbiter. The approximations and results are discussed and directions for future research are summarized.

  18. Sampling errors in rainfall estimates by multiple satellites

    NASA Technical Reports Server (NTRS)

    North, Gerald R.; Shen, Samuel S. P.; Upson, Robert

    1993-01-01

    This paper examines the sampling characteristics of combining data collected by several low-orbiting satellites attempting to estimate the space-time average of rain rates. The several satellites can have different orbital and swath-width parameters. The satellite overpasses are allowed to make partial coverage snapshots of the grid box with each overpass. Such partial visits are considered in an approximate way, letting each intersection area fraction of the grid box by a particular satellite swath be a random variable with mean and variance parameters computed from exact orbit calculations. The derivation procedure is based upon the spectral minimum mean-square error formalism introduced by North and Nakamoto. By using a simple parametric form for the spacetime spectral density, simple formulas are derived for a large number of examples, including the combination of the Tropical Rainfall Measuring Mission with an operational sun-synchronous orbiter. The approximations and results are discussed and directions for future research are summarized.

  20. An estimate of error for the CCAMLR 2000 survey estimate of krill biomass

    NASA Astrophysics Data System (ADS)

    Demer, David A.

    2004-06-01

    Combined sampling and measurement error is estimated for the CCAMLR 2000 acoustic estimate of krill abundance in the Scotia Sea. First, some potential sources of uncertainty in generic echo-integration surveys are reviewed. Then, specific to the CCAMLR 2000 survey, some of the primary sources of measurement error are explored. The error in system calibration is evaluated in relation to the effects of variations in water temperature and salinity on sound speed, sound absorption, and acoustic-beam characteristics. Variation in krill target strength is estimated using a distorted-wave Born approximation model fitted with measured distributions of animal lengths and orientations. The variable effectiveness of two-frequency species classification methods is also investigated using the same scattering model. Most of these components of measurement uncertainty are frequency-dependent and covariant. Ultimately, the total random error in the CCAMLR 2000 acoustic estimate of krill abundance is estimated from a Monte Carlo simulation which assumes independent estimates of krill biomass are derived from acoustic backscatter measurements at three frequencies (38, 120, and 200 kHz). The overall coefficient of variation (10.2% ⩽ CV ⩽ 11.6%; 95% CI) is not significantly different from the sampling variance alone (CV = 11.4%). That is, the measurement variance is negligible relative to the sampling variance due to the large number of measurements averaged to derive the ultimate biomass estimate. Some potential sources of bias (e.g., stemming from uncertainties in the target strength model, the krill length-to-weight model, the species classification method, bubble attenuation, signal thresholding, and survey area definition) may be more appreciable components of measurement uncertainty.

  1. Estimator reduction and convergence of adaptive BEM.

    PubMed

    Aurada, Markus; Ferraz-Leite, Samuel; Praetorius, Dirk

    2012-06-01

    A posteriori error estimation and related adaptive mesh-refining algorithms have proven to be powerful tools in present-day scientific computing. In contrast to adaptive finite element methods, convergence of adaptive boundary element schemes is, however, widely open. We propose a relaxed notion of convergence of adaptive boundary element schemes. Instead of asking for convergence of the error to zero, we only aim to prove estimator convergence in the sense that the adaptive algorithm drives the underlying error estimator to zero. We observe that certain error estimators satisfy an estimator reduction property which is sufficient for estimator convergence. The elementary analysis is only based on Dörfler marking and inverse estimates, but not on reliability and efficiency of the error estimator at hand. In particular, our approach gives a first mathematical justification for the proposed steering of anisotropic mesh-refinements, which is mandatory for optimal convergence behavior in 3D boundary element computations.
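
    Since the analysis hinges on Dörfler marking, a minimal sketch of that marking step is given below; the indicator values and the bulk parameter theta are arbitrary examples.

        # Doerfler (bulk) marking sketch: mark the smallest set of elements whose squared error
        # indicators account for a fraction theta of the total estimated error.
        import numpy as np

        def doerfler_mark(eta, theta=0.5):
            """eta: per-element error indicators; returns indices of marked elements."""
            order = np.argsort(eta ** 2)[::-1]                 # largest contributions first
            cumulative = np.cumsum(eta[order] ** 2)
            k = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
            return order[:k]

        eta = np.array([0.02, 0.50, 0.10, 0.05, 0.30, 0.01])
        print("marked elements:", sorted(doerfler_mark(eta, theta=0.6).tolist()))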

  2. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  3. Detecting Positioning Errors and Estimating Correct Positions by Moving Window.

    PubMed

    Song, Ha Yoon; Lee, Jun Seok

    2015-01-01

    In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research.
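
    A simplified sketch of the moving-window speed check described above follows; the window size, the k-sigma threshold, and the synthetic trajectory are illustrative, and the paper's position-correction step is omitted.

        # Flag positioning errors whose implied speed falls outside a moving mean +/- k*sd interval.
        import numpy as np

        def flag_position_errors(times, positions, window=10, k=3.0):
            """times: seconds; positions: (n, 2) metres. Returns a boolean flag per point."""
            positions = np.asarray(positions, dtype=float)
            n = len(positions)
            flags = np.zeros(n, dtype=bool)
            history = []                      # recent accepted speeds
            last_good = 0
            for i in range(1, n):
                dt = times[i] - times[last_good]
                v = np.linalg.norm(positions[i] - positions[last_good]) / dt
                if len(history) >= window:
                    mean, sd = np.mean(history), np.std(history)
                    if abs(v - mean) > k * max(sd, 1e-6):
                        flags[i] = True       # implausible speed: flag the point and
                        continue              # keep it out of the window statistics
                history.append(v)
                history = history[-window:]
                last_good = i
            return flags

        t = np.arange(30.0)
        pos = np.column_stack([np.linspace(0, 29, 30), np.zeros(30)])   # roughly 1 m/s walk
        pos[17] += [60.0, 40.0]                                         # injected positioning error
        print(np.where(flag_position_errors(t, pos))[0])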

  4. Detecting Positioning Errors and Estimating Correct Positions by Moving Window

    PubMed Central

    Song, Ha Yoon; Lee, Jun Seok

    2015-01-01

    In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research. PMID:26624282

  5. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics that can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is comparable to a method recently proposed by Berry and Sauer. However, our method is more flexible since it allows information from products of innovation processes of more than one lag to be used. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry–Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry–Sauer method on the L-96 example.

  6. [Experience with using mathematic model for evaluation of a posteriori occupational risk].

    PubMed

    Piktushanskaia, T E

    2009-01-01

    The author analyzed changes in occupational morbidity among workers of the leading economic branches of the Russian Federation and gave a prognosis of the occupational morbidity level for the near and more distant future. The morbidity level shows a reliably decreasing trend, which is attributable to a long-standing decline in the rate at which occupational diseases are diagnosed during periodic medical examinations. The author specified a mathematical model for evaluating a posteriori occupational risk, based on materials from periodic medical examinations of coal miners.

  7. CO2 flux estimation errors associated with moist atmospheric processes

    NASA Astrophysics Data System (ADS)

    Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

    2012-07-01

    Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43 ± 0.35 PgC yr-1). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

  8. CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes

    NASA Technical Reports Server (NTRS)

    Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

    2012-01-01

    Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43+/-0.35 PgC /yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

  9. Transition state theory: Variational formulation, dynamical corrections, and error estimates

    NASA Astrophysics Data System (ADS)

    Vanden-Eijnden, Eric; Tal, Fabio A.

    2005-11-01

    Transition state theory (TST) is revisited, as well as evolutions upon TST such as variational TST in which the TST dividing surface is optimized so as to minimize the rate of recrossing through this surface and methods which aim at computing dynamical corrections to the TST transition rate constant. The theory is discussed from an original viewpoint. It is shown how to compute exactly the mean frequency of transition between two predefined sets which either partition phase space (as in TST) or are taken to be well-separated metastable sets corresponding to long-lived conformation states (as necessary to obtain the actual transition rate constants between these states). Exact and approximate criteria for the optimal TST dividing surface with minimum recrossing rate are derived. Some issues about the definition and meaning of the free energy in the context of TST are also discussed. Finally, precise error estimates for the numerical procedure to evaluate the transmission coefficient κS of the TST dividing surface are given, and it is shown that the relative error on κS scales as 1/√κS when κS is small. This implies that dynamical corrections to the TST rate constant can be computed efficiently if and only if the TST dividing surface has a transmission coefficient κS which is not too small. In particular, the TST dividing surface must be optimized upon (for otherwise κS is generally very small), but this may not be sufficient to make the procedure numerically efficient (because the optimal dividing surface has maximum κS, but this coefficient may still be very small).

  10. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  11. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  12. Adjoint-Based Forecast Error Sensitivity Diagnostics in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Langland, R.; Daescu, D.

    2016-12-01

    We present an up-to-date review of the adjoint-data assimilation system (DAS) approach to evaluate the forecast sensitivity to error covariance parameters and provide guidance to flow-dependent adaptive covariance tuning (ACT) procedures. New applications of the forecast sensitivity to observation error covariance (FSR) are investigated, including the sensitivity to observation error correlations and an a priori first-order assessment of the error correlation impact on the forecasts. Issues related to ambiguities in the a posteriori estimation of the observation error covariance (R) and background error covariance (B) are discussed. A synergistic framework for adaptive covariance tuning is considered that combines R-estimates derived from a posteriori covariance diagnosis and FSR derivative information. The evaluation of the forecast sensitivity to the innovation-weight coefficients is introduced as a computationally feasible approach to account for the characteristics of both R- and B-parameters and perform direct tuning of the DAS gain operator (K). Theoretical aspects are discussed and recent results are provided with the adjoint versions of the Naval Research Laboratory Atmospheric Variational Data Assimilation System-Accelerated Representer (NAVDAS-AR).

  13. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.

  14. Evaluating concentration estimation errors in ELISA microarray experiments

    SciTech Connect

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.

    2005-01-26

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
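
    A small propagation-of-error (delta-method) sketch for reading a concentration off a fitted standard curve is shown below; the log-linear calibration, its parameter covariance, and the signal variance are hypothetical values, and real ELISA standard curves are typically four-parameter logistic, but the propagation step is the same idea.

        # Delta-method propagation of error through a hypothetical calibration signal = a + b*log10(c).
        import numpy as np

        a_hat, b_hat = 0.10, 0.85
        cov_ab = np.array([[4.0e-4, -1.0e-4],
                           [-1.0e-4, 9.0e-4]])        # assumed covariance of the fitted (a, b)

        def concentration(signal):
            return 10.0 ** ((signal - a_hat) / b_hat)

        def concentration_variance(signal, var_signal):
            c = concentration(signal)
            ln10 = np.log(10.0)
            dc_ds = c * ln10 / b_hat                              # d c / d signal
            dc_da = -c * ln10 / b_hat                             # d c / d a
            dc_db = -c * ln10 * (signal - a_hat) / b_hat ** 2     # d c / d b
            g = np.array([dc_da, dc_db])
            return dc_ds ** 2 * var_signal + g @ cov_ab @ g       # measurement + calibration terms

        s, var_s = 1.25, 0.02 ** 2
        print(f"concentration {concentration(s):.3f} +/- {np.sqrt(concentration_variance(s, var_s)):.3f}")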

  15. Estimation of the linear relationship between the measurements of two methods with proportional errors.

    PubMed

    Linnet, K

    1990-12-01

    The linear relationship between the measurements of two methods is estimated on the basis of a weighted errors-in-variables regression model that takes into account a proportional relationship between standard deviations of error distributions and true variable levels. Weights are estimated by an iterative procedure. As shown by simulations, the regression procedure yields practically unbiased slope estimates in realistic situations. Standard errors of the slope and location difference estimates are derived by the jackknife principle. For illustration, the linear relationship is estimated between the measurements of two albumin methods with proportional errors.
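
    A hedged sketch of an iteratively reweighted Deming (errors-in-variables) fit with proportional-error weights and a jackknife standard error is given below; the weight update and the fixed error-variance ratio are simplifications for illustration and are not claimed to reproduce the exact procedure of the paper.

        # Iteratively reweighted Deming regression sketch with jackknife SE of the slope.
        import numpy as np

        def weighted_deming(x, y, lam=1.0, n_iter=10):
            """lam: assumed ratio of y-error variance to x-error variance at a given level."""
            w = np.ones_like(x)
            for _ in range(n_iter):
                xm, ym = np.average(x, weights=w), np.average(y, weights=w)
                sxx = np.sum(w * (x - xm) ** 2)
                syy = np.sum(w * (y - ym) ** 2)
                sxy = np.sum(w * (x - xm) * (y - ym))
                b = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
                a = ym - b * xm
                level = (x + (y - a) / b) / 2.0              # crude estimate of the true level
                w = 1.0 / np.maximum(level, 1e-8) ** 2       # proportional-error weights
            return a, b

        def jackknife_se_slope(x, y, lam=1.0):
            n = len(x)
            slopes = np.array([weighted_deming(np.delete(x, i), np.delete(y, i), lam)[1]
                               for i in range(n)])
            return np.sqrt((n - 1) / n * np.sum((slopes - slopes.mean()) ** 2))

        rng = np.random.default_rng(4)
        true = rng.uniform(10, 100, 60)                                 # analyte levels
        x = true * (1 + 0.05 * rng.standard_normal(60))                 # method 1, 5% proportional error
        y = (2.0 + 1.1 * true) * (1 + 0.05 * rng.standard_normal(60))   # method 2
        a, b = weighted_deming(x, y)
        print(f"intercept {a:.2f}, slope {b:.3f}, jackknife SE(slope) {jackknife_se_slope(x, y):.3f}")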

  16. Calibration of remotely sensed proportion or area estimates for misclassification error

    Treesearch

    Raymond L. Czaplewski; Glenn P. Catts

    1992-01-01

    Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...
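
    The sketch below contrasts the two calibration estimators typically compared in this setting, using a made-up error matrix and mapped proportions; the classical estimator inverts the matrix of P(mapped | true), while the inverse estimator applies P(true | mapped) directly.

        # Calibrate mapped class proportions for misclassification using a reference error matrix.
        import numpy as np

        # Error matrix of reference counts: rows = mapped class, columns = true class (made-up values)
        N = np.array([[88.,  7.,  5.],
                      [ 6., 80.,  9.],
                      [ 4.,  6., 95.]])
        mapped = np.array([0.50, 0.30, 0.20])            # class proportions in the classified map

        # Classical estimator: solve mapped = P(mapped | true) @ true
        P_map_given_true = N / N.sum(axis=0, keepdims=True)
        classical = np.linalg.solve(P_map_given_true, mapped)

        # Inverse estimator: true = P(true | mapped).T @ mapped
        P_true_given_map = N / N.sum(axis=1, keepdims=True)
        inverse = P_true_given_map.T @ mapped

        print("classical:", np.round(classical, 3))
        print("inverse:  ", np.round(inverse, 3))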

  17. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  18. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. of error = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. of error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point

  19. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
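
    As a toy illustration of the calculation described above, the sketch below applies the +/- 50% inclusion test and the standard-deviation bias-error estimate to a single set of product values; in the study the inclusion test is applied on a zonal-mean basis, and the numbers here are invented.

        # Bias-error estimate from multiple precipitation products, with GPCP as the base estimate.
        import numpy as np

        def bias_error(gpcp, others, tol=0.5):
            others = np.asarray(others, dtype=float)
            included = others[np.abs(others - gpcp) <= tol * gpcp]   # keep products within +/-50% of GPCP
            ensemble = np.concatenate([[gpcp], included])
            s = ensemble.std(ddof=1)                                 # estimated systematic (bias) error
            return s, s / ensemble.mean()                            # absolute and relative (s/m) error

        s, rel = bias_error(5.2, [4.6, 5.9, 7.0, 9.5])               # 9.5 mm/day falls outside +/-50%
        print(f"s = {s:.2f} mm/day, s/m = {100 * rel:.0f}%")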

  20. Estimation of projection errors in human ocular fundus imaging.

    PubMed

    Doelemeyer, A; Petrig, B L

    2000-03-01

    Photogrammetric analysis of features in human ocular fundus images is affected by various sources of errors, for example aberrations of the camera and eye optics. Another--usually disregarded--type of distortion arises from projecting the near spherical shape of the fundus onto a planar imaging device. In this paper we quantify such projection errors based on geometrical analysis of the reduced model eye imaged by a pinhole camera. The projection error found near the edge of a 50 degrees fundus image is 24%. In addition, the influence of axial ametropia is investigated for both myopia and hyperopia. The projection errors found in hyperopia are similar to those in emmetropia, but decrease in myopia. Spherical as well as ellipsoidal eye shapes were used in the above calculation and their effect was compared. Our results suggest that the simple spherical eye shape is sufficient for correcting projection distortions within a range of ametropia from -5 to +5 diopters.

  1. Areal measurement error with a dot planimeter: Some experimental estimates

    NASA Technical Reports Server (NTRS)

    Yuill, R. S.

    1971-01-01

    A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured provides the entire correlation with accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.

  2. Application of a posteriori granddaughter and modified granddaughter designs to determine Holstein haplotype effects.

    PubMed

    Weller, J I; VanRaden, P M; Wiggans, G R

    2013-08-01

    A posteriori and modified granddaughter designs were applied to determine haplotype effects for Holstein bulls and cows with BovineSNP50 [~50,000 single nucleotide polymorphisms (SNP); Illumina Inc., San Diego, CA] genotypes. The a posteriori granddaughter design was applied to 52 sire families, each with ≥100 genotyped sons with genetic evaluations based on progeny tests. For 33 traits (milk, fat, and protein yields; fat and protein percentages; somatic cell score; productive life; daughter pregnancy rate; heifer and cow conception rates; service-sire and daughter calving ease; service-sire and daughter stillbirth; 18 conformation traits; and net merit), the analysis was applied to the autosomal segment with the SNP with the greatest effect in the genomic evaluation of each trait. All traits except 2 had a within-family haplotype effect. The same design was applied with the genetic evaluations of sons corrected for SNP effects associated with chromosomes besides the one under analysis. The number of within-family contrasts was 166 without adjustment and 211 with adjustment. Of the 52 bulls analyzed, 36 had BovineHD (high density; Illumina Inc.) genotypes that were used to test for concordance between sire quantitative trait loci and SNP genotypes; complete concordance was not obtained for any effects. Of the 31 traits with effects from the a posteriori granddaughter design, 21 were analyzed with the modified granddaughter design. Only sires with a contrast for the a posteriori granddaughter design and ≥200 granddaughters with a record usable for genetic evaluation were included. Calving traits could not be analyzed because individual cow evaluations were not computed. Eight traits had within-family haplotype effects. With respect to milk and fat yields and fat percentage, the results on Bos taurus autosome (BTA) 14 corresponded to the hypothesis that a missense mutation in the diacylglycerol O-acyltransferase 1 (DGAT1) gene is the main causative mutation

  3. Residential electricity load decomposition method based on maximum a posteriori probability

    NASA Astrophysics Data System (ADS)

    Shan, Guangpu; Zhou, Heng; Liu, Song; Liu, Peng

    2017-05-01

    In order to address the high computational complexity and limited accuracy of existing load decomposition methods, a load decomposition method based on the maximum a posteriori probability is proposed. The steady-state current of electrical equipment is chosen as the load characteristic, and, according to the Bayesian formula, the electricity consumption information of each electrical appliance can be obtained at any given time. Experimental results show that the method can identify the running state of each piece of power equipment and achieves a higher decomposition accuracy. In addition, the data used can be collected by the common smart meters currently available on the market, reducing the cost of hardware input.
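
    An illustrative maximum a posteriori decomposition in this spirit is sketched below; the appliance currents, priors, and noise level are invented example values, and a real implementation would use measured steady-state current signatures from smart-meter data.

        # MAP load decomposition sketch: score each on/off combination of known appliances by a
        # Gaussian likelihood of the measured aggregate current times a prior, and keep the best.
        import itertools
        import numpy as np

        appliance_current = {"fridge": 1.2, "kettle": 8.7, "washer": 4.3, "tv": 0.6}   # amps
        prior_on = {"fridge": 0.8, "kettle": 0.1, "washer": 0.2, "tv": 0.5}
        sigma = 0.3   # noise of the aggregate steady-state current measurement (amps)

        def map_decompose(measured_current):
            names = list(appliance_current)
            best_state, best_logpost = None, -np.inf
            for state in itertools.product([0, 1], repeat=len(names)):
                total = sum(s * appliance_current[n] for s, n in zip(state, names))
                log_lik = -0.5 * ((measured_current - total) / sigma) ** 2
                log_prior = sum(np.log(prior_on[n] if s else 1 - prior_on[n])
                                for s, n in zip(state, names))
                if log_lik + log_prior > best_logpost:
                    best_logpost, best_state = log_lik + log_prior, state
            return dict(zip(names, best_state))

        print(map_decompose(10.5))   # fridge + kettle + tv sum to 10.5 A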

  4. Gas hydrate estimation error associated with uncertainties of measurements and parameters

    USGS Publications Warehouse

    Lee, Myung W.; Collett, Timothy S.

    2001-01-01

    Downhole log measurements such as acoustic or electrical resistivity logs are often used to estimate in situ gas hydrate concentrations in sediment pore space. Estimation errors owing to uncertainties associated with downhole measurements and the parameters for estimation equations (weight in the acoustic method and Archie's parameters in the resistivity method) are analyzed in order to assess the accuracy of estimation of gas hydrate concentration. Accurate downhole measurements are essential for accurate estimation of the gas hydrate concentrations in sediments, particularly at low gas hydrate concentrations and when using acoustic data. Estimation errors owing to measurement errors, except the slowness error, decrease as the gas hydrate concentration increases and as porosity increases. Estimation errors owing to uncertainty in the input parameters are small in the acoustic method and may be significant in the resistivity method at low gas hydrate concentrations.
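
    For the resistivity branch, the sketch below shows a first-order (finite-difference) propagation of measurement and parameter uncertainties through an Archie-type saturation estimate; the functional form and all numerical values are illustrative assumptions rather than those of the cited analysis.

        # Archie-based gas hydrate saturation with first-order propagation of input uncertainties.
        import numpy as np

        def hydrate_saturation(Rt, phi, Rw, a=1.0, m=2.0, n=2.0):
            """Archie: Sw = (a*Rw / (phi**m * Rt))**(1/n); hydrate saturation Sh = 1 - Sw."""
            Sw = (a * Rw / (phi ** m * Rt)) ** (1.0 / n)
            return 1.0 - Sw

        nominal = dict(Rt=3.5, phi=0.40, Rw=0.35, a=1.0, m=2.0, n=2.0)   # illustrative inputs
        sigmas  = dict(Rt=0.2, phi=0.02, Rw=0.02, a=0.1, m=0.1, n=0.1)   # illustrative 1-sigma errors

        Sh = hydrate_saturation(**nominal)
        var = 0.0
        for key, sig in sigmas.items():                  # assume independent uncertainties
            bumped = dict(nominal)
            bumped[key] += 1e-6
            deriv = (hydrate_saturation(**bumped) - Sh) / 1e-6
            var += (deriv * sig) ** 2

        print(f"Sh = {Sh:.2f} +/- {np.sqrt(var):.2f}")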

  5. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.

  6. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  7. Sensitivity of LIDAR Canopy Height Estimate to Geolocation Error

    NASA Astrophysics Data System (ADS)

    Tang, H.; Dubayah, R.

    2010-12-01

    Many factors affect the quality of canopy height structure data derived from space-based lidar such as DESDynI. Among these is geolocation accuracy. Inadequate geolocation information hinders subsequent analyses because a different portion of the canopy is observed relative to what is assumed. This is especially true in mountainous terrain where the effects of slope magnify geolocation errors. Mission engineering design must trade the expense of providing more accurate geolocation with the potential improvement in measurement accuracy. The objective of our work is to assess the effects of small errors in geolocation on subsequent retrievals of maximum canopy height for a varying set of canopy structures and terrains. Dense discrete lidar data from different forest sites (from La Selva Biological Station, Costa Rica, Sierra National Forest, California, and Hubbard Brook and Bartlett Experimental Forests in New Hampshire) are used to simulate DESDynI height retrievals using various geolocation accuracies. Results show that canopy height measurement errors generally increase as the geolocation error increases. Interestingly, most of the height errors are caused by variation of canopy height rather than topography (slope and aspect).

  8. Estimation of error rates in classification of distorted imagery.

    PubMed

    Lahart, M J

    1984-04-01

    This correspondence considers the problem of matching image data to a large library of objects when the image is distorted. Two types of distortions are considered: blur-type, in which a transfer function is applied to Fourier components of the image, and scale-type, in which each Fourier component is mapped into another. The objects of the library are assumed to be normally distributed in an appropriate feature space. Approximate expressions are developed for classification error rates as a function of noise. The error rates they predict are compared with those from classification of artificial data, generated by a Gaussian random number generator, and with error rates from classification of actual data. It is demonstrated that, for classification purposes, distortions can be characterized by a small number of parameters.

  9. Simple a posteriori slope limiter (Post Limiter) for high resolution and efficient flow computations

    NASA Astrophysics Data System (ADS)

    Kitamura, Keiichi; Hashimoto, Atsushi

    2017-07-01

    A simple and efficient a posteriori slope limiter ("Post Limiter") is proposed for compressible Navier-Stokes and Euler equations, and examined in 1D and 2D. The Post Limiter tries to employ un-limited solutions where and when possible (even at shocks), and blends the un-limited and (1st-order) limited solutions smoothly, leading to effectively four times the resolution in 1D. This idea was inspired by the a posteriori limiting approaches originally developed by Clain et al. (2011) [18] for higher-order flow computations, but the version proposed here is an alternative tailored to and simplified for 2nd-order spatial accuracy, with improved solution quality and convergence. In fact, no iteration process is required to determine the optimal order of accuracy, since the limited and un-limited values are both available at once at 2nd order. In 2D, several numerical examples have been treated, and both the κ = 1/3 MUSCL (in a structured solver) and Green-Gauss (in an unstructured solver) reconstructions demonstrated resolution improvement (nearly 4 × 4 times), convergence acceleration, and removal of numerical noise. Even on triangular meshes (on which least-squares reconstruction is used), the unstructured solver showed improved solutions when cell geometries (cell-orientation angles) are properly taken into account. Therefore, the Post Limiter is readily incorporated into existing codes.
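
    The record gives no equations, so the 1D sketch below only illustrates the general a posteriori idea (a simplification, not the paper's exact Post Limiter): compute un-limited second-order slopes, accept them wherever a discrete maximum principle holds for the reconstructed face values, and fall back to minmod-limited slopes elsewhere.

    ```python
    # Simplified a posteriori slope selection in 1D (not the paper's Post Limiter).
    import numpy as np

    def minmod(a, b):
        return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

    x = np.arange(100)
    u = np.where(x < 50, 1.0, 0.0) + 0.01 * np.sin(0.3 * x)   # step plus smooth ripple

    dl = np.diff(u, prepend=u[0])                # backward differences
    dr = np.diff(u, append=u[-1])                # forward differences
    slope_unlimited = 0.5 * (dl + dr)            # central, un-limited slope
    slope_limited = minmod(dl, dr)               # robust limited slope

    # A posteriori check: face values from the un-limited slope must stay within the
    # local min/max of the neighbouring cell averages (discrete maximum principle).
    u_l = u - 0.5 * slope_unlimited
    u_r = u + 0.5 * slope_unlimited
    lo = np.minimum(np.minimum(np.roll(u, 1), u), np.roll(u, -1))
    hi = np.maximum(np.maximum(np.roll(u, 1), u), np.roll(u, -1))
    ok = (u_l >= lo) & (u_l <= hi) & (u_r >= lo) & (u_r <= hi)

    slope = np.where(ok, slope_unlimited, slope_limited)
    print(f"un-limited slopes kept in {ok.mean():.0%} of cells")
    ```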

  10. Phylogenomics and a posteriori data partitioning resolve the Cretaceous angiosperm radiation Malpighiales.

    PubMed

    Xi, Zhenxiang; Ruhfel, Brad R; Schaefer, Hanno; Amorim, André M; Sugumaran, M; Wurdack, Kenneth J; Endress, Peter K; Matthews, Merran L; Stevens, Peter F; Mathews, Sarah; Davis, Charles C

    2012-10-23

    The angiosperm order Malpighiales includes ~16,000 species and constitutes up to 40% of the understory tree diversity in tropical rain forests. Despite remarkable progress in angiosperm systematics during the last 20 y, relationships within Malpighiales remain poorly resolved, possibly owing to its rapid rise during the mid-Cretaceous. Using phylogenomic approaches, including analyses of 82 plastid genes from 58 species, we identified 12 additional clades in Malpighiales and substantially increased resolution along the backbone. This greatly improved phylogeny revealed a dynamic history of shifts in net diversification rates across Malpighiales, with bursts of diversification noted in the Barbados cherries (Malpighiaceae), cocas (Erythroxylaceae), and passion flowers (Passifloraceae). We found that commonly used a priori approaches for partitioning concatenated data in maximum likelihood analyses, by gene or by codon position, performed poorly relative to the use of partitions identified a posteriori using a Bayesian mixture model. We also found better branch support in trees inferred from a taxon-rich, data-sparse matrix, which deeply sampled only the phylogenetically critical placeholders, than in trees inferred from a taxon-sparse matrix with little missing data. Although this matrix has more missing data, our a posteriori partitioning strategy reduced the possibility of producing multiple distinct but equally optimal topologies and increased phylogenetic decisiveness, compared with the strategy of partitioning by gene. These approaches are likely to help improve phylogenetic resolution in other poorly resolved major clades of angiosperms and to be more broadly useful in studies across the Tree of Life.

  11. Quantitative evaluation of efficiency of the methods for a posteriori filtration of the slip-rate time histories

    NASA Astrophysics Data System (ADS)

    Kristekova, M.; Galis, M.; Moczo, P.; Kristek, J.

    2012-04-01

    Simulated slip-rate time histories are often not free of spurious high-frequency oscillations. This is because the spatial grid used is not fine enough to properly discretize possibly broad-spectrum slip-rate and stress variations and the spatial breakdown zone of the propagating rupture. In order to reduce the oscillations, some numerical modelers apply artificial damping. An alternative is the adaptive smoothing algorithm (ASA, Galis et al. 2010). Other modelers, however, rely on a posteriori filtration. If the oscillations do not affect (change) the development and propagation of the rupture during the simulation, it is possible to apply a posteriori filtration to reduce them. Often, however, a posteriori filtration is a problematic trade-off between suppression of oscillations and distortion of the true slip rate. We present a quantitative comparison of the efficiency of several methods. We have analyzed slip-rate time histories simulated by the FEM-TSN method. Signals containing spurious high-frequency oscillations and signals after a posteriori filtering have been compared to a reference signal. The reference signal was created by careful, iterative and adjusted denoising of the slip rate simulated using the finest (technically possible) spatial grid. We performed extensive numerical simulations in order to test the efficiency of a posteriori filtration for slip rates with different levels and natures of spurious oscillations. We show that time-frequency analysis and time-frequency misfit criteria (Kristekova et al. 2006, 2009) are suitable tools for evaluating the efficiency of a posteriori filtration methods and are also clear indicators of possible distortions introduced by the filtration.

  12. Pedigree error due to extra-pair reproduction substantially biases estimates of inbreeding depression.

    PubMed

    Reid, Jane M; Keller, Lukas F; Marr, Amy B; Nietlisbach, Pirmin; Sardell, Rebecca J; Arcese, Peter

    2014-03-01

    Understanding the evolutionary dynamics of inbreeding and inbreeding depression requires unbiased estimation of inbreeding depression across diverse mating systems. However, studies estimating inbreeding depression often measure inbreeding with error, for example, based on pedigree data derived from observed parental behavior that ignore paternity error stemming from multiple mating. Such paternity error causes error in estimated coefficients of inbreeding (f) and reproductive success and could bias estimates of inbreeding depression. We used complete "apparent" pedigree data compiled from observed parental behavior and analogous "actual" pedigree data comprising genetic parentage to quantify effects of paternity error stemming from extra-pair reproduction on estimates of f, reproductive success, and inbreeding depression in free-living song sparrows (Melospiza melodia). Paternity error caused widespread error in estimates of f and male reproductive success, causing inbreeding depression in male and female annual and lifetime reproductive success and juvenile male survival to be substantially underestimated. Conversely, inbreeding depression in adult male survival tended to be overestimated when paternity error was ignored. Pedigree error stemming from extra-pair reproduction therefore caused substantial and divergent bias in estimates of inbreeding depression that could bias tests of evolutionary theories regarding inbreeding and inbreeding depression and their links to variation in mating system. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.

  13. The effect of retrospective sampling on estimates of prediction error for multifactor dimensionality reduction.

    PubMed

    Winham, Stacey J; Motsinger-Reif, Alison A

    2011-01-01

    The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Currently, we evaluate the bias and variance of the MDR error estimate as both a retrospective and prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates.
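
    The MDR software is not reproduced here; the sketch below only illustrates the mechanism the abstract argues for, namely estimating a prospective error by bootstrap resampling of a case-control sample, with out-of-bag testing and per-observation weights that restore an assumed population prevalence. The toy threshold classifier, prevalence value and sample sizes are assumptions made for the example.

    ```python
    # Prevalence-weighted, out-of-bag bootstrap estimate of prospective classification error.
    import numpy as np

    rng = np.random.default_rng(2)
    prevalence = 0.10                             # assumed population prevalence of cases

    # Retrospective, balanced case-control sample.
    n_per_class = 250
    x = np.concatenate([rng.normal(1.0, 1.0, n_per_class),    # cases
                        rng.normal(0.0, 1.0, n_per_class)])   # controls
    y = np.concatenate([np.ones(n_per_class), np.zeros(n_per_class)])

    def fit_threshold(x, y):
        return 0.5 * (x[y == 1].mean() + x[y == 0].mean())

    def weighted_error(x, y, thr, w):
        miss = (x > thr) != (y == 1)
        return np.average(miss, weights=w)

    # Weights mapping the balanced (50/50) sample back to the assumed prevalence.
    w = np.where(y == 1, prevalence / 0.5, (1.0 - prevalence) / 0.5)

    idx = np.arange(len(x))
    boot_err = []
    for _ in range(500):
        b = rng.choice(idx, size=len(idx), replace=True)
        oob = np.setdiff1d(idx, b)                # out-of-bag observations used for testing
        thr = fit_threshold(x[b], y[b])
        boot_err.append(weighted_error(x[oob], y[oob], thr, w[oob]))
    print(f"prevalence-weighted bootstrap error estimate: {np.mean(boot_err):.3f}")
    ```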

  14. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    NASA Astrophysics Data System (ADS)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  15. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    PubMed Central

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám

    2016-01-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations. PMID:27493566

  16. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    PubMed

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  17. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Treesearch

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens. Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...
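
    The Austrian NFI volume models are not given in this record, so the sketch below substitutes a generic form-factor volume model, v = f · (π/4) · d² · h, and simply propagates assumed diameter and height measurement errors through it by Monte Carlo, in the spirit of the study.

    ```python
    # Monte Carlo propagation of assumed measurement errors through a stem volume model.
    import numpy as np

    rng = np.random.default_rng(3)
    d_true, h_true, form = 0.35, 28.0, 0.5        # diameter (m), height (m), form factor

    def volume(d, h):
        return form * np.pi / 4.0 * d ** 2 * h

    sd_d, sd_h = 0.005, 1.0                       # assumed measurement error std. devs.
    n = 100_000
    d = d_true + sd_d * rng.normal(size=n)
    h = h_true + sd_h * rng.normal(size=n)
    v = volume(d, h)

    v0 = volume(d_true, h_true)
    print(f"nominal volume {v0:.4f} m^3, mean {v.mean():.4f} m^3, "
          f"relative std {100.0 * v.std() / v0:.2f} %")
    ```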

  18. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    ERIC Educational Resources Information Center

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  19. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  1. Triple collocation: beyond three estimates and separation of structural/non-structural errors

    USDA-ARS?s Scientific Manuscript database

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the “multiple” collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is s...
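
    The multi-source extension described above is not reproduced here; the sketch below shows only the classical three-source triple collocation solution under its usual assumptions (additive, zero-mean, mutually uncorrelated errors), written in covariance notation. The error levels are invented for the example.

    ```python
    # Classical triple collocation: error variances of three collocated estimates.
    import numpy as np

    rng = np.random.default_rng(4)
    truth = rng.normal(0.0, 1.0, 5000)
    x = truth + rng.normal(0.0, 0.20, truth.size)   # source 1
    y = truth + rng.normal(0.0, 0.35, truth.size)   # source 2
    z = truth + rng.normal(0.0, 0.50, truth.size)   # source 3

    def cov(a, b):
        return np.mean((a - a.mean()) * (b - b.mean()))

    var_x = cov(x, x) - cov(x, y) * cov(x, z) / cov(y, z)
    var_y = cov(y, y) - cov(y, x) * cov(y, z) / cov(x, z)
    var_z = cov(z, z) - cov(z, x) * cov(z, y) / cov(x, y)
    print("estimated error std devs:", np.sqrt([var_x, var_y, var_z]))  # ~0.20, 0.35, 0.50
    ```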

  2. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance.

    PubMed

    Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin

    2009-02-09

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples, which are computationally feasible, is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive and these made use of information on either the variance of the estimated breeding value and on the variance of the true breeding value minus the estimated breeding value or on the covariance between the true and estimated breeding values.

  3. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  4. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.

  5. The coefficient of error of optical fractionator population size estimates: a computer simulation comparing three estimators.

    PubMed

    Glaser, E M; Wilson, P D

    1998-11-01

    The optical fractionator is a design-based two-stage systematic sampling method that is used to estimate the number of cells in a specified region of an organ when the population is too large to count exhaustively. The fractionator counts the cells found in optical disectors that have been systematically sampled in serial sections. Heretofore, evaluations of optical fractionator performance have been made by performing tests on actual tissue sections, but it is difficult to evaluate the coefficient of error (CE), i.e. the precision of a population size estimate, by using biological tissue samples because they do not permit a comparison of an estimated CE with the true CE. However, computer simulation does permit making such comparisons while avoiding the observational biases inherent in working with biological tissue. This study is the first instance in which computer simulation has been applied to population size estimation by the optical fractionator. We used computer simulation to evaluate the performance of three CE estimators. The estimated CEs were evaluated in tests of three types of non-random cell population distribution and one random cell population distribution. The non-random population distributions varied by differences in 'intensity', i.e. the expected cell counts per disector, according to both section and disector location within the section. Two distributions were sinusoidal and one was linearly increasing; in all three there was a six-fold difference between the high and low intensities. The sinusoidal distributions produced either a peak or a depression of cell intensity at the centre of the simulated region. The linear cell intensity gradually increased from the beginning to the end of the region that contained the cells. The random population distribution had a constant intensity over the region. A 'test condition' was defined by its population distribution, the period between consecutive sampled sections and the spacing between consecutive

  6. Percent Errors in the Estimation of Demand for Secondary Items.

    DTIC Science & Technology

    1985-11-01

    percent errors, and the program change factor (PCF) to predict item demand during the procurement leadtime (PROLT) for the item. The PCF accounts for...type of demand it was. It may have been demanded over two years ago or it may have been a non-recurring demand. Since CC b only retains two years of...observed distributions could be compared with negative binomial distributions. For each item the computed ratio of actual demand to expected demand was

  7. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    NASA Astrophysics Data System (ADS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-12-01

    We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at relatively low computational cost for a general observation matrix. It is known that the cost of l1-norm minimization using a standard linear programming algorithm is O(N^3). We show that this cost can be reduced to O(N^2) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from a theoretical argument. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.

  8. Signal Estimation with Low Infinity Norm Error by Minimizing the Mean p Norm Error

    DTIC Science & Technology

    2014-03-19

    reconstruction in linear mixing systems with different error metrics," in Inf. Theory Appl. Workshop, Feb. 2013. [2] A. Papoulis, Probability, Random...filters. I. INTRODUCTION. A. Motivation. The Gaussian distribution is widely used to describe the probability densities of various types of data, owing to its...the probabilities of the K Gaussian components. Note that the component probabilities sum to one (∑_{k=1}^{K} s_k = 1). A special case of the Gaussian mixture is the Bernoulli-Gaussian (sparse

  9. Optimal full motion video registration with rigorous error propagation

    NASA Astrophysics Data System (ADS)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  10. Nonparametric variance estimation in the analysis of microarray data: a measurement error approach.

    PubMed

    Carroll, Raymond J; Wang, Yuedong

    2008-01-01

    This article investigates the effects of measurement error on the estimation of nonparametric variance functions. We show that either ignoring measurement error or direct application of the simulation extrapolation, SIMEX, method leads to inconsistent estimators. Nevertheless, the direct SIMEX method can reduce bias relative to a naive estimator. We further propose a permutation SIMEX method which leads to consistent estimators in theory. The performance of both SIMEX methods depends on approximations to the exact extrapolants. Simulations show that both SIMEX methods perform better than ignoring measurement error. The methodology is illustrated using microarray data from colon cancer patients.
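
    The permutation SIMEX of the paper is not reproduced here; the sketch below only demonstrates the basic simulation-extrapolation mechanism on a scalar linear regression with measurement error in the covariate, using a quadratic extrapolant. Sample size, noise levels and the number of SIMEX replicates are assumptions.

    ```python
    # Basic SIMEX for an attenuated regression slope (quadratic extrapolation to lambda = -1).
    import numpy as np

    rng = np.random.default_rng(5)
    n, beta, sigma_u = 2000, 1.5, 0.8
    x = rng.normal(0.0, 1.0, n)
    w = x + sigma_u * rng.normal(size=n)          # observed covariate with measurement error
    y = beta * x + 0.3 * rng.normal(size=n)

    def slope(w, y):
        return np.cov(w, y, bias=True)[0, 1] / np.var(w)

    lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    est = []
    for lam in lambdas:
        # Add extra noise of variance lam * sigma_u^2 and average over replicates.
        reps = [slope(w + np.sqrt(lam) * sigma_u * rng.normal(size=n), y) for _ in range(50)]
        est.append(np.mean(reps))

    coef = np.polyfit(lambdas, est, deg=2)        # quadratic in lambda
    print("naive:", round(est[0], 3), " SIMEX:", round(np.polyval(coef, -1.0), 3), " true:", beta)
    ```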

  11. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
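
    As a hedged companion to the comparison described above (the mapped-plot design itself is not reproduced), the sketch below computes a ratio-of-means estimate from synthetic plot data and contrasts a Taylor-linearization standard error with a bootstrap standard error.

    ```python
    # Ratio-of-means estimator with linearized and bootstrap standard errors.
    import numpy as np

    rng = np.random.default_rng(6)
    n = 150
    x = rng.uniform(0.5, 2.0, n)                  # e.g. measured plot area proxy
    y = 3.0 * x + rng.normal(0.0, 0.8, n)         # e.g. measured plot volume

    R = y.mean() / x.mean()

    # Taylor-linearization (traditional) variance estimate for a ratio of means.
    resid = y - R * x
    se_lin = np.sqrt(np.sum(resid ** 2) / (n * (n - 1))) / x.mean()

    # Bootstrap standard error.
    boot = []
    for _ in range(2000):
        idx = rng.integers(0, n, n)
        boot.append(y[idx].mean() / x[idx].mean())
    se_boot = np.std(boot, ddof=1)

    print(f"R = {R:.3f}, SE(linearized) = {se_lin:.4f}, SE(bootstrap) = {se_boot:.4f}")
    ```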

  12. Instantaneous PIV/PTV-based pressure gradient estimation: a framework for error analysis and correction

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-08-01

    A framework for the exact determination of the pressure gradient estimation error in incompressible flows given erroneous velocimetry data is derived which relies on the calculation of the curl and divergence of the pressure gradient error over the domain and then the solution of a div-curl system to reconstruct the pressure gradient error field. In practice, boundary conditions for the div-curl system are unknown, and the divergence of the pressure gradient error requires approximation. The effect of zero pressure gradient error boundary conditions and approximating the divergence are evaluated using three flow cases: (1) a stationary Taylor vortex; (2) an advecting Lamb-Oseen vortex near a boundary; and (3) direct numerical simulation of the turbulent wake of a circular cylinder. The results show that the exact form of the pressure gradient error field reconstruction converges onto the exact values, within truncation and round-off errors, except for a small flow field region near the domain boundaries. It is also shown that the approximation for the divergence of the pressure gradient error field retains the fidelity of the reconstruction, even when velocity field errors are generated with substantial spatial variation. In addition to the utility of the proposed technique to improve the accuracy of pressure estimates, the reconstructed error fields provide spatially resolved estimates for instantaneous PIV/PTV-based pressure error.

  13. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.

  14. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with previously reported results, and to 4-class 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
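
    The paper's combined analytical/numerical integration scheme is not reproduced here; the sketch below estimates the same target quantity, the minimum (Bayes) probability of error for Gaussian classes with known statistics, by plain Monte Carlo sampling with equal priors. Class means, covariance and sample sizes are assumptions.

    ```python
    # Monte Carlo estimate of the Bayes error for Gaussian classes with known statistics.
    import numpy as np

    rng = np.random.default_rng(7)
    means = [np.array([0.0, 0.0]), np.array([2.0, 0.0]), np.array([0.0, 2.0])]
    cov = np.eye(2)
    priors = [1 / 3, 1 / 3, 1 / 3]
    cov_inv = np.linalg.inv(cov)

    def log_disc(x, mu, prior):
        d = x - mu
        return np.log(prior) - 0.5 * np.einsum('ij,jk,ik->i', d, cov_inv, d)

    n_per_class = 100_000
    errors, total = 0, 0
    for k, mu in enumerate(means):
        x = rng.multivariate_normal(mu, cov, n_per_class)
        scores = np.column_stack([log_disc(x, means[j], priors[j]) for j in range(len(means))])
        errors += np.sum(scores.argmax(axis=1) != k)
        total += n_per_class
    print(f"estimated minimum probability of error: {errors / total:.4f}")
    ```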

  16. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study evaluates the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiments examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4, respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %), respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that the estimation errors are dominated by the transport model error and can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior estimates do, and the posterior uncertainty estimates can be unrealistically small and fail to cover the target. The systematic evaluation of the different components of the inversion

  17. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    PubMed

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems subject to disturbances, an original output feedback sliding mode control method is presented, based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a novel tracking error observer is produced. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effectiveness of the proposed method.

  18. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
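
    As a rough sketch of the paper's central idea (under simplifying assumptions, not the authors' estimator), the code below approximates the signal-dependent F-error of a coarse least-squares spline fit from its difference with a second fit on a finer knot mesh, and compares that proxy with the coarse fit's actual deviation from the known test signal.

    ```python
    # F-error proxy from the difference of two least-squares spline fits (illustrative only).
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(8)
    t = np.linspace(0.0, 1.0, 400)
    signal = np.sin(2 * np.pi * 3 * t) + 0.3 * t
    data = signal + 0.05 * rng.normal(size=t.size)

    def fit(n_interior_knots):
        knots = np.linspace(0.0, 1.0, n_interior_knots + 2)[1:-1]   # interior knots only
        return LSQUnivariateSpline(t, data, knots, k=3)(t)

    coarse, fine = fit(8), fit(16)
    f_error_proxy = np.abs(fine - coarse)          # stands in for the coarse fit's F-error
    actual_dev = np.abs(coarse - signal)
    print(f"median F-error proxy {np.median(f_error_proxy):.4f}, "
          f"median actual deviation of coarse fit {np.median(actual_dev):.4f}")
    ```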

  19. Reduced wavefront reconstruction mean square error using optimal priors: algebraic analysis and simulations

    NASA Astrophysics Data System (ADS)

    Béchet, C.; Tallon, M.; Thiébaut, E.

    2008-07-01

    The turbulent wavefront reconstruction step in an adaptive optics system is an inverse problem. The Mean-Square Error (MSE) assessing the reconstruction quality is made of two terms, often called bias and variance. The latter is also commonly referred to as the noise propagation. The aim of this paper is to investigate the evolution of these two error contributions when the number of parameters to be estimated becomes of the order of 10^4. Such dimensions are expected for the adaptive optics systems on the Extremely Large Telescopes. We provide an algebraic formalism to compare the MSE of Maximum Likelihood and Maximum A Posteriori linear reconstructors. A Generalized Singular Value Decomposition applied to the reconstructors theoretically enhances the differences between zonal and modal approaches, and demonstrates the gain from using the Maximum A Posteriori method. Through numerical simulations, we quantitatively study the evolution of the MSE contributions with respect to the pupil shape, the outer scale of the turbulence, the number of actuators and the signal-to-noise ratio. Simulation results are consistent with previous noise propagation studies and with our algebraic analysis. Finally, using the Fractal Iterative Method as a Maximum A Posteriori reconstruction algorithm in our simulations, we demonstrate a possible reduction of the MSE by a factor of 2 in large adaptive optics systems at low signal-to-noise ratio.

  20. Real-time maximum a-posteriori image reconstruction for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.

    2015-08-01

    Rapid reconstruction of multidimensional image is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of GPU device that consists of a large number of tiny processing cores and the adaptability of image reconstruction algorithm to parallel processing (that employ multiple independent computing modules called threads) results in high temporal resolution. Moreover, the proposed quadratic potential based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm that is ≈200-fold faster (for large dataset) when compared to existing CPU based systems.

  1. Machine learning source separation using maximum a posteriori nonnegative matrix factorization.

    PubMed

    Gao, Bin; Woo, Wai Lok; Ling, Bingo W-K

    2014-07-01

    A novel unsupervised machine learning algorithm for single channel source separation is presented. The proposed method is based on nonnegative matrix factorization, which is optimized under the framework of maximum a posteriori probability and Itakura-Saito divergence. The method enables a generalized criterion for variable sparseness to be imposed onto the solution and prior information to be explicitly incorporated through the basis vectors. In addition, the method is scale invariant where both low and high energy components of a signal are treated with equal importance. The proposed algorithm is a more complete and efficient approach for matrix factorization of signals that exhibit temporal dependency of the frequency patterns. Experimental tests have been conducted and compared with other algorithms to verify the efficiency of the proposed method.

  2. A posteriori correction of camera characteristics from large image data sets.

    PubMed

    Afanasyev, Pavel; Ravelli, Raimond B G; Matadeen, Rishi; De Carlo, Sacha; van Duinen, Gijs; Alewijnse, Bart; Peters, Peter J; Abrahams, Jan-Pieter; Portugal, Rodrigo V; Schatz, Michael; van Heel, Marin

    2015-06-11

    Large datasets are emerging in many fields of image processing including: electron microscopy, light microscopy, medical X-ray imaging, astronomy, etc. Novel computer-controlled instrumentation facilitates the collection of very large datasets containing thousands of individual digital images. In single-particle cryogenic electron microscopy ("cryo-EM"), for example, large datasets are required for achieving quasi-atomic resolution structures of biological complexes. Based on the collected data alone, large datasets allow us to precisely determine the statistical properties of the imaging sensor on a pixel-by-pixel basis, independent of any "a priori" normalization routinely applied to the raw image data during collection ("flat field correction"). Our straightforward "a posteriori" correction yields clean linear images as can be verified by Fourier Ring Correlation (FRC), illustrating the statistical independence of the corrected images over all spatial frequencies. The image sensor characteristics can also be measured continuously and used for correcting upcoming images.
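
    A toy sketch of the underlying statistical idea follows (the paper's estimator, file handling and Fourier Ring Correlation checks are not reproduced): with a large enough stack of frames, per-pixel offset and relative gain can be estimated a posteriori from the stack's own mean and standard deviation and then divided out. The sensor model and dose below are invented for the example.

    ```python
    # A posteriori per-pixel correction estimated from an image stack alone (toy model).
    import numpy as np

    rng = np.random.default_rng(9)
    h, w, n = 64, 64, 2000
    gain = 1.0 + 0.1 * rng.normal(size=(h, w))     # unknown fixed-pattern gain
    offset = 5.0 + rng.normal(size=(h, w))         # unknown fixed-pattern offset

    # Simulated raw frames: Poisson counts (mean dose 30) scaled by the pixel response.
    stack = gain * rng.poisson(30.0, size=(n, h, w)) + offset

    mean_px = stack.mean(axis=0)                   # estimates offset + gain * mean dose
    std_px = stack.std(axis=0)                     # roughly proportional to the gain
    gain_rel = std_px / std_px.mean()              # relative per-pixel gain estimate
    corrected = (stack - mean_px) / gain_rel + mean_px.mean()

    print("fixed-pattern std before:", round(float(mean_px.std()), 3),
          " after:", round(float(corrected.mean(axis=0).std()), 3))
    ```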

  3. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    NASA Technical Reports Server (NTRS)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.

  4. The effect of errors-in-variables on variance component estimation

    NASA Astrophysics Data System (ADS)

    Xu, Peiliang

    2016-08-01

    Although total least squares (TLS) has been widely applied, variance components in an errors-in-variables (EIV) model can be inestimable under certain conditions and unstable in the sense that small random errors can result in very large errors in the estimated variance components. We investigate the effect of the random design matrix on variance component (VC) estimation of MINQUE type by treating the design matrix as if it were errors-free, derive the first-order bias of the VC estimate, and construct bias-corrected VC estimators. As a special case, we obtain a bias-corrected estimate for the variance of unit weight. Although TLS methods are statistically rigorous, they can be computationally too expensive. We directly Taylor-expand the nonlinear weighted LS estimate of parameters up to the second-order approximation in terms of the random errors of the design matrix, derive the bias of the estimate, and use it to construct a bias-corrected weighted LS estimate. Bearing in mind that the random errors of the design matrix will create a bias in the normal matrix of the weighted LS estimate, we propose to calibrate the normal matrix by computing and then removing the bias from the normal matrix. As a result, we can obtain a new parameter estimate, which is called the N-calibrated weighted LS estimate. The simulations have shown that (i) errors-in-variables have a significant effect on VC estimation, if they are large/significant but treated as non-random. The variance components can be incorrectly estimated by more than one order of magnitude, depending on the nature of problems and the sizes of EIV; (ii) the bias-corrected VC estimate can effectively remove the bias of the VC estimate. If the signal-to-noise is small, higher order terms may be necessary. Nevertheless, since we construct the bias-corrected VC estimate by directly removing the estimated bias from the estimate itself, the simulation results have clearly indicated that there is a great risk to obtain

  5. Improved estimates of coordinate error for molecular replacement

    SciTech Connect

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-11-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.

  6. EIA Corrects Errors in Its Drilling Activity Estimates Series

    EIA Publications

    1998-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  7. Gap filling strategies and error in estimating annual soil respiration

    USDA-ARS?s Scientific Manuscript database

    Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...

  8. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, >70% of the standard deviation estimates fall within 40% of the true value.
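
    The iterative estimation methods of the paper are not reproduced; the sketch below only illustrates the apparent-versus-real variance issue using an AR(1) surrogate for intercycle-correlated cycle k_eff values, comparing the naive (apparent) standard deviation of the mean with a batch-means estimate. The correlation level and batch size are assumptions.

    ```python
    # Apparent vs. batch-means standard deviation for correlated cycle k_eff values.
    import numpy as np

    rng = np.random.default_rng(11)
    n_cycles, rho = 4000, 0.7
    eps = 0.002 * rng.normal(size=n_cycles)
    k = np.empty(n_cycles)
    k[0] = 1.0
    for i in range(1, n_cycles):                   # AR(1) surrogate around k_eff = 1
        k[i] = 1.0 + rho * (k[i - 1] - 1.0) + eps[i]

    apparent = k.std(ddof=1) / np.sqrt(n_cycles)   # ignores intercycle correlation
    batch_size = 40
    batches = k[: n_cycles // batch_size * batch_size].reshape(-1, batch_size).mean(axis=1)
    batch_means = batches.std(ddof=1) / np.sqrt(batches.size)
    print(f"apparent sigma {apparent:.2e}   batch-means sigma {batch_means:.2e}")
    ```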

  9. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    PubMed

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
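
    The Kolmogorov-asymptotic results themselves are not reproduced; the simulation below only illustrates the bias being studied, comparing the resubstitution estimate of the LDA error rate with the exact error rate, which is computable here because the two Gaussian class distributions (common identity covariance) are known. Dimension, sample size and mean separation are assumptions.

    ```python
    # Resubstitution vs. exact error rate for LDA with a known common identity covariance.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(10)
    p, n_per_class = 10, 30
    mu0, mu1 = np.zeros(p), np.full(p, 0.4)

    resub, actual = [], []
    for _ in range(200):
        x0 = rng.normal(mu0, 1.0, (n_per_class, p))
        x1 = rng.normal(mu1, 1.0, (n_per_class, p))
        m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
        w = m1 - m0                                # LDA direction for identity covariance
        c = 0.5 * (m0 + m1) @ w                    # midpoint threshold
        # Resubstitution: error measured on the training data itself.
        resub.append(0.5 * (np.mean(x0 @ w > c) + np.mean(x1 @ w <= c)))
        # Exact error under the true Gaussian class distributions.
        s = np.linalg.norm(w)
        actual.append(0.5 * (norm.sf((c - mu0 @ w) / s) + norm.cdf((c - mu1 @ w) / s)))
    print(f"mean resubstitution error {np.mean(resub):.3f} vs mean actual error {np.mean(actual):.3f}")
    ```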

  10. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.

  11. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence of the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The other two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and they can drive adaptive mesh refinement, which we demonstrate for two-dimensional plane-stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  12. Local estimation of posterior class probabilities to minimize classification errors.

    PubMed

    Guerrero-Curieses, Alicia; Cid-Sueiro, Jesús; Alaiz-Rodríguez, Rocío; Figueiras-Vidal, Aníbal R

    2004-03-01

    Decision theory shows that the optimal decision is a function of the posterior class probabilities. More specifically, in binary classification, the optimal decision is based on the comparison of the posterior probabilities with some threshold. Therefore, the most accurate estimates of the posterior probabilities are required near these decision thresholds. This paper discusses the design of objective functions that provide more accurate estimates of the probability values, taking into account the characteristics of each decision problem. We propose learning algorithms based on the stochastic gradient minimization of these loss functions. We show that the performance of the classifier is improved when these algorithms behave like sample selectors: samples near the decision boundary are the most relevant during learning.

  13. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely that both model error and observation error depend strongly on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including
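
    A minimal sketch of the single-batch maximum-likelihood idea, assuming a known forecast-error correlation shape and two tunable parameters (a forecast-error variance and an observation-error variance). The correlation model, batch size and optimizer choice are illustrative assumptions, not the configuration used in the paper.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(2)

      # Synthetic observation-minus-forecast residuals d = y - H x_f for one batch.
      m = 500                                        # observations >> number of parameters
      dist = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
      C = np.exp(-dist / 20.0)                       # assumed forecast-error correlation
      true_sf2, true_so2 = 2.0, 0.5
      d = rng.multivariate_normal(np.zeros(m), true_sf2 * C + true_so2 * np.eye(m))

      def neg_log_likelihood(theta):
          sf2, so2 = np.exp(theta)                   # log-parameters enforce positivity
          S = sf2 * C + so2 * np.eye(m)              # residual covariance model
          _, logdet = np.linalg.slogdet(S)
          return 0.5 * (logdet + d @ np.linalg.solve(S, d))

      res = minimize(neg_log_likelihood, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
      print(np.exp(res.x))                           # ML estimates of (sigma_f^2, sigma_o^2)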

  15. The estimation error covariance matrix for the ideal state reconstructor with measurement noise

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1988-01-01

    A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.
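
    The observation that more measurements improve the estimator can be illustrated with the standard least-squares error covariance P = sigma^2 (H^T H)^{-1}; the sketch below is a generic example, not the Ideal State Reconstructor expression derived in the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 4                                   # state dimension
      sigma2 = 0.01                           # measurement noise variance (assumed)

      def error_covariance(k):
          """Least-squares estimation error covariance using k scalar measurements."""
          H = rng.standard_normal((k, n))     # stacked measurement matrix
          return sigma2 * np.linalg.inv(H.T @ H)

      for k in (6, 12, 48):
          # the trace of P generally shrinks as more measurements are used
          print(k, np.trace(error_covariance(k)))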

  16. Round-Robin Analysis of Social Interaction: Exact and Estimated Standard Errors.

    ERIC Educational Resources Information Center

    Bond, Charles F., Jr.; Lashley, Brian R.

    1996-01-01

    The Social Relations model of D. A. Kenny estimates variances and covariances from a round-robin of two-person interactions. This paper presents a matrix formulation of the Social Relations model, using the formulation to derive exact and estimated standard errors for round-robin estimates of Social Relations parameters. (SLD)

  17. Development of a web-based simulator for estimating motion errors in linear motion stages

    NASA Astrophysics Data System (ADS)

    Khim, G.; Oh, J.-S.; Park, C.-H.

    2017-08-01

    This paper presents a web-based simulator for estimating 5-DOF motion errors in linear motion stages. The main calculation modules of the simulator are stored on the server computer. Clients use the client software to send the input parameters to the server and receive the computed results from the server. By using the simulator, we can predict performances such as 5-DOF motion errors and bearing and table stiffness by entering the design parameters at the design stage, before fabricating the stages. Motion errors are calculated using the transfer function method from the rail form errors, which are the most dominant factor in the motion errors. To verify the simulator, the predicted motion errors are compared to the motion errors actually measured in a linear motion stage.

  18. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    ERIC Educational Resources Information Center

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  20. MODIS Cloud Optical Property Retrieval Uncertainties Derived from Pixel-Level Radiometric Error Estimates

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong

    2011-01-01

    MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters due to pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.

  1. The impact of estimation errors on evaluations of timber production opportunities.

    Treesearch

    Dennis L. Schweitzer

    1970-01-01

    Errors in estimating costs and returns, the timing of harvests, and the cost of using funds can greatly affect the apparent desirability of investments in timber production. Partial derivatives are used to measure the impact of these errors on the predicted present net worth of potential investments in timber production. Graphs that illustrate the impact of each type...
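
    A minimal numerical sketch of the partial-derivative sensitivity idea, using a hypothetical single-rotation investment with cost C now, revenue R at harvest time T and continuous discount rate r; the values and the present-net-worth formula are illustrative assumptions, not those of the paper.

      import math

      # Hypothetical single-rotation investment (illustrative values only).
      C, R, T, r = 200.0, 3000.0, 40.0, 0.05

      def pnw(C, R, T, r):
          """Present net worth: discounted harvest revenue minus establishment cost."""
          return R * math.exp(-r * T) - C

      # Partial derivatives measure how estimation errors in each input shift PNW.
      dPNW_dR = math.exp(-r * T)               # per-dollar error in harvest revenue
      dPNW_dC = -1.0                           # per-dollar error in establishment cost
      dPNW_dT = -r * R * math.exp(-r * T)      # per-year error in harvest timing
      dPNW_dr = -T * R * math.exp(-r * T)      # per-unit error in the discount rate

      print(pnw(C, R, T, r), dPNW_dR, dPNW_dC, dPNW_dT, dPNW_dr)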

  2. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    EPA Science Inventory

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...

  4. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  6. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  7. Error estimation in the neural network solution of ordinary differential equations.

    PubMed

    Filici, Cristian

    2010-06-01

    In this article a method of error estimation for the neural approximation of the solution of an Ordinary Differential Equation is presented. Some examples of the application of the method support the theory presented.

  8. The use of neural networks in identifying error sources in satellite-derived tropical SST estimates.

    PubMed

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) of the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is reduced from 0.66 K to 0.44 K and the MAPE is 1.3%.
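
    A toy sketch of the same workflow on synthetic data: compute RMSE and MAPE of uncorrected satellite SST, then train a small back-propagation regressor on the identified error sources (air temperature, relative humidity, wind speed) to correct it. The synthetic relations and network architecture are assumptions for illustration; scikit-learn's MLPRegressor stands in for the BPN used in the paper.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(4)

      # Synthetic stand-ins for buoy SST, satellite SST and the error-source predictors.
      n = 2000
      air_t = rng.uniform(295.0, 305.0, n)          # air temperature [K]
      rh = rng.uniform(0.5, 1.0, n)                 # relative humidity [-]
      wind = rng.uniform(0.0, 12.0, n)              # wind speed [m/s]
      sst_true = rng.uniform(298.0, 303.0, n)
      sst_sat = sst_true + 0.1 * (air_t - 300.0) - 0.8 * (rh - 0.75) + rng.normal(0, 0.3, n)

      def rmse(a, b):
          return np.sqrt(np.mean((a - b) ** 2))

      def mape(a, b):
          return np.mean(np.abs((a - b) / b)) * 100.0

      # Back-propagation network mapping (satellite SST, error sources) to buoy SST.
      X = np.column_stack([sst_sat, air_t, rh, wind])
      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
      model.fit(X[:1500], sst_true[:1500])
      sst_corr = model.predict(X[1500:])

      print(rmse(sst_sat[1500:], sst_true[1500:]), rmse(sst_corr, sst_true[1500:]))
      print(mape(sst_corr, sst_true[1500:]))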

  9. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-04-01

    A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have bigger impacts depending on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute to significant uncertainties in the methane estimates by inverse modelling, especially when small spatial scales are invoked. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher resolution models. The analysis of the methane fluxes estimated in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of the atmospheric transport would make the estimations more accurate. Likewise, errors of the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane fluxes.

  10. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    SciTech Connect

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  11. Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity and Efficient Estimators

    DTIC Science & Technology

    2012-09-27

    Report title: Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators. Authors: Steven T. Flammia, David Gross, Yi-Kai Liu... Subject terms: quantum tomography, compressed sensing. Abstract (truncated): Intuitively, if a density operator has small rank, then

  12. Linear constraint relations in biochemical reaction systems: II. Diagnosis and estimation of gross errors.

    PubMed

    van der Heijden, R T; Romein, B; Heijnen, J J; Hellinga, C; Luyben, K C

    1994-01-05

    Conservation equations derived from elemental balances, heat balances, and metabolic stoichiometry can be used to constrain the values of conversion rates of relevant components. In the present work, their use will be discussed for detection and localization of significant errors of the following types: (1) at least one of the primary measurements has a significant error (gross measurement error); (2) the system definition is incorrect: a component (a) is not included in the system description, or (b) has a composition different from that specified; (3) the specified variances are too small, resulting in a too-sensitive test. The error diagnosis technique presented here is based on the following: given the conservation equations, for each set of measured rates a vector of residuals of these equations can be constructed, of which the direction is related to the error source, while its length is a measure of the error size. The similarity of the directions of such a residual vector and certain compare vectors, each corresponding to a specific error source, is considered in a statistical test. If two compare vectors that result from different error sources have (almost) the same direction, errors of these types cannot be distinguished from each other. For each possible error in the primary measurements of flows and concentrations, the compare vector can be constructed a priori, thus allowing analysis beforehand of which errors can be observed. Therefore, the detectability of certain errors likely to occur can be ensured by selecting a proper measurement set. The possibility of performing this analysis before experiments are carried out is an important advantage, providing a profound understanding of the detectability of errors. The characteristics of the method with respect to diagnosis of simultaneous errors and error size estimation are discussed and compared to those of the serial elimination method and the serial compensation strategy, published elsewhere.

  13. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of error is proportional to N_h^{-1/2}, which are optimal asymptotics. The methodology is verified with numerical experiments.

  14. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter

  15. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    DOE PAGES

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; ...

    2015-12-01

    Our study describes complications that angular direct ionization events introduce into space error rate predictions. In particular, the prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.

  16. Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates

    SciTech Connect

    Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.; Draper, Jeffrey

    2015-12-01

    Our study describes complications that angular direct ionization events introduce into space error rate predictions. In particular, the prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.

  17. Estimation and implications of random errors in whole-body dosimetry for targeted radionuclide therapy.

    PubMed

    Flux, Glenn D; Guy, Matthew J; Beddows, Ruth; Pryor, Matthew; Flower, Maggie A

    2002-09-07

    For targeted radionuclide therapy, the level of activity to be administered is often determined from whole-body dosimetry performed on a pre-therapy tracer study. The largest potential source of error in this method is inconsistent or inaccurate activity retention measurements. The main aim of this study was to develop a simple method to quantify the uncertainty in the absorbed dose due to these inaccuracies. A secondary aim was to assess the effect of error propagation from the results of the tracer study to predictive absorbed dose estimates for the therapy as a result of using different radionuclides for each. Standard error analysis was applied to the MIRD schema for absorbed dose calculations. An equation was derived to describe the uncertainty in the absorbed dose estimate due solely to random errors in activity-time data, requiring only these data as input. Two illustrative examples are given. It is also shown that any errors present in the dosimetry calculations following the tracer study will propagate to errors in predictions made for the therapy study according to the ratio of the respective effective half-lives. If the therapy isotope has a much longer physical half-life than the tracer isotope (as is the case, for example, when using 123I as a tracer for 131I therapy), the propagation of errors can be significant. The equations derived provide a simple means to estimate two potentially large sources of error in whole-body absorbed dose calculations.
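
    A minimal sketch of propagating random errors in activity-time data to a cumulated-activity (and hence absorbed-dose) estimate, using a mono-exponential retention fit and first-order error propagation. The retention values, the 5% measurement error and the mono-exponential model are illustrative assumptions, not the equations derived in the paper.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(5)

      # Synthetic whole-body retention data from a tracer study (illustrative values).
      t = np.array([1.0, 4.0, 24.0, 48.0, 96.0])           # hours post-administration
      A0_true, lam_true = 1.0, 0.02                         # fraction retained, 1/h
      A_meas = A0_true * np.exp(-lam_true * t) * (1 + rng.normal(0, 0.05, t.size))

      def retention(t, A0, lam):
          return A0 * np.exp(-lam * t)

      popt, pcov = curve_fit(retention, t, A_meas, p0=[1.0, 0.01])
      A0, lam = popt

      # Cumulated activity per unit administered activity, A_tilde = A0 / lam, and
      # its uncertainty from first-order propagation through the fitted covariance.
      A_tilde = A0 / lam
      J = np.array([1.0 / lam, -A0 / lam**2])               # d(A_tilde)/d(A0, lam)
      rel_err = np.sqrt(J @ pcov @ J) / A_tilde

      # The absorbed dose D = S * A_tilde inherits the same relative uncertainty;
      # when tracer and therapy isotopes differ, errors scale further with the
      # ratio of the effective half-lives, as noted in the abstract above.
      print(A_tilde, rel_err)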

  18. Speech enhancement via two-stage dual tree complex wavelet packet transform with a speech presence probability estimator

    NASA Astrophysics Data System (ADS)

    Sun, Pengfei; Qin, Jun

    2017-02-01

    In this paper, a two-stage dual tree complex wavelet packet transform (DTCWPT) based speech enhancement algorithm is proposed, in which a speech presence probability (SPP) estimator and a generalized minimum mean squared error (MMSE) estimator are developed. To overcome the drawback of signal distortions caused by the down sampling of the WPT, a two-stage analytic decomposition concatenating undecimated WPT (UWPT) and decimated WPT is employed. An SPP estimator in the DTCWPT domain is derived based on a generalized Gamma distribution of speech and a Gaussian noise assumption. The validation results show that the proposed algorithm obtains improved perceptual evaluation of speech quality (PESQ) and segmental signal-to-noise ratio (SegSNR) scores for low-SNR nonstationary noise, compared with four other state-of-the-art speech enhancement algorithms, including optimally modified LSA (OM-LSA), soft masking using a posteriori SNR uncertainty (SMPO), a posteriori SPP based MMSE estimation (MMSE-SPP), and adaptive Bayesian wavelet thresholding (BWT).

  19. A non-orthogonal SVD-based decomposition for phase invariant error-related potential estimation.

    PubMed

    Phlypo, Ronald; Jrad, Nisrine; Rousseau, Sandra; Congedo, Marco

    2011-01-01

    The estimation of the Error Related Potential from a set of trials is a challenging problem. Indeed, the Error Related Potential is of low amplitude compared to the ongoing electroencephalographic activity. In addition, simple summing over the different trials is prone to errors, since the waveform does not appear at an exact latency with respect to the trigger. In this work, we propose a method to cope with the discrepancy of these latencies of the Error Related Potential waveform and offer a framework in which the estimation of the Error Related Potential waveform reduces to a simple Singular Value Decomposition of an analytic waveform representation of the observed signal. The followed approach is promising, since we are able to explain a higher portion of the variance of the observed signal with fewer components in the expansion.

  20. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase needed to reconstruct the missing areas.
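
    A toy error-reduction (ER) loop in the spirit of the abstract: the estimated Fourier magnitude is enforced in the frequency domain and the known intensities are re-imposed in the image domain. Here the magnitude estimate is simply borrowed from the complete patch, standing in for the patch-selection and magnitude-estimation scheme described above.

      import numpy as np

      # Toy patch with a missing region.
      patch = np.outer(np.sin(np.linspace(0, 3 * np.pi, 32)),
                       np.cos(np.linspace(0, 2 * np.pi, 32)))
      mask = np.ones_like(patch, dtype=bool)
      mask[12:20, 12:20] = False                         # missing area
      magnitude_est = np.abs(np.fft.fft2(patch))         # stand-in magnitude estimate

      x = np.where(mask, patch, 0.0)                     # initial guess
      for _ in range(200):
          F = np.fft.fft2(x)
          F = magnitude_est * np.exp(1j * np.angle(F))   # enforce estimated magnitude
          x = np.real(np.fft.ifft2(F))
          x[mask] = patch[mask]                          # re-impose known intensities

      print(np.max(np.abs(x[~mask] - patch[~mask])))     # reconstruction error in the hole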

  1. Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team

    We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.

  2. How Well Can We Estimate Error Variance of Satellite Precipitation Data Around the World?

    NASA Astrophysics Data System (ADS)

    Gebregiorgis, A. S.; Hossain, F.

    2014-12-01

    The traditional approach to measuring precipitation by placing a probe on the ground will likely never be adequate or affordable in most parts of the world. Fortunately, satellites today provide a continuous global bird's-eye view (above ground) at any given location. However, the usefulness of such precipitation products for hydrological applications depends on their error characteristics. Thus, providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating satellite precipitation error variance using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. The goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and season are considered as the governing factors used to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of the satellite precipitation rate. The error variance models were tested on the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge based precipitation product was used to validate the error variance of the satellite precipitation products. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, a quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
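
    A minimal sketch of fitting a nonlinear error-variance model as a function of satellite rain rate for one topography/climate/season class; the power-law form, synthetic data and chi-square error model are illustrative assumptions, not the regression actually calibrated in the study.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(7)

      # Synthetic pairs of satellite rain rate and squared error against gauges
      # for one geophysical class (values are illustrative only).
      rate = rng.uniform(0.1, 20.0, 500)                  # mm/h
      err_var_true = 0.4 * rate ** 1.3                    # assumed power-law behaviour
      sq_error = err_var_true * rng.chisquare(1, 500)     # one squared error per sample

      def model(rate, a, b):
          return a * rate ** b                            # nonlinear error-variance model

      popt, _ = curve_fit(model, rate, sq_error, p0=[1.0, 1.0])
      print(popt)   # fitted (a, b); transfer to regions with similar geophysical features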

  3. A function space approach to state and model error estimation for elliptic systems

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    An approach is advanced for the concurrent estimation of the state and of the model errors of a system described by elliptic equations. The estimates are obtained by a deterministic least-squares approach that seeks to minimize a quadratic functional of the model errors, or equivalently, to find the vector of smallest norm subject to linear constraints in a suitably defined function space. The minimum norm solution can be obtained by solving either a Fredholm integral equation of the second kind for the case with continuously distributed data or a related matrix equation for the problem with discretely located measurements. Solution of either one of these equations is obtained in a batch-processing mode in which all of the data is processed simultaneously or, in certain restricted geometries, in a spatially scanning mode in which the data is processed recursively. After the methods for computation of the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the corresponding estimation error is conducted. Based on this analysis, explicit expressions for the mean-square estimation error associated with both the state and model error estimates are then developed.
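
    The discrete analogue of the minimum-norm problem described above can be sketched in a few lines: among all model-error vectors consistent with the linear constraints, pick the one of smallest norm. The random system below is purely illustrative.

      import numpy as np

      rng = np.random.default_rng(8)

      # Find the model-error vector w of smallest norm satisfying A w = b,
      # with fewer constraints than unknowns (under-determined system).
      m, n = 15, 60
      A = rng.standard_normal((m, n))
      b = rng.standard_normal(m)

      w = A.T @ np.linalg.solve(A @ A.T, b)       # w = A^T (A A^T)^{-1} b
      print(np.linalg.norm(A @ w - b), np.linalg.norm(w))

      # np.linalg.lstsq returns the same minimum-norm solution for such systems;
      # any other solution w + z with A z = 0 has a strictly larger norm.
      w_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]
      print(np.allclose(w, w_lstsq))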

  4. Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.

    PubMed

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2014-05-01

    Bayesian estimation applied to temperature based death time estimation was recently introduced as the conditional probability distribution or CPD-method by Biermann and Potente. The CPD-method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed the paradox that in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise the CPD-computed probabilities will decrease. We therefore advise not to use the CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimates still overlap the true death time interval.

  5. Automatic Time Stepping with Global Error Control for Groundwater Flow Models

    SciTech Connect

    Tang, Guoping

    2008-09-01

    An automatic time stepping scheme with global error control is proposed for the time integration of the diffusion equation to simulate groundwater flow in confined aquifers. The scheme is based on an a posteriori error estimate for the discontinuous Galerkin (dG) finite element methods. A stability factor is involved in the error estimate, and it is used to adapt the time step and control the global temporal error for the backward difference method. The stability factor can be estimated by solving a dual problem. The stability factor is not sensitive to the accuracy of the dual solution, and the overhead computational cost can be minimized by solving the dual problem using large time steps. Numerical experiments are conducted to show the application and the performance of the automatic time stepping scheme. Implementation of the scheme can lead to improvement in accuracy and efficiency for groundwater flow models.
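
    The sketch below illustrates adaptive time stepping driven by a local error estimate for a backward-difference discretization of a diffusion problem; it uses a generic step-doubling error estimate and a simple controller, not the dG-based a posteriori estimate and dual-problem stability factor of the report.

      import numpy as np

      # 1-D diffusion semi-discretized in space: du/dt = A u, backward Euler in time.
      nx = 50
      dx = 1.0 / (nx + 1)
      A = (np.diag(-2.0 * np.ones(nx)) + np.diag(np.ones(nx - 1), 1)
           + np.diag(np.ones(nx - 1), -1)) / dx**2
      u = np.sin(np.pi * np.arange(1, nx + 1) * dx)     # initial condition
      I = np.eye(nx)

      def be_step(u, dt):
          return np.linalg.solve(I - dt * A, u)

      t, T, dt, tol = 0.0, 0.1, 1e-4, 1e-5              # tol plays the role of an error budget
      while t < T:
          dt = min(dt, T - t)
          u_full = be_step(u, dt)
          u_half = be_step(be_step(u, dt / 2), dt / 2)  # reference with two half steps
          err = np.linalg.norm(u_full - u_half, np.inf) # local error estimate
          if err <= tol:
              u, t = u_half, t + dt                     # accept the step
          # adapt dt: backward Euler local error scales like dt^2
          dt *= min(2.0, max(0.2, 0.9 * np.sqrt(tol / max(err, 1e-16))))

      print(t, np.max(u))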

  6. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    DOE PAGES

    Locatelli, R.; Bousquet, P.; Chevallier, F.; ...

    2013-10-08

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly question the

  7. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    SciTech Connect

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Rigby, M.; Saito, R.

    2013-10-08

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. Here in our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different

  8. Sparse representation discretization errors in multi-sensor radar target motion estimation

    NASA Astrophysics Data System (ADS)

    Azodi, Hossein; Siart, Uwe; Eibert, Thomas F.

    2017-09-01

    In a multi-sensor radar for the estimation of target motion states, more than one transmitter and receiver module is utilized to estimate the positions and velocities of targets, also known as motion states. To apply compressed sensing (CS) reconstruction algorithms, the surveillance space needs to be discretized. The effect of the additive errors due to this discretization is studied in this paper. The errors are considered as an additive noise term in the well-known under-determined CS problem. By employing properties of these errors, analytical models for their average and variance are derived. Numerous simulations are carried out to verify the analytical model empirically. Furthermore, the probability density functions of the discretization errors are estimated. The analytical model is useful for the optimization of the performance, the efficiency and the success rate of CS reconstruction for radar as well as many other applications.

  9. Improved maximum likelihood detection for mitigating fading estimation error in free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Lu; Wu, Zhiyong; Zhang, Yaoyu; Detian, Huang

    2013-01-01

    To mitigate the impact of the error between the estimated channel fading coefficient and the perfect fading coefficient on the bit error rate (BER), a priori conditional probability density function averaging over the estimation error is proposed. Then, an improved maximum-likelihood (ML) symbol-by-symbol detection is derived for free-space optical communication systems that implement pilot symbol assisted modulation. To reduce complexity, a closed-form suboptimal improved ML detection is deduced using a distribution approximation. Numerical results confirm that a BER performance improvement can be reached by the improved ML detection, and that its suboptimal version performs as well as it does. Therefore, they both outperform classical ML detection, which does not consider the channel estimation error.

  10. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves yielding more accurate total energies, however not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  11. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    NASA Astrophysics Data System (ADS)

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-01

    The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. We estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc-Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves yielding more accurate total energies, however not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc-Zn. The recently generated pseudopotentials of Krogel et al. [Phys. Rev. B 93, 075143 (2016)] reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. [J. Chem. Phys. 129, 164115 (2008)] by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. For the Sc-Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.

  12. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning the laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from the numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reduction in spatial and temporal scales at higher Reynolds numbers leads to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.

  13. Error estimation in multitemporal InSAR deformation time series, with application to Lanzarote, Canary Islands

    NASA Astrophysics Data System (ADS)

    GonzáLez, Pablo J.; FernáNdez, José

    2011-10-01

    Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation errors, which is why multitemporal interferometric techniques using a series of interferograms have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). We describe the method, which uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight ≈ subsidence), where the last eruption occurred in 1730-1736. The deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
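
    A toy sketch of the Monte Carlo propagation idea for a small-baseline inversion at a single pixel: interferograms (differences of acquisition displacements) are inverted by weighted least squares, and resampling with the assumed interferogram errors yields the precision of the displacement time series. The network geometry and error levels are illustrative assumptions, not those of the Lanzarote data set.

      import numpy as np

      rng = np.random.default_rng(9)

      # Five acquisitions, six small-baseline interferograms (pairs i -> j), one pixel.
      pairs = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 2), (2, 4)]
      n_acq = 5
      true_disp = np.array([0.0, -1.0, -2.5, -3.0, -4.0])   # mm, relative to acquisition 0

      # Design matrix: interferogram (i, j) observes d_j - d_i, with d_0 = 0.
      A = np.zeros((len(pairs), n_acq - 1))
      for k, (i, j) in enumerate(pairs):
          A[k, j - 1] = 1.0
          if i > 0:
              A[k, i - 1] = -1.0

      sigma = np.array([0.5, 0.5, 0.5, 0.5, 1.0, 1.0])      # interferogram errors (mm)
      obs = A @ true_disp[1:] + rng.normal(0, sigma)
      W = np.diag(1.0 / sigma**2)

      def wls(y):
          return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)  # weighted least squares

      d_hat = wls(obs)

      # Monte Carlo resampling: perturb the interferograms with their own errors and
      # repeat the inversion to obtain the precision of the displacement time series.
      samples = np.array([wls(obs + rng.normal(0, sigma)) for _ in range(1000)])
      print(d_hat, samples.std(axis=0))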

  14. A function space approach to state and model error estimation for elliptic systems

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    An approach is advanced for the concurrent estimation of the state and of the model errors of a system described by elliptic equations. The estimates are obtained by a deterministic least-squares approach that seeks to minimize a quadratic functional of the model errors, or equivalently, to find the vector of smallest norm subject to linear constraints in a suitably defined function space. The minimum norm solution can be obtained by solving either a Fredholm integral equation of the second kind for the case with continuously distributed data or a related matrix equation for the problem with discretely located measurements. Solution of either one of these equations is obtained in a batch-processing mode in which all of the data is processed simultaneously or, in certain restricted geometries, in a spatially scanning mode in which the data is processed recursively. After the methods for computation of the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the corresponding estimation error is conducted. Based on this analysis, explicit expressions for the mean-square estimation error associated with both the state and model error estimates are then developed. While this paper focuses on theoretical developments, applications arising in the area of large structure static shape determination are contained in a closely related paper (Rodriguez and Scheid, 1982).

  15. Estimating effective model parameters for heterogeneous unsaturated flow using error models for bias correction

    NASA Astrophysics Data System (ADS)

    Erdal, D.; Neuweiler, I.; Huisman, J. A.

    2012-06-01

    Estimates of effective parameters for unsaturated flow models are typically based on observations taken on length scales smaller than the modeling scale. This complicates parameter estimation for heterogeneous soil structures. In this paper we attempt to account for soil structure not present in the flow model by using so-called external error models, which correct for bias in the likelihood function of a parameter estimation algorithm. The performance of external error models is investigated using data from three virtual reality experiments and one real-world experiment. All experiments are multistep outflow and inflow experiments in columns packed with two sand types with different structures. First, effective parameters for equivalent homogeneous models for the different columns were estimated using soil moisture measurements taken at a few locations. This resulted in parameters that had a low predictive power for the averaged states of the soil moisture if the measurements did not adequately capture a representative elementary volume of the heterogeneous soil column. Second, parameter estimation was performed using error models that attempted to correct for bias introduced by soil structure not taken into account in the first estimation. Three different error models that required different amounts of prior knowledge about the heterogeneous structure were considered. The results showed that the introduction of an error model can help to obtain effective parameters with more predictive power with respect to the average soil water content in the system. This was especially true when the dynamic behavior of the flow process was analyzed.

  16. The displacement estimation error back-propagation (DEEP) method for a multiple structural displacement monitoring system

    NASA Astrophysics Data System (ADS)

    Jeon, H.; Shin, J. U.; Myung, H.

    2013-04-01

    The visually servoed paired structured light system (ViSP) has been found to be useful in estimating 6-DOF relative displacement. The system is composed of two screens facing each other, each with one or two lasers, a 2-DOF manipulator and a camera. The displacement between the two sides is estimated by observing the positions of the projected laser beams and the rotation angles of the manipulators. To apply the system to massive structures, the whole area should be partitioned and a ViSP module placed in each partition in a cascaded manner. The estimated displacement between adjoining ViSPs is combined with the next partition so that the entire movement of the structure can be estimated. The multiple ViSPs, however, have a major problem in that the error is propagated through the partitions. Therefore, a displacement estimation error back-propagation (DEEP) method which uses a Newton-Raphson or gradient descent formulation inspired by the error back-propagation algorithm is proposed. In this method, the estimated displacement from the ViSP is updated using the error back-propagated from a fixed position. To validate the performance of the proposed method, various simulations and experiments have been performed. The results show that the proposed method significantly reduces the propagation error throughout the multiple modules.
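
    The sketch below is loosely inspired by the error back-propagation idea: cascaded module estimates are updated by gradient descent so that their sum matches a displacement known at a fixed position, while staying close to the original estimates. The penalty weight, step size and values are hypothetical; this is not the ViSP formulation itself.

      import numpy as np

      # Hypothetical cascaded modules: each estimates an incremental displacement,
      # and the sum should match a displacement known at a fixed reference point.
      d_est = np.array([2.1, 1.8, 2.4, 1.9])       # module estimates (mm), with errors
      D_fixed = 7.6                                 # known total displacement at the fixed end

      d = d_est.copy()
      lr, lam = 0.1, 0.5                            # step size and closeness penalty (assumed)
      for _ in range(200):
          closure_error = d.sum() - D_fixed         # error back-propagated from the fixed position
          # gradient of 0.5*closure_error^2 + 0.5*lam*||d - d_est||^2
          grad = closure_error + lam * (d - d_est)
          d -= lr * grad

      print(d, d.sum())                             # corrected estimates, sum close to D_fixed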

  17. Improved Margin of Error Estimates for Proportions in Business: An Educational Example

    ERIC Educational Resources Information Center

    Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael

    2015-01-01

    This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
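
    The Agresti & Coull adjustment adds z^2/2 pseudo-successes and z^2/2 pseudo-failures before applying the usual Wald interval, which is what makes it easy to implement in a spreadsheet or a few lines of code. A minimal sketch (the sample counts below are made up for illustration):

```python
from statistics import NormalDist
import math

def adjusted_wald_ci(successes, n, confidence=0.95):
    """Agresti-Coull 'adjusted Wald' interval: add z^2/2 pseudo-successes and failures."""
    z = NormalDist().inv_cdf(1.0 - (1.0 - confidence) / 2.0)
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2.0) / n_adj
    moe = z * math.sqrt(p_adj * (1.0 - p_adj) / n_adj)
    return p_adj - moe, p_adj + moe, moe

# Hypothetical 'extreme' small sample: 2 successes out of 20 trials
lo, hi, moe = adjusted_wald_ci(2, 20)
print(f"95% CI: ({lo:.3f}, {hi:.3f}), margin of error {moe:.3f}")
```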

  18. Estimation of a cover-type change matrix from error-prone data

    Treesearch

    Steen Magnussen

    2009-01-01

    Coregistration and classification errors seriously compromise per-pixel estimates of land cover change. A more robust estimation of change is proposed in which adjacent pixels are grouped into 3x3 clusters and treated as a unit of observation. A complete change matrix is recovered in a two-step process. The diagonal elements of a change matrix are recovered from...

  19. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of the item response curve (IRC) is of both theoretical and operational concern. This estimation, often used in item analysis in testing programs, is biased when observed scores are used as the regressor, because the observed scores are contaminated by measurement error. In this study, we investigate…

  20. Formal Estimation of Errors in Computed Absolute Interaction Energies of Protein-ligand Complexes

    PubMed Central

    Faver, John C.; Benson, Mark L.; He, Xiao; Roberts, Benjamin P.; Wang, Bing; Marshall, Michael S.; Kennedy, Matthew R.; Sherrill, C. David; Merz, Kenneth M.

    2011-01-01

    A largely unsolved problem in computational biochemistry is the accurate prediction of binding affinities of small ligands to protein receptors. We present a detailed analysis of the systematic and random errors present in computational methods through the use of error probability density functions, specifically for computed interaction energies between chemical fragments comprising a protein-ligand complex. An HIV-II protease crystal structure with a bound ligand (indinavir) was chosen as a model protein-ligand complex. The complex was decomposed into twenty-one (21) interacting fragment pairs, which were studied using a number of computational methods. The chemically accurate complete basis set coupled cluster theory (CCSD(T)/CBS) interaction energies were used as reference values to generate our error estimates. In our analysis we observed significant systematic and random errors in most methods, which was surprising especially for parameterized classical and semiempirical quantum mechanical calculations. After propagating these fragment-based error estimates over the entire protein-ligand complex, our total error estimates for many methods are large compared to the experimentally determined free energy of binding. Thus, we conclude that statistical error analysis is a necessary addition to any scoring function attempting to produce reliable binding affinity predictions. PMID:21666841

  1. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
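
    As a toy numerical sketch of the distinction (not the paper's crop-model analysis, and with the random-effects ANOVA replaced by a simple bias-plus-variance split over a hypothetical ensemble of model configurations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hindcast set-up: observations and predictions from an ensemble of
# model configurations (rows: ensemble members, columns: prediction situations).
obs = rng.normal(5.0, 1.0, size=25)
ens_pred = obs + rng.normal(0.5, 1.0, size=(10, 25))   # biased, scattered ensemble

# MSEP_fixed for one particular configuration (here: member 0)
msep_fixed = np.mean((ens_pred[0] - obs) ** 2)

# MSEP_uncertain(X): squared bias of the ensemble mean (from hindcasts)
# plus the model-variance term (spread across structures/parameters/inputs)
sq_bias = np.mean((ens_pred.mean(axis=0) - obs) ** 2)
model_var = np.mean(ens_pred.var(axis=0, ddof=1))
msep_uncertain = sq_bias + model_var

print(f"MSEP_fixed (member 0): {msep_fixed:.2f}")
print(f"MSEP_uncertain(X)    : {msep_uncertain:.2f}  (bias^2 {sq_bias:.2f} + variance {model_var:.2f})")
```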

  3. Estimating phase errors from pupil discontinuities from simulated on sky data: examples with VLT and Keck

    NASA Astrophysics Data System (ADS)

    Lamb, Masen; Correia, Carlos; Sauvage, Jean-François; Véran, Jean-Pierre; Andersen, David; Vigan, Arthur; Wizinowich, Peter; van Dam, Marcos; Mugnier, Laurent; Bond, Charlotte

    2016-07-01

    We propose and apply two methods for estimating phase discontinuities for two realistic scenarios on VLT and Keck. The methods use both phase diversity and a form of image sharpening. For the case of VLT, we simulate the `low wind effect' (LWE) which is responsible for focal plane errors in low wind and good seeing conditions. We successfully estimate the LWE using both methods, and show that applying the two methods both independently and together yields promising results. We also show the use of single image phase diversity in the LWE estimation, and show that it too yields promising results. Finally, we simulate segmented piston effects on Keck/NIRC2 images and successfully recover the induced phase errors using single image phase diversity. We also show that on Keck we can estimate both the segmented piston errors and any Zernike modes affiliated with the non-common path.

  4. The role of measurement error in estimating levels of physical activity.

    PubMed

    Ferrari, Pietro; Friedenreich, Christine; Matthews, Charles E

    2007-10-01

    Epidemiologic studies have demonstrated that physical inactivity is an important determinant of numerous chronic diseases. However, self-reported estimates of physical activity contain measurement errors responsible for attenuating relative risk estimates. A validation study conducted in 2002-2003 at the Alberta Cancer Board (Canada) included a physical activity questionnaire, four 7-day physical activity logs, and four sets of accelerometer data from 154 study subjects (51% women) aged 35-65 years. The authors used a measurement error model to evaluate the validity of the different types of physical activity assessment and the attenuation factors, after taking into account error correlations between self-reported measurements. The validity coefficients, which express the correlation between measured and true exposure, were higher for accelerometers (0.81, 95% confidence interval (CI): 0.76, 0.85) compared with the physical activity log (0.57, 95% CI: 0.47, 0.66) and questionnaire measurements (0.26, 95% CI: 0.12, 0.40). The estimate of the attenuation factor for questionnaires was 0.13 (95% CI: 0.05, 0.23). Accuracy of physical activity questionnaire measurements was higher for men than for women, for younger individuals, and for those with a lower body mass index. Because the degree of attenuation in relative risk estimates is substantial, after the role of error correlations was considered, validation studies quantifying the impact of measurement errors on physical activity estimates are essential to evaluate the impact of physical inactivity on health.
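
    The attenuation factor referred to above is, in the classical measurement-error model, the reliability ratio var(true exposure)/var(observed exposure). The simulated sketch below (hypothetical exposure-outcome data, not the Alberta validation data, and ignoring the error correlations the paper accounts for) shows the slope attenuation and its correction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical 'true' activity exposure and a health outcome linearly related to it
x_true = rng.normal(0.0, 1.0, n)
y = 1.0 + 0.5 * x_true + rng.normal(0.0, 1.0, n)

# Self-report adds classical (independent, additive) measurement error
x_obs = x_true + rng.normal(0.0, 2.0, n)

slope_true = np.polyfit(x_true, y, 1)[0]
slope_obs = np.polyfit(x_obs, y, 1)[0]

# Classical-error attenuation factor: var(true) / var(observed)
lam = x_true.var() / x_obs.var()
print(f"true slope {slope_true:.3f}, observed slope {slope_obs:.3f}, "
      f"attenuation factor {lam:.3f}, corrected slope {slope_obs / lam:.3f}")
```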

  5. Detecting Identity by Descent and Estimating Genotype Error Rates in Sequence Data

    PubMed Central

    Browning, Brian L.; Browning, Sharon R.

    2013-01-01

    Existing methods for identity by descent (IBD) segment detection were designed for SNP array data, not sequence data. Sequence data have a much higher density of genetic variants and a different allele frequency distribution, and can have higher genotype error rates. Consequently, best practices for IBD detection in SNP array data do not necessarily carry over to sequence data. We present a method, IBDseq, for detecting IBD segments in sequence data and a method, SEQERR, for estimating genotype error rates at low-frequency variants by using detected IBD. The IBDseq method estimates probabilities of genotypes observed with error for each pair of individuals under IBD and non-IBD models. The ratio of estimated probabilities under the two models gives a LOD score for IBD. We evaluate several IBD detection methods that are fast enough for application to sequence data (IBDseq, Beagle Refined IBD, PLINK, and GERMLINE) under multiple parameter settings, and we show that IBDseq achieves high power and accuracy for IBD detection in sequence data. The SEQERR method estimates genotype error rates by comparing observed and expected rates of pairs of homozygote and heterozygote genotypes at low-frequency variants in IBD segments. We demonstrate the accuracy of SEQERR in simulated data, and we apply the method to estimate genotype error rates in sequence data from the UK10K and 1000 Genomes projects. PMID:24207118

  6. A Study on Estimating the Aiming Angle Error of Millimeter Wave Radar for Automobile

    NASA Astrophysics Data System (ADS)

    Kuroda, Hiroshi; Okai, Fumihiko; Takano, Kazuaki

    A 76 GHz millimeter wave radar has been developed for automotive applications such as ACC (Adaptive Cruise Control) and CWS (Collision Warning System). The radar is of the FSK (Frequency Shift Keying) monopulse type. It transmits two frequencies in a time-duplex manner and measures the distance and relative speed of targets. The monopulse feature detects the azimuth angle of targets without a scanning mechanism. Conventionally, a radar unit is aimed mechanically, but a self-aiming capability that detects and corrects the aiming angle error automatically has been required. A new algorithm, which estimates the aiming angle error and the vehicle speed sensor error simultaneously, has been proposed and tested. The algorithm is based on the relationship between the relative speed and azimuth angle of stationary objects, and the least squares method is used for the calculation. The algorithm is applied to measured data from the millimeter wave radar, resulting in an aiming angle estimation error of less than 0.6 degrees.
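
    The abstract states only the idea of the algorithm. If one assumes that a stationary object produces a radial relative speed of -v*cos(measured azimuth + aiming error) and that the speed sensor has an unknown scale error, both unknowns follow from a linear least-squares fit; the sketch below illustrates that relationship with made-up numbers and is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical set-up: stationary roadside objects seen by a forward-looking radar.
true_aim_err_deg = 1.5      # radar boresight misalignment to estimate
true_speed_scale = 1.03     # speed-sensor scale error (indicated = scale * true)
v_true = 25.0               # vehicle speed, m/s
v_ind = true_speed_scale * v_true

az_meas = np.deg2rad(rng.uniform(-8.0, 8.0, 200))          # measured azimuths
# Radial relative speed of a stationary object: -v_true * cos(measured azimuth + aiming error)
v_rel = -v_true * np.cos(az_meas + np.deg2rad(true_aim_err_deg))
v_rel += rng.normal(0.0, 0.05, v_rel.size)                 # measurement noise

# Linear model: v_rel = c1 * (-v_ind*cos(az)) + c2 * (v_ind*sin(az)),
# with c1 = cos(aim_err)/scale and c2 = sin(aim_err)/scale.
A = np.column_stack([-v_ind * np.cos(az_meas), v_ind * np.sin(az_meas)])
(c1, c2), *_ = np.linalg.lstsq(A, v_rel, rcond=None)

aim_err_est = np.rad2deg(np.arctan2(c2, c1))
scale_est = 1.0 / np.hypot(c1, c2)
print(f"estimated aiming error {aim_err_est:.2f} deg, speed-scale error {scale_est:.3f}")
```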

  7. National suicide rates a century after Durkheim: do we know enough to estimate error?

    PubMed

    Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W

    2010-06-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.

  8. Population size estimation in Yellowstone wolves with error-prone noninvasive microsatellite genotypes.

    PubMed

    Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas

    2003-07-01

    Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias.

  9. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates.

    PubMed

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended.

  10. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    PubMed Central

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended. PMID:25670023

  11. Effect of assay measurement error on parameter estimation in concentration-QTc interval modeling.

    PubMed

    Bonate, Peter L

    2013-01-01

    Linear mixed-effects models (LMEMs) of concentration-double-delta QTc intervals (QTc intervals corrected for placebo and baseline effects) assume that the concentration measurement error is negligible, which is an incorrect assumption. Previous studies have shown in linear models that independent variable error can attenuate the slope estimate with a corresponding increase in the intercept. Monte Carlo simulation was used to examine the impact of assay measurement error (AME) on the parameter estimates of an LMEM and nonlinear MEM (NMEM) concentration-ddQTc interval model from a 'typical' thorough QT study. For the LMEM, the type I error rate was unaffected by assay measurement error. Significant slope attenuation (>10%) occurred when the AME exceeded 40%, independent of the sample size. Increasing AME also decreased the between-subject variance of the slope, increased the residual variance, and had no effect on the between-subject variance of the intercept. For a typical analytical assay having an assay measurement error of less than 15%, the relative bias in the estimates of the model parameters and variance components was less than 15% in all cases. The NMEM appeared to be more robust to AME, as most parameters were unaffected by measurement error. Monte Carlo simulation was then used to determine whether the simulation-extrapolation method of parameter bias correction could be applied to cases of large AME in LMEMs. For analytical assays with large AME ( > 30%), the simulation-extrapolation method could correct biased model parameter estimates to near-unbiased levels.
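
    The simulation-extrapolation method mentioned at the end can be illustrated with ordinary linear regression (the paper applies it to mixed-effects models): re-fit after adding extra measurement error of increasing variance, then extrapolate the coefficient back to the error-free case. The data and settings below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

# Hypothetical concentration-ddQTc data with known assay measurement error (AME)
conc_true = rng.lognormal(mean=1.0, sigma=0.5, size=n)
ddqtc = 2.0 + 1.5 * conc_true + rng.normal(0.0, 3.0, n)
sigma_u = 0.30 * conc_true.mean()                 # additive error, SD ~30% of mean conc.
conc_obs = conc_true + rng.normal(0.0, sigma_u, n)

# SIMEX: re-fit the slope after adding extra error with variance lambda * sigma_u^2,
# then extrapolate the slope back to lambda = -1 (i.e. no measurement error).
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = []
for lam in lambdas:
    sims = []
    for _ in range(200):   # average over simulated error realisations
        x = conc_obs + np.sqrt(lam) * rng.normal(0.0, sigma_u, n)
        sims.append(np.polyfit(x, ddqtc, 1)[0])
    slopes.append(np.mean(sims))

# Quadratic extrapolation of slope(lambda) to lambda = -1
coef = np.polyfit(lambdas, slopes, 2)
slope_simex = np.polyval(coef, -1.0)
print(f"naive slope {slopes[0]:.3f}, SIMEX-corrected slope {slope_simex:.3f} (true 1.5)")
```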

  12. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information.

    PubMed

    Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S

    2016-02-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors

  13. Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module are demonstrated using both simple model problems and complex three-dimensional examples on meshes with from 10^6 to 10^7 cells.
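
    The mechanism described above, evaluating the coarse-grid residual of the restricted fine-grid solution, can be shown on a 1-D model problem. The sketch below is only an analogue of the idea (a scalar Poisson problem rather than the Cartesian flow solver):

```python
import numpy as np

# 1-D model problem: -u'' = f on (0,1), u(0)=u(1)=0, exact u = sin(pi x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)

def solve_poisson(n):
    """Standard 2nd-order finite-difference solve on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return x, np.linalg.solve(A, f(x)), h

def residual(u, x, h):
    """Residual f - A_2h u at interior coarse points (boundary neighbours skipped)."""
    return f(x[1:-1]) - (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / h**2

# Fine-grid solve, then restrict by injection onto the coarse grid (every other point)
x_f, u_f, h_f = solve_poisson(127)
x_c, u_c, h_c = x_f[1::2], u_f[1::2], 2.0 * h_f

# Residual of the restricted fine solution on the coarse grid approximates the local
# truncation error, because the same discrete operator is used on both levels.
tau = residual(u_c, x_c, h_c)
print("max truncation-error estimate:", np.abs(tau).max())
```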

  14. The effect of sampling on estimates of lexical specificity and error rates.

    PubMed

    Rowland, Caroline F; Fletcher, Sarah L

    2006-11-01

    Studies based on naturalistic data are a core tool in the field of language acquisition research and have provided thorough descriptions of children's speech. However, these descriptions are inevitably confounded by differences in the relative frequency with which children use words and language structures. The purpose of the present work was to investigate the impact of sampling constraints on estimates of the productivity of children's utterances, and on the validity of error rates. Comparisons were made between five different sized samples of wh-question data produced by one child aged 2;8. First, we assessed whether sampling constraints undermined the claim (e.g. Tomasello, 2000) that the restricted nature of early child speech reflects a lack of adultlike grammatical knowledge. We demonstrated that small samples were equally likely to under- as overestimate lexical specificity in children's speech, and that the reliability of estimates varies according to sample size. We argued that reliable analyses require a comparison with a control sample, such as that from an adult speaker. Second, we investigated the validity of estimates of error rates based on small samples. The results showed that overall error rates underestimate the incidence of error in some rarely produced parts of the system and that analyses on small samples were likely to substantially over- or underestimate error rates in infrequently produced constructions. We concluded that caution must be used when basing arguments about the scope and nature of errors in children's early multi-word productions on analyses of samples of spontaneous speech.

  15. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have approximately 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 to infer this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift related change in the calibration of the MSU. In order to analyze the nature of these drift related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the

  16. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 to infer this error eo. We find eo can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift related change in the calibration of the MSU. In order to analyze the nature of these drift related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  17. Impacts of Characteristics of Errors in Radar Rainfall Estimates for Rainfall-Runoff Simulation

    NASA Astrophysics Data System (ADS)

    KO, D.; PARK, T.; Lee, T. S.; Shin, J. Y.; Lee, D.

    2015-12-01

    For flood prediction, weather radar is commonly employed to measure the amount of precipitation and its spatial distribution. However, rainfall estimated from radar contains uncertainty caused by errors such as beam blockage and ground clutter. Although previous studies have focused on removing errors from radar data, it is crucial to evaluate runoff volumes, which are influenced primarily by these radar errors. Furthermore, the rainfall resolution used in previous studies of rainfall uncertainty analysis or distributed hydrological simulation is too coarse for real applications. Therefore, in the current study, we tested the effects of radar rainfall errors on rainfall-runoff with a high-resolution approach called the spatial error model (SEM), in which random and cross-correlated radar errors are generated synthetically. A number of events for the Nam River dam region were tested to investigate the peak discharge from the basin as a function of error variance. The results indicate that spatially dependent error produces much higher variation in peak discharge than independent random error. To further investigate the effect of the magnitude of cross-correlation between radar errors, different magnitudes of spatial cross-correlation were employed in the rainfall-runoff simulation. The results demonstrate that stronger correlation leads to higher variation of peak discharge and vice versa. We conclude that the error structure in radar rainfall estimates significantly affects prediction of the runoff peak. Therefore, efforts must consider not only removing the radar rainfall error itself but also weakening the cross-correlation structure of radar errors in order to forecast flood events more accurately. Acknowledgements: This research was supported by a grant from a Strategic Research Project (Development of Flood Warning and Snowfall Estimation Platform Using Hydrological Radars), which
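
    The essential ingredient of such a spatial error model is the generation of cross-correlated radar errors. A minimal sketch, assuming mean-one lognormal multiplicative errors with an exponential spatial correlation (the grid size, error level and correlation lengths are hypothetical, and this is not the authors' exact SEM):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical 20 x 20 radar grid (1 km pixels) with a uniform 10 mm/h rain field
nx = ny = 20
x, y = np.meshgrid(np.arange(nx), np.arange(ny))
pts = np.column_stack([x.ravel(), y.ravel()])
rain = np.full(nx * ny, 10.0)

def error_fields(corr_length_km, n_draws, sigma=0.3):
    """Mean-one multiplicative lognormal radar errors with exponential spatial correlation."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = sigma**2 * np.exp(-d / corr_length_km)            # covariance of the log-errors
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(pts)))  # jitter for numerical stability
    log_err = L @ rng.standard_normal((len(pts), n_draws))
    return np.exp(log_err - 0.5 * sigma**2)               # shape (n_pixels, n_draws)

for ell in (0.1, 2.0, 8.0):   # nearly independent vs strongly cross-correlated errors
    areal_means = (rain[:, None] * error_fields(ell, 200)).mean(axis=0)
    print(f"corr. length {ell:4.1f} km -> std of areal-mean rain {areal_means.std():.2f} mm/h")
```

    As the correlation length grows, the areal-mean rainfall, and hence the simulated peak discharge, varies more from realization to realization, which is the behaviour reported above.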

  20. A Posteriori Study of a DNS Database Describing Supercritical Binary-Species Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2012-01-01

    Currently, the modeling of supercritical-pressure flows through Large Eddy Simulation (LES) uses models derived for atmospheric-pressure flows. Those atmospheric-pressure flows do not exhibit the particularities of high density-gradient magnitude features observed both in experiments and simulations of supercritical-pressure flows in the case of two species mixing. To assess whether the current LES modeling is appropriate and, if found not appropriate, to propose higher-fidelity models, an LES a posteriori study has been conducted for a mixing layer that initially contains different species in the lower and upper streams, and where the initial pressure is larger than the critical pressure of either species. An initially-imposed vorticity perturbation promotes roll-up and a double pairing of four initial span-wise vortices into an ultimate vortex that reaches a transitional state. The LES equations consist of the differential conservation equations coupled with a real-gas equation of state, and the equation set uses transport properties depending on the thermodynamic variables. Unlike all LES models to date, the differential equations contain, additional to the subgrid scale (SGS) fluxes, a new SGS term that is a pressure correction in the momentum equation. This additional term results from filtering of Direct Numerical Simulation (DNS) equations, and represents the gradient of the difference between the filtered pressure and the pressure computed from the filtered flow field. A previous a priori analysis, using a DNS database for the same configuration, found this term to be of leading order in the momentum equation, a fact traced to the existence of high density-gradient magnitude regions that populated the entire flow; in the study, models were proposed for the SGS fluxes as well as this new term. In the present study, the previously proposed constant-coefficient SGS-flux models of the a priori investigation are tested a posteriori in LES, devoid of or including, the

  1. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-10-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations highly question the consistency of

  2. Error estimation procedure for large dimensionality data with small sample sizes

    NASA Astrophysics Data System (ADS)

    Williams, Arnold; Wagner, Gregory

    2009-05-01

    Using multivariate data analysis to estimate the classification error rates and separability between sets of data samples is a useful tool for understanding the characteristics of data sets. By understanding the classifiability and separability of the data, one can better direct the appropriate resources and effort to achieve the desired performance. The following report describes our procedure for estimating the separability of given data sets. The multivariate tools described in this paper include calculating intrinsic dimensionality estimates, Bayes error estimates, and the Friedman-Rafsky test. These analysis techniques are based on previous work used to evaluate data for synthetic aperture radar (SAR) automatic target recognition (ATR), but the current work is unique in the methods used to analyze large dimensionality sets with a small number of samples. The results of this report show that our procedure can quantitatively measure the performance between two data sets in both the measure and feature spaces with the Bayes error estimator procedure and the Friedman-Rafsky test, respectively. Our procedure, which includes the error estimation and the Friedman-Rafsky test, is used to evaluate SAR data but can also serve as an effective way to measure the classifiability of many other multidimensional data sets.
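
    As one concrete piece of such a procedure, the Friedman-Rafsky statistic counts minimum-spanning-tree edges that join points from different samples; a permutation null then yields a separability p-value even for high-dimensional data with few samples. The sketch below is generic (it omits the report's intrinsic-dimensionality and Bayes-error components) and uses simulated Gaussian data:

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def friedman_rafsky_runs(a, b):
    """Number of MST edges joining points from different samples (the FR statistic).
    Few cross-edges relative to the permutation null indicate separable samples."""
    pooled = np.vstack([a, b])
    labels = np.r_[np.zeros(len(a)), np.ones(len(b))]
    mst = minimum_spanning_tree(distance_matrix(pooled, pooled)).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

rng = np.random.default_rng(5)
# High dimensionality (d=50), small samples (n=30 each)
a = rng.normal(0.0, 1.0, size=(30, 50))
b = rng.normal(1.0, 1.0, size=(30, 50))     # shifted class, partially separable

obs = friedman_rafsky_runs(a, b)
# Permutation null: relabel the pooled points at random
pooled = np.vstack([a, b])
null = []
for _ in range(500):
    idx = rng.permutation(len(pooled))
    null.append(friedman_rafsky_runs(pooled[idx[:30]], pooled[idx[30:]]))
p_value = np.mean(np.asarray(null) <= obs)
print(f"cross-sample MST edges: {obs}, permutation p-value: {p_value:.3f}")
```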

  3. Borrowing information across genes and experiments for improved error variance estimation in microarray data analysis.

    PubMed

    Ji, Tieming; Liu, Peng; Nettleton, Dan

    2012-01-01

    Statistical inference for microarray experiments usually involves the estimation of error variance for each gene. Because the sample size available for each gene is often low, the usual unbiased estimator of the error variance can be unreliable. Shrinkage methods, including empirical Bayes approaches that borrow information across genes to produce more stable estimates, have been developed in recent years. Because the same microarray platform is often used for at least several experiments to study similar biological systems, there is an opportunity to improve variance estimation further by borrowing information not only across genes but also across experiments. We propose a lognormal model for error variances that involves random gene effects and random experiment effects. Based on the model, we develop an empirical Bayes estimator of the error variance for each combination of gene and experiment and call this estimator BAGE because information is Borrowed Across Genes and Experiments. A permutation strategy is used to make inference about the differential expression status of each gene. Simulation studies with data generated from different probability models and real microarray data show that our method outperforms existing approaches.
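
    The sketch below is a deliberately crude, moment-based caricature of borrowing strength across both genes and experiments, not the authors' BAGE estimator: bias-corrected log sample variances are shrunk toward an additive gene-plus-experiment fit, with the shrinkage weight set by the known sampling variance of a log chi-square. All data are simulated.

```python
import numpy as np
from scipy.special import digamma, polygamma

rng = np.random.default_rng(6)
G, E, n_rep = 200, 4, 3            # genes, experiments, replicates per gene/experiment
df = n_rep - 1

# Hypothetical true error variances with lognormal gene and experiment effects
gene_eff = rng.normal(0.0, 0.6, G)[:, None]
exp_eff = rng.normal(0.0, 0.3, E)[None, :]
true_var = np.exp(-1.0 + gene_eff + exp_eff)

# Sample variances carry chi-square sampling noise; this is why shrinkage helps at low df
s2 = true_var * rng.chisquare(df, size=(G, E)) / df
z = np.log(s2) - (digamma(df / 2.0) + np.log(2.0 / df))   # bias-corrected log variance

# Two-way additive fit in log space: grand mean + gene effect + experiment effect
grand = z.mean()
gene_hat = z.mean(axis=1, keepdims=True) - grand
exp_hat = z.mean(axis=0, keepdims=True) - grand
fitted = grand + gene_hat + exp_hat

# Shrink residuals toward the additive fit; the weight compares the excess spread
# with the known sampling variance of a log chi-square, trigamma(df/2)
resid = z - fitted
samp_var = polygamma(1, df / 2.0)
excess = max(resid.var() - samp_var, 1e-6)
z_shrunk = fitted + excess / (excess + samp_var) * resid

print("MSE of raw sample variances:", np.mean((s2 - true_var) ** 2))
print("MSE of shrunken variances  :", np.mean((np.exp(z_shrunk) - true_var) ** 2))
```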

  4. Computation of the factorized error covariance of the difference between correlated estimators

    NASA Technical Reports Server (NTRS)

    Wolff, Peter J.; Mohan, Srinivas N.; Stienon, Francis M.; Bierman, Gerald J.

    1990-01-01

    A state estimation problem where some of the measurements may be common to two or more data sets is considered. Two approaches for computing the error covariance of the difference between filtered estimates (for each data set) are discussed. The first algorithm is based on postprocessing of the Kalman gain profiles of two correlated estimators. It uses UD factors of the covariance of the relative error. The second algorithm uses a square root information filter applied to relative error analysis. In the absence of process noise, the square root information filter is computationally more efficient and more flexible than the Kalman gain (covariance update) method. Both the algorithms (covariance and information matrix based) are applied to a Venus orbiter simulation, and their performances are compared.

  5. Alpha's standard error (ASE): an accurate and precise confidence interval estimate.

    PubMed

    Duhachek, Adam; Iacobucci, Dawn

    2004-10-01

    This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation.
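
    For comparison with the analytic ASE derived in the paper (not reproduced here), a nonparametric bootstrap provides a baseline standard error and confidence interval for coefficient alpha; the item scores below are simulated:

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents x k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(7)
n, k = 300, 6
latent = rng.normal(size=(n, 1))
scores = latent + rng.normal(scale=1.0, size=(n, k))     # hypothetical 6-item scale

alpha = cronbach_alpha(scores)

# Nonparametric bootstrap SE and percentile CI (a baseline alternative to the
# closed-form ASE the paper derives under multivariate normality)
boot = np.array([cronbach_alpha(scores[rng.integers(0, n, n)]) for _ in range(2000)])
se = boot.std(ddof=1)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha {alpha:.3f}, bootstrap SE {se:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```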

  6. Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.; Prive, Nikki C.; Gu, Wei

    2014-01-01

    The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC-method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting the availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
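
    A bare-bones sketch of the NMC method itself, estimating a background-error covariance from differences of lagged forecasts valid at the same time, with a single global rescaling factor standing in for the pressure- and latitude-dependent tuning the paper argues is needed (all data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(8)
n_state, n_pairs = 40, 500

# Hypothetical archive of forecast pairs valid at the same time (e.g. 48 h and 24 h
# forecasts); the NMC method uses their differences as a proxy for background error.
truth = rng.normal(size=(n_pairs, n_state))
f24 = truth + rng.multivariate_normal(np.zeros(n_state), 0.5 * np.eye(n_state), n_pairs)
f48 = truth + rng.multivariate_normal(np.zeros(n_state), 1.0 * np.eye(n_state), n_pairs)

diffs = f48 - f24                       # lagged-forecast differences
B_nmc = np.cov(diffs, rowvar=False)     # raw NMC estimate of background-error covariance

# In practice the raw estimate is rescaled (tuned); the paper shows that the required
# rescaling of variances and correlation lengths varies with pressure and latitude.
alpha = 0.5                             # illustrative, globally constant scaling only
B_used = alpha * B_nmc
print("mean NMC variance:", np.diag(B_nmc).mean(), " after scaling:", np.diag(B_used).mean())
```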

  7. Modelling of turbulent lifted jet flames using flamelets: a priori assessment and a posteriori validation

    NASA Astrophysics Data System (ADS)

    Ruan, Shaohong; Swaminathan, Nedunchezhian; Darbyshire, Oliver

    2014-03-01

    This study focuses on the modelling of turbulent lifted jet flames using flamelets and a presumed Probability Density Function (PDF) approach with interest in both flame lift-off height and flame brush structure. First, flamelet models used to capture contributions from premixed and non-premixed modes of the partially premixed combustion in the lifted jet flame are assessed using Direct Numerical Simulation (DNS) data for a turbulent lifted hydrogen jet flame. The joint PDFs of mixture fraction Z and progress variable c, including their statistical correlation, are obtained using a copula method, which is also validated using the DNS data. The statistically independent PDFs are found to be generally inadequate to represent the joint PDFs from the DNS data. The effects of Z-c correlation and the contribution from the non-premixed combustion mode on the flame lift-off height are studied systematically by including one effect at a time in the simulations used for a posteriori validation. A simple model including the effects of chemical kinetics and scalar dissipation rate is suggested and used for non-premixed combustion contributions. The results clearly show that both Z-c correlation and non-premixed combustion effects are required in the premixed flamelets approach to get good agreement with the measured flame lift-off heights as a function of jet velocity. The flame brush structure reported in earlier experimental studies is also captured reasonably well for various axial positions. It seems that flame stabilisation is influenced by both premixed and non-premixed combustion modes, and their mutual influences.

  8. Biases in atmospheric CO2 estimates from correlated meteorology modeling errors

    NASA Astrophysics Data System (ADS)

    Miller, S. M.; Hayek, M. N.; Andrews, A. E.; Fung, I.; Liu, J.

    2015-03-01

    Estimates of CO2 fluxes that are based on atmospheric measurements rely upon a meteorology model to simulate atmospheric transport. These models provide a quantitative link between the surface fluxes and CO2 measurements taken downwind. Errors in the meteorology can therefore cause errors in the estimated CO2 fluxes. Meteorology errors that correlate or covary across time and/or space are particularly worrisome; they can cause biases in modeled atmospheric CO2 that are easily confused with the CO2 signal from surface fluxes, and they are difficult to characterize. In this paper, we leverage an ensemble of global meteorology model outputs combined with a data assimilation system to estimate these biases in modeled atmospheric CO2. In one case study, we estimate the magnitude of month-long CO2 biases relative to CO2 boundary layer enhancements and quantify how that answer changes if we either include or remove error correlations or covariances. In a second case study, we investigate which meteorological conditions are associated with these CO2 biases. In the first case study, we estimate uncertainties of 0.5-7 ppm in monthly-averaged CO2 concentrations, depending upon location (95% confidence interval). These uncertainties correspond to 13-150% of the mean afternoon CO2 boundary layer enhancement at individual observation sites. When we remove error covariances, however, this range drops to 2-22%. Top-down studies that ignore these covariances could therefore underestimate the uncertainties and/or propagate transport errors into the flux estimate. In the second case study, we find that these month-long errors in atmospheric transport are anti-correlated with temperature and planetary boundary layer (PBL) height over terrestrial regions. In marine environments, by contrast, these errors are more strongly associated with weak zonal winds. Many errors, however, are not correlated with a single meteorological parameter, suggesting that a single meteorological proxy is

  9. Modeling distribution of temporal sampling errors in area-time-averaged rainfall estimates

    NASA Astrophysics Data System (ADS)

    Gebremichael, Mekonnen; Krajewski, Witold F.

    2005-02-01

    In this paper, the authors examine models of probability distributions for sampling error in rainfall estimates obtained from discrete satellite sampling in time based on 5 years of 15-min radar rainfall data in the central United States. The sampling errors considered include all combinations of 3, 6, 12, or 24 h sampling of rainfall over 32, 64, 128, 256, or 512 km square domains, and 1, 5, or 30 day rainfall accumulations. Results of this study reveal that the sampling error distribution depends strongly on the rain rate; hence the conditional distribution of sampling error is more informative than its marginal distribution. The distribution of sampling error conditional on rain rate is strongly affected by the sampling interval. At sampling intervals of 3 or 6 h, the logistic distribution appears to fit the conditional sampling error quite well, while the shifted-gamma, shifted-Weibull, shifted-lognormal, and normal distributions fit poorly. At sampling intervals of 12 or 24 h, the shifted-gamma, shifted-Weibull, or shifted-lognormal distribution fits the conditional sampling error better than the logistic or normal distribution. These results are vital to understanding the accuracy of satellite rainfall products, for performing validation assessment of these products, and for analyzing the effects of rainfall-related errors in hydrological models.

  10. Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements

    NASA Astrophysics Data System (ADS)

    Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.

    2012-12-01

    This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.

  11. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error.

    PubMed

    Stenroos, Matti; Hauk, Olaf

    2013-11-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG+EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG+EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG+EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only.
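
    The estimator under study is the L2 minimum-norm spatial filter. The sketch below uses a random lead-field matrix rather than a BEM head model, so it illustrates only the structure of the estimator, not the conductivity sensitivity analysis:

```python
import numpy as np

rng = np.random.default_rng(9)
n_sensors, n_sources = 64, 500

# Hypothetical lead-field matrix (sensors x sources); in practice it comes from a
# BEM forward model and depends on the assumed skull conductivity.
L = rng.normal(size=(n_sensors, n_sources))

# L2 minimum-norm spatial filter: W = L^T (L L^T + lambda^2 * C_noise)^(-1)
lam2 = 0.1 * np.trace(L @ L.T) / n_sensors     # simple regularisation heuristic
C_noise = np.eye(n_sensors)
W = L.T @ np.linalg.inv(L @ L.T + lam2 * C_noise)

# Apply the filter to simulated data from a single active source
j_true = np.zeros(n_sources)
j_true[123] = 1.0
data = L @ j_true + 0.05 * rng.normal(size=n_sensors)
j_hat = W @ data
print("peak of MN estimate at source index:", int(np.argmax(np.abs(j_hat))))
```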

  12. Estimation of ozone with total ozone portable spectroradiometer instruments. I. Theoretical model and error analysis

    NASA Astrophysics Data System (ADS)

    Flynn, Lawrence E.; Labow, Gordon J.; Beach, Robert A.; Rawlins, Michael A.; Flittner, David E.

    1996-10-01

    Inexpensive devices to measure solar UV irradiance are available to monitor atmospheric ozone, for example, total ozone portable spectroradiometers (TOPS instruments). A procedure to convert these measurements into ozone estimates is examined. For well-characterized filters with 7-nm FWHM bandpasses, the method provides ozone values (from 304- and 310-nm channels) with less than 0.4% error attributable to inversion of the theoretical model. Analysis of sensitivity to model assumptions and parameters yields estimates of 3% bias in total ozone results, with dependence on total ozone and path length. Unmodeled effects of atmospheric constituents and instrument components can result in additional 2% errors.

  13. Error estimate of Taylor's frozen-in flow hypothesis in the spectral domain

    NASA Astrophysics Data System (ADS)

    Narita, Yasuhito

    2017-03-01

    The quality of Taylor's frozen-in flow hypothesis can be measured by estimating the amount of the fluctuation energy mapped from the streamwise wavenumbers onto the Doppler-shifted frequencies in the spectral domain. For a random sweeping case with a Gaussian variation of the large-scale flow, the mapping quality is expressed by the error function which depends on the mean flow speed, the sweeping velocity, the frequency bin, and the frequency of interest. Both hydrodynamic and magnetohydrodynamic treatments are presented on the error estimate of Taylor's hypothesis with examples from the solar wind measurements.

  14. Estimation of bias errors in measured airplane responses using maximum likelihood method

    NASA Technical Reports Server (NTRS)

    Klein, Vladislav; Morgan, Dan R.

    1987-01-01

    A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be without random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.

  15. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities in an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

  16. Estimation of Species Identification Error: Implications for Raptor Migration Counts and Trend Estimation

    Treesearch

    J.M. Hull; A.M. Fish; J.J. Keane; S.R. Mori; B.J. Sacks; A.C. Hull

    2010-01-01

    One of the primary assumptions associated with many wildlife and population trend studies is that target species are correctly identified. This assumption may not always be valid, particularly for species similar in appearance to co-occurring species. We examined size overlap and identification error rates among Cooper's (Accipiter cooperii...

  17. Assumption-free estimation of the genetic contribution to refractive error across childhood

    PubMed Central

    St Pourcain, Beate; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Williams, Cathy

    2015-01-01

    Purpose Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75–90%, families 15–70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Methods Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). Results The variance in refractive error explained by the SNPs (“SNP heritability”) was stable over childhood: Across age 7–15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8–9 years old. Conclusions Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in

  18. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Certainty in Heisenberg's uncertainty principle: Revisiting definitions for estimation errors and disturbance

    NASA Astrophysics Data System (ADS)

    Dressel, Justin; Nori, Franco

    2014-02-01

    We revisit the definitions of error and disturbance recently used in error-disturbance inequalities derived by Ozawa and others by expressing them in the reduced system space. The interpretation of the definitions as mean-squared deviations relies on an implicit assumption that is generally incompatible with the Bell-Kochen-Specker-Spekkens contextuality theorems, and which results in averaging the deviations over a non-positive-semidefinite joint quasiprobability distribution. For unbiased measurements, the error admits a concrete interpretation as the dispersion in the estimation of the mean induced by the measurement ambiguity. We demonstrate how to directly measure not only this dispersion but also every observable moment with the same experimental data, and thus demonstrate that perfect distributional estimations can have nonzero error according to this measure. We conclude that the inequalities using these definitions do not capture the spirit of Heisenberg's eponymous inequality, but do indicate a qualitatively different relationship between dispersion and disturbance that is appropriate for ensembles being probed by all outcomes of an apparatus. To reconnect with the discussion of Heisenberg, we suggest alternative definitions of error and disturbance that are intrinsic to a single apparatus outcome. These definitions naturally involve the retrodictive and interdictive states for that outcome, and produce complementarity and error-disturbance inequalities that have the same form as the traditional Heisenberg relation.

  20. Effect of unrepresented model errors on estimated soil hydraulic material properties

    NASA Astrophysics Data System (ADS)

    Jaumann, Stefan; Roth, Kurt

    2017-09-01

    Unrepresented model errors influence the estimation of effective soil hydraulic material properties. As the required model complexity for a consistent description of the measurement data is application dependent and unknown a priori, we implemented a structural error analysis based on the inversion of increasingly complex models. We show that the method can indicate unrepresented model errors and quantify their effects on the resulting material properties. To this end, a complicated 2-D subsurface architecture (ASSESS) was forced with a fluctuating groundwater table while time domain reflectometry (TDR) and hydraulic potential measurement devices monitored the hydraulic state. In this work, we analyze the quantitative effect of unrepresented (i) sensor position uncertainty, (ii) small-scale heterogeneity, and (iii) 2-D flow phenomena on estimated soil hydraulic material properties with a 1-D and a 2-D study. The results of these studies demonstrate three main points: (i) the fewer sensors are available per material, the larger is the effect of unrepresented model errors on the resulting material properties. (ii) The 1-D study yields biased parameters due to unrepresented lateral flow. (iii) Representing and estimating sensor positions as well as small-scale heterogeneity decreased the mean absolute error of the volumetric water content data by more than a factor of 2, to 0.004.

  1. Practical error estimation in zoom-in and truncated tomography reconstructions

    SciTech Connect

    Xiao Xianghui; De Carlo, Francesco; Stock, Stuart

    2007-06-15

    Synchrotron-based microtomography provides high resolution, but the resolution in large samples is often limited by the detector field of view and the pixel size. For some samples, only a small region of interest is relevant and local tomography is a powerful approach for retaining high resolution. Two methods are truncated tomography and zoom-in tomography. In this article we use existing theoretical results to estimate the error present in truncated and zoom-in tomographic reconstructions. These errors agree with the errors calculated from exact tomographic reconstructions. We argue in a heuristic manner why zoom-in tomography is superior to the truncated tomography in terms of the reconstruction error. However, the theoretical formula is not usable in practice because it requires the complete high-resolution reconstruction to be known. To solve this problem we proposed a practical method for estimating the error in zoom-in and truncated tomographies. The results using this estimation method are in very good agreement with our experimental results.

  2. Practical error estimation in zoom-in and truncated tomography reconstructions.

    SciTech Connect

    Xiao, X.; De Carlo, F.; Stock, S.; X-Ray Science Division

    2007-06-01

    Synchrotron-based microtomography provides high resolution, but the resolution in large samples is often limited by the detector field of view and the pixel size. For some samples, only a small region of interest is relevant and local tomography is a powerful approach for retaining high resolution. Two methods are truncated tomography and zoom-in tomography. In this article we use existing theoretical results to estimate the error present in truncated and zoom-in tomographic reconstructions. These errors agree with the errors calculated from exact tomographic reconstructions. We argue in a heuristic manner why zoom-in tomography is superior to the truncated tomography in terms of the reconstruction error. However, the theoretical formula is not usable in practice because it requires the complete high-resolution reconstruction to be known. To solve this problem we proposed a practical method for estimating the error in zoom-in and truncated tomographies. The results using this estimation method are in very good agreement with our experimental results.

  3. On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

    NASA Astrophysics Data System (ADS)

    Huffman, G. J.

    2013-12-01

    Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate

  4. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

    2014-10-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2 σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the over-all uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2 σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the

  5. Comparison of Errors of 35 Weight Estimation Formulae in a Standard Collective

    PubMed Central

    Hoopmann, M.; Kagan, K. O.; Sauter, A.; Abele, H.; Wagner, P.

    2016-01-01

    Issue: The estimation of foetal weight is an integral part of prenatal care and obstetric routine. In spite of its known susceptibility to errors in cases of underweight or overweight babies, important obstetric decisions depend on it. In the present contribution we have examined the accuracy and error distribution of 35 weight estimation formulae within the normal weight range of 2500–4000 g. The aim of the study was to identify the weight estimation formulae with the best possible correspondence to the requirements of clinical routine. Materials and Methods: 35 clinically established weight estimation formulae were analysed in 3416 foetuses with weights between 2500 and 4000 g. For this we determined and compared the mean percentage error (MPE), the mean absolute percentage error (MAPE), and the proportions of estimates within the error ranges of 5, 10, 20 and 30 %. In addition, separate regression lines were calculated for the relationship between estimated and actual birth weights for the weight range 2500–4000 g. The formulae were thus examined for possible inhomogeneities. Results: The lowest MPE were achieved with the Hadlock III and V formulae (0.8 %, STW 9.2 % or, respectively, −0.8 %, STW 10.0 %). The lowest absolute error (6.6 %) as well as the most favourable frequency distribution in cases below 5 % and 10 % error (43.9 and 77.5) were seen for the Halaska formula. In graphic representations of the regression lines, 16 formulae revealed a weight overestimation in the lower weight range and an underestimation in the upper range. 14 formulae gave underestimations and merely 5 gave overestimations over the entire tested weight range. Conclusion: The majority of the tested formulae gave underestimations of the actual birth weight over the entire weight range or at least in the upper weight range. This result supports the current strategy of a two-stage weight estimation in which a formula is first chosen after a pre-estimation of
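
    The accuracy measures used in this comparison (MPE, MAPE and the share of estimates falling inside a given percentage-error band) are straightforward to compute. A minimal sketch with invented example weights, not the study's data:

```python
import numpy as np

def weight_formula_errors(estimated, actual):
    """Percentage-error summaries for a fetal weight estimation formula."""
    pct_err = 100.0 * (estimated - actual) / actual
    mpe = pct_err.mean()                              # mean percentage error (signed bias)
    mape = np.abs(pct_err).mean()                     # mean absolute percentage error
    within = {b: float(np.mean(np.abs(pct_err) <= b)) for b in (5, 10, 20, 30)}
    return mpe, mape, within

actual = np.array([2600.0, 3100.0, 3450.0, 3900.0])     # illustrative birth weights [g]
estimated = np.array([2720.0, 3010.0, 3600.0, 3750.0])  # illustrative formula output [g]
print(weight_formula_errors(estimated, actual))
```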

  6. Estimation of smoothing error in SBUV profile and total ozone retrieval

    NASA Astrophysics Data System (ADS)

    Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

    2011-12-01

    Data from the Nimbus-4, Nimbus-7 Solar Backscatter Ultra Violet (SBUV) and seven of the NOAA series of SBUV/2 instruments spanning 41 years are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profiles and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of an ensemble of ozone profiles measured with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which, in the general case of single-profile retrieval, might be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere and might be as large as 15-20% and rapidly decrease with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5% and between 10 and 1 hPa the smoothing errors are on the order of 1%. We
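
    The smoothing-error calculation outlined above follows the usual retrieval-theory recipe: given an averaging-kernel matrix A and an ensemble covariance S_a describing the fine vertical structure of ozone variability, the smoothing-error covariance is (A - I) S_a (A - I)^T. The sketch below uses placeholder matrices, not the actual SBUV(/2) kernels or the merged MLS/sonde covariance:

```python
import numpy as np

def smoothing_error_covariance(A, S_a):
    """Smoothing-error covariance S_s = (A - I) S_a (A - I)^T."""
    I = np.eye(A.shape[0])
    return (A - I) @ S_a @ (A - I).T

# Placeholder 21-layer example: a banded boxcar averaging kernel and an
# ensemble covariance with larger variability in the lowest layers.
n = 21
A = np.triu(np.tril(np.full((n, n), 1.0 / 5), 2), -2)   # +/- 2 layer boxcar kernel
S_a = np.diag(np.linspace(0.20, 0.02, n)) ** 2
S_s = smoothing_error_covariance(A, S_a)
print("1-sigma smoothing error per layer:", np.sqrt(np.diag(S_s)))
```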

  7. Estimation of Smoothing Error in SBUV Profile and Total Ozone Retrieval

    NASA Technical Reports Server (NTRS)

    Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

    2011-01-01

    Data from the Nimbus-4, Nimbus-7 Solar Backscatter Ultra Violet (SBUV) and seven of the NOAA series of SBUV/2 instruments spanning 41 years are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profiles and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of an ensemble of ozone profiles measured with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which, in the general case of single-profile retrieval, might be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere and might be as large as 15-20% and rapidly decrease with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5% and between 10 and 1 hPa the smoothing errors are on the order of 1%. We

  8. Estimation of errors in diffraction data measured by CCD area detectors

    PubMed Central

    Waterman, David; Evans, Gwyndaf

    2010-01-01

    Current methods for diffraction-spot integration from CCD area detectors typically underestimate the errors in the measured intensities. In an attempt to understand fully and identify correctly the sources of all contributions to these errors, a simulation of a CCD-based area-detector module has been produced to address the problem of correct handling of data from such detectors. Using this simulation, it has been shown how, and by how much, measurement errors are underestimated. A model of the detector statistics is presented and an adapted summation integration routine that takes this into account is shown to result in more realistic error estimates. In addition, the effect of correlations between pixels on two-dimensional profile fitting is demonstrated and the problems surrounding improvements to profile-fitting algorithms are discussed. In practice, this requires knowledge of the expected correlation between pixels in the image. PMID:27006649

  9. Quantification of residual dose estimation error on log file-based patient dose calculation.

    PubMed

    Katsuta, Yoshiyuki; Kadoya, Noriyuki; Fujita, Yukio; Shimizu, Eiji; Matsunaga, Kenichi; Matsushita, Haruo; Majima, Kazuhiro; Jingu, Keiichi

    2016-05-01

    The log file-based patient dose estimation includes a residual dose estimation error caused by leaf miscalibration, which cannot be reflected in the estimated dose. The purpose of this study is to determine this residual dose estimation error. Modified log files for seven head-and-neck and prostate volumetric modulated arc therapy (VMAT) plans simulating leaf miscalibration were generated by shifting both leaf banks (systematic leaf gap errors: ±2.0, ±1.0, and ±0.5mm in opposite directions and systematic leaf shifts: ±1.0mm in the same direction) using MATLAB-based (MathWorks, Natick, MA) in-house software. The generated modified and non-modified log files were imported back into the treatment planning system and recalculated. Subsequently, the generalized equivalent uniform dose (gEUD) was quantified for the planning target volume (PTV) and organs at risk. For MLC leaves calibrated within ±0.5mm, the residual dose estimation errors, obtained from the slope of the linear regression of gEUD changes between non-modified and modified log file doses per leaf gap error, are 1.32±0.27% and 0.82±0.17Gy for the PTV and spinal cord, respectively, in head-and-neck plans, and 1.22±0.36%, 0.95±0.14Gy, and 0.45±0.08Gy for the PTV, rectum, and bladder, respectively, in prostate plans. In this work, we determine the residual dose estimation errors for VMAT delivery using the log file-based patient dose calculation according to the MLC calibration accuracy. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  10. Evaluating measurement error in estimates of worker exposure assessed in parallel by personal and biological monitoring.

    PubMed

    Symanski, Elaine; Greeson, Nicole M H; Chan, Wenyaw

    2007-02-01

    While studies indicate that the attenuating effects of imperfectly measured exposure can be substantial, they have not had the requisite data to compare methods of assessing exposure for the same individuals monitored over common time periods. We examined measurement error in multiple exposure measures collected in parallel on 32 groups of workers. Random-effects models were applied under both compound symmetric and exponential correlation structures. Estimates of the within- and between-worker variances were used to contrast the attenuation bias in an exposure-response relationship that would be expected using an individual-based exposure assessment for different exposure measures on the basis of the intra-class correlation coefficient (ICC). ICC estimates ranged widely, indicative of a great deal of measurement error in some exposure measures while others contained very little. There was generally less attenuation in the biomarker data as compared to measurements obtained by personal sampling and, among biomarkers, for those with longer half-lives. The interval ICC estimates were often wide, suggesting a fair amount of imprecision in the point estimates. Ignoring serial correlation tended to overestimate the ICC values. Although personal sampling results were typically characterized by more intra-individual variability than inter-individual variability when compared to biological measurements, both types of data provided examples of exposure measures fraught with error. Our results also indicated substantial imprecision in the estimates of exposure measurement error, suggesting that greater emphasis needs to be given to studies that collect sufficient data to better characterize the attenuating effects of an error-prone exposure measure.
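
    The attenuation argument in this record rests on the intraclass correlation coefficient: with between-worker variance s2_B and within-worker variance s2_W, the ICC for the mean of n repeated measurements is s2_B / (s2_B + s2_W / n), and the slope of an individual-based exposure-response analysis is expected to be attenuated roughly by that factor. A minimal sketch with hypothetical variance components, not the study's estimates:

```python
def icc(var_between, var_within, n_repeats=1):
    """Intraclass correlation of a worker's mean exposure over n_repeats samples.

    This is also the approximate factor by which an exposure-response slope is
    attenuated when that mean is used in place of the true long-term exposure.
    """
    return var_between / (var_between + var_within / n_repeats)

# Hypothetical variance components on the log scale:
scenarios = [("personal air sampling", 0.4, 1.6), ("long half-life biomarker", 0.8, 0.4)]
for label, vb, vw in scenarios:
    print(f"{label}: ICC(1 sample) = {icc(vb, vw):.2f}, ICC(4 samples) = {icc(vb, vw, 4):.2f}")
```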

  11. Error estimates of triangular finite elements under a weak angle condition

    NASA Astrophysics Data System (ADS)

    Mao, Shipeng; Shi, Zhongci

    2009-08-01

    In this note, by analyzing the interpolation operator of Girault and Raviart given in [V. Girault, P.A. Raviart, Finite element methods for Navier-Stokes equations, Theory and algorithms, in: Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 1986] over triangular meshes, we prove optimal interpolation error estimates for Lagrange triangular finite elements of arbitrary order under the maximal angle condition in a unified and simple way. The key estimate is only an application of the Bramble-Hilbert lemma.

  12. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    PubMed

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of
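
    A study of this kind amounts to deliberately corrupting a "gold standard" dataset and re-running the analysis. A minimal sketch of random error injection; the field names, categories, and error rate are invented for illustration and do not reflect the Butajira data:

```python
import numpy as np
import pandas as pd

def inject_errors(df, error_rate, rng):
    """Return a copy of df with random errors in a fraction of records."""
    out = df.copy()
    n = len(out)
    flip = rng.random(n) < error_rate                 # categorical field: flip sex
    out.loc[flip, "sex"] = np.where(out.loc[flip, "sex"] == "M", "F", "M")
    jitter = rng.random(n) < error_rate               # numeric field: perturb age
    out.loc[jitter, "age"] += rng.integers(-5, 6, jitter.sum())
    return out

rng = np.random.default_rng(1)
pop = pd.DataFrame({"sex": rng.choice(["M", "F"], 1000), "age": rng.integers(0, 90, 1000)})
noisy = inject_errors(pop, error_rate=0.05, rng=rng)
print("sex values altered:", int((noisy["sex"] != pop["sex"]).sum()))
```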

  13. A Refined Algorithm On The Estimation Of Residual Motion Errors In Airborne SAR Images

    NASA Astrophysics Data System (ADS)

    Zhong, Xuelian; Xiang, Maosheng; Yue, Huanyin; Guo, Huadong

    2010-10-01

    Due to the lack of accuracy in the navigation system, residual motion errors (RMEs) frequently appear in airborne SAR images. For very high resolution SAR imaging and repeat-pass SAR interferometry, the residual motion errors must be estimated and compensated. We previously proposed an algorithm to estimate the residual motion errors for an individual SAR image. It exploits point-like targets distributed along the azimuth direction, and not only corrects the phase, but also improves the azimuth focusing. However, the required point targets are selected by hand, which is time- and labor-consuming. In addition, the algorithm is sensitive to noise. In this paper, a refined algorithm is proposed to address these two shortcomings. With real X-band airborne SAR data, the feasibility and accuracy of the refined algorithm are demonstrated.

  14. Geodesy by radio interferometry - Effects of atmospheric modeling errors on estimates of baseline length

    NASA Technical Reports Server (NTRS)

    Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.

    1985-01-01

    Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm, at all elevations down to 5 deg elevation, and introduces errors into the estimates of baseline length of less than about 1 cm, for the multistation intercontinental experiment analyzed here.

  15. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    NASA Astrophysics Data System (ADS)

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-06-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors.
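
    The time-averaged mean square displacement referenced here is TAMSD(D) = (1/(N-D)) sum_t [x(t+D) - x(t)]^2, with the anomalous exponent alpha read off a log-log fit. A minimal sketch of the TAMSD estimator on a simulated noisy trajectory; the FIMA estimator itself is not reproduced here:

```python
import numpy as np

def tamsd(x, max_lag):
    """Time-averaged MSD of a 1-D trajectory for lags 1..max_lag."""
    lags = np.arange(1, max_lag + 1)
    return lags, np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

def tamsd_exponent(x, max_lag=50):
    """Anomalous diffusion exponent alpha from a log-log fit of the TAMSD."""
    lags, msd = tamsd(x, max_lag)
    slope, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return slope

# Toy data: Brownian-like trajectory (alpha = 1) plus Gaussian measurement noise.
rng = np.random.default_rng(2)
traj = np.cumsum(rng.standard_normal(5000)) + 0.5 * rng.standard_normal(5000)
print("estimated alpha:", round(tamsd_exponent(traj), 2))   # noise biases this downward
```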

  16. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach.

    PubMed

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-06-11

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors.

  17. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    PubMed Central

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-01-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707

  18. Nuclear power plant fault-diagnosis using neural networks with error estimation

    SciTech Connect

    Kim, K.; Bartlett, E.B.

    1994-12-31

    The assurance of the diagnosis obtained from a nuclear power plant (NPP) fault-diagnostic advisor based on artificial neural networks (ANNs) is essential for the practical application of the advisor to fault detection and identification. The objectives of this study are to develop an error estimation technique (EET) for diagnosis validation and apply it to the NPP fault-diagnostic advisor. Diagnosis validation is realized by estimating error bounds on the advisor's diagnoses. The 22 transients obtained from the Duane Arnold Energy Center (DAEC) training simulator are used for this research. The results show that the NPP fault-diagnostic advisor is effective at producing proper diagnoses on which errors are assessed for validation and verification purposes.

  19. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, Elena

    2016-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often 2 years, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises the overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically trained network, in a rigorous cross-validation experiment referred to a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accurateness is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
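
    The key ingredient is the loss function: instead of the symmetric squared error, overpredictions of the warning threshold are weighted more heavily than underpredictions, as in the sketch below. The weighting factor is an illustrative choice, not the calibrated value used in the paper:

```python
import numpy as np

def asymmetric_squared_error(y_true, y_pred, over_weight=3.0):
    """Mean squared error that penalises overprediction over_weight times more.

    Overpredicting a warning threshold risks missed alarms, so positive
    residuals (y_pred > y_true) receive a larger weight than negative ones.
    """
    resid = y_pred - y_true
    weights = np.where(resid > 0.0, over_weight, 1.0)
    return float(np.mean(weights * resid ** 2))

y_true = np.array([120.0, 85.0, 240.0])                     # "true" 2-year peaks [m3/s]
print(asymmetric_squared_error(y_true, y_true * 1.10))      # 10% overestimate: penalised heavily
print(asymmetric_squared_error(y_true, y_true * 0.90))      # 10% underestimate: lighter penalty
```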

  20. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, E.

    2015-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often the 2-year peak, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally-derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically-trained network, in a rigorous cross-validation experiment referred to a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accurateness is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.

  1. Estimate error of frequency-dependent Q introduced by linear regression and its nonlinear implementation

    NASA Astrophysics Data System (ADS)

    Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing

    2016-02-01

    The spectral ratio method (SRM) is widely used to estimate quality factor Q via the linear regression of seismic attenuation under the assumption of a constant Q. However, an estimation error is introduced when this assumption is violated. For a frequency-dependent Q described by a power-law function, we derived the analytical expression of the estimation error as a function of the power-law exponent γ and the ratio of the bandwidth to the central frequency, σ. Based on the theoretical analysis, we found that the estimation errors are mainly dominated by the exponent γ and less affected by the ratio σ. This phenomenon implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we proposed a two-parameter regression method to estimate the frequency-dependent Q from the nonlinear seismic attenuation. The proposed method was tested using the direct waves acquired by a near-surface cross-hole survey, and its reliability was evaluated in comparison with the result of SRM.
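
    Under a constant-Q assumption the spectral ratio method fits ln[A2(f)/A1(f)] = -π f Δt / Q + const, so Q follows from the slope of a straight-line fit over the usable band. A minimal sketch with synthetic amplitude spectra; the two-parameter regression for frequency-dependent Q proposed in this record is not reproduced here:

```python
import numpy as np

def spectral_ratio_q(freqs, A1, A2, delta_t):
    """Constant-Q estimate from the slope of ln(A2/A1) versus frequency."""
    slope, _ = np.polyfit(freqs, np.log(A2 / A1), 1)   # slope = -pi * delta_t / Q
    return -np.pi * delta_t / slope

# Synthetic test: Q = 80, delta_t = 0.1 s, mild multiplicative noise on the ratio.
rng = np.random.default_rng(3)
freqs = np.linspace(10.0, 80.0, 50)
Q_true, dt = 80.0, 0.1
A1 = np.ones_like(freqs)
A2 = np.exp(-np.pi * freqs * dt / Q_true) * (1.0 + 0.01 * rng.standard_normal(freqs.size))
print("recovered Q:", round(spectral_ratio_q(freqs, A1, A2, dt), 1))
```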

  2. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    NASA Astrophysics Data System (ADS)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
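
    The Green-Kubo relation expresses a transport coefficient as the time integral of an equilibrium flux autocorrelation function; the contribution of this work is deciding adaptively where to truncate that integral and how to bound the resulting error. The sketch below shows only the basic single-replica estimate with a fixed cutoff and a placeholder prefactor, applied to a synthetic AR(1) "flux" whose analytic answer is known:

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Unnormalised autocorrelation <x(0) x(t)> for lags 0..max_lag-1."""
    x = x - x.mean()
    return np.array([np.mean(x[:len(x) - lag] * x[lag:]) for lag in range(max_lag)])

def green_kubo_coefficient(flux, dt, max_lag, prefactor=1.0):
    """Transport coefficient ~ prefactor * integral of <J(0)J(t)> up to a cutoff."""
    acf = autocorrelation(flux, max_lag)
    integral = dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))   # trapezoidal rule
    return prefactor * integral

# Synthetic flux: AR(1) process with unit variance and correlation time tau = 0.5,
# so the integral of its autocorrelation is approximately var * tau = 0.5.
rng = np.random.default_rng(4)
n, dt, tau = 100_000, 0.01, 0.5
a = np.exp(-dt / tau)
flux = np.empty(n)
flux[0] = rng.standard_normal()
for i in range(1, n):
    flux[i] = a * flux[i - 1] + np.sqrt(1.0 - a ** 2) * rng.standard_normal()
print("estimated coefficient:", round(green_kubo_coefficient(flux, dt, max_lag=500), 3))
```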

  3. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis.

    PubMed

    Jones, Reese E; Mandadapu, Kranthi K

    2012-04-21

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.

  4. Error estimates for approximate dynamic systems. [linear and nonlinear control systems of different dimensions

    NASA Technical Reports Server (NTRS)

    Gunderson, R. W.; George, J. H.

    1974-01-01

    Two approaches are investigated for obtaining estimates on the error between approximate and exact solutions of dynamic systems. The first method is primarily useful if the system is nonlinear and of low dimension. The second requires construction of a system of v-functions but is useful for higher dimensional systems, either linear or nonlinear.

  5. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  6. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
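
    A nonparametric bootstrap standard error of this kind is obtained by resampling examinees with replacement, recomputing the statistic on each resample, and taking the standard deviation across replications. A minimal generic sketch; the statistic here is a simple mean rather than an equipercentile equating function:

```python
import numpy as np

def bootstrap_se(data, statistic, n_boot=2000, seed=0):
    """Nonparametric bootstrap standard error of `statistic` over a 1-D sample."""
    rng = np.random.default_rng(seed)
    n = len(data)
    reps = np.array([statistic(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    return reps.std(ddof=1)

scores = np.random.default_rng(5).integers(0, 41, size=1000).astype(float)  # toy test scores
print("bootstrap SE of the mean:", round(bootstrap_se(scores, np.mean), 3))
print("analytic SE of the mean: ", round(scores.std(ddof=1) / np.sqrt(len(scores)), 3))
```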

  7. Error estimation for reconstruction of neuronal spike firing from fast calcium imaging.

    PubMed

    Liu, Xiuli; Lv, Xiaohua; Quan, Tingwei; Zeng, Shaoqun

    2015-02-01

    Calcium imaging is becoming an increasingly popular technology to indirectly measure activity patterns in local neuronal networks. Calcium transients reflect neuronal spike patterns, allowing spike trains to be reconstructed from calcium traces. The key to judging spike train authenticity is error estimation. However, due to the lack of an appropriate mathematical model to adequately describe this spike-calcium relationship, little attention has been paid to quantifying error ranges of the reconstructed spike results. By turning attention to the data characteristics close to the reconstruction rather than to a complex mathematical model, we have provided an error estimation method for the reconstructed neuronal spiking from calcium imaging. Real false-negative and false-positive rates of 10 experimental Ca(2+) traces were within the estimated error ranges and confirmed that this evaluation method was effective. Estimation performance of the reconstruction of spikes from calcium transients within a neuronal population demonstrated a reasonable evaluation of the reconstructed spikes without having real electrical signals. These results suggest that our method might be valuable for the quantification of research based on reconstructed neuronal activity, such as to affirm communication between different neurons.

  8. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
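
    For orientation, the "sandwich" covariance for an ordinary regression is V = (X'X)^-1 X' diag(e^2) X (X'X)^-1, a bread of inverse information around a meat built from squared residuals; the extension described in this record replaces the meat with one that accounts for serial dependence. The sketch below shows only the basic heteroskedasticity-robust (HC0) version, not the time-series SEM estimator developed in the article:

```python
import numpy as np

def ols_sandwich_se(X, y):
    """OLS coefficients and HC0 sandwich standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)             # "bread"
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    meat = X.T @ (X * resid[:, None] ** 2)       # X' diag(e^2) X
    cov = XtX_inv @ meat @ XtX_inv
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(6)
x = rng.uniform(0.0, 10.0, 500)
X = np.column_stack([np.ones_like(x), x])
y = 1.0 + 0.5 * x + 0.3 * x * rng.standard_normal(500)   # heteroskedastic noise
print(ols_sandwich_se(X, y))
```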

  9. Error estimates for approximate dynamic systems. [linear and nonlinear control systems of different dimensions

    NASA Technical Reports Server (NTRS)

    Gunderson, R. W.; George, J. H.

    1974-01-01

    Two approaches are investigated for obtaining estimates on the error between approximate and exact solutions of dynamic systems. The first method is primarily useful if the system is nonlinear and of low dimension. The second requires construction of a system of v-functions but is useful for higher dimensional systems, either linear or nonlinear.

  10. Estimation of chromatic errors from broadband images for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  11. Error Estimation Techniques to Refine Overlapping Aerial Image Mosaic Processes via Detected Parameters

    ERIC Educational Resources Information Center

    Bond, William Glenn

    2012-01-01

    In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…

  12. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  13. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  14. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  15. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  16. Discretization error estimation and exact solution generation using the method of nearby problems.

    SciTech Connect

    Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.

    2011-10-01

    The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
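
    As a point of reference for the comparison made in the abstract, here is a minimal sketch of the Richardson-extrapolation discretization error estimate, assuming a known formal order of accuracy p and grid refinement factor r (the numbers in the example are purely illustrative):

```python
def richardson_discretization_error(f_fine, f_coarse, r=2.0, p=2.0):
    """Estimated discretization error of the fine-grid value f_fine
    (i.e. f_fine - f_exact) from solutions on two systematically refined
    grids, with refinement factor r and formal order of accuracy p."""
    return (f_coarse - f_fine) / (r**p - 1.0)

# Illustrative numbers only: a second-order scheme with the grid spacing halved
err = richardson_discretization_error(f_fine=1.013, f_coarse=1.052)
f_extrapolated = 1.013 - err   # improved (extrapolated) estimate of the exact value
```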

  17. A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings

    ERIC Educational Resources Information Center

    Lee, Guemin; Lewis, Daniel M.

    2008-01-01

    The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error…

  18. A Posteriori Analysis of Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    SciTech Connect

    Donald Estep; Michael Holst; Simon Tavener

    2010-02-08

    This project was concerned with the accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the proposal, including: (1) Accurate and efficient computation; (2) Complex stability; and (3) Linking different physics. The research in this project focused on Multiscale Operator Decomposition methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.

  19. Error and bias in under-5 mortality estimates derived from birth histories with small sample sizes.

    PubMed

    Dwyer-Lindgren, Laura; Gakidou, Emmanuela; Flaxman, Abraham; Wang, Haidong

    2013-07-26

    Estimates of under-5 mortality at the national level for countries without high-quality vital registration systems are routinely derived from birth history data in censuses and surveys. Subnational or stratified analyses of under-5 mortality could also be valuable, but the usefulness of under-5 mortality estimates derived from birth histories from relatively small samples of women is not known. We aim to assess the magnitude and direction of error that can be expected for estimates derived from birth histories with small samples of women using various analysis methods. We perform a data-based simulation study using Demographic and Health Surveys. Surveys are treated as populations with known under-5 mortality, and samples of women are drawn from each population to mimic surveys with small sample sizes. A variety of methods for analyzing complete birth histories and one method for analyzing summary birth histories are used on these samples, and the results are compared to corresponding true under-5 mortality. We quantify the expected magnitude and direction of error by calculating the mean error, mean relative error, mean absolute error, and mean absolute relative error. All methods are prone to high levels of error at the smallest sample size with no method performing better than 73% error on average when the sample contains 10 women. There is a high degree of variation in performance between the methods at each sample size, with methods that contain considerable pooling of information generally performing better overall. Additional stratified analyses suggest that performance varies for most methods according to the true level of mortality and the time prior to survey. This is particularly true of the summary birth history method as well as complete birth history methods that contain considerable pooling of information across time. Performance of all birth history analysis methods is extremely poor when used on very small samples of women, both in terms of
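
    The four summary metrics named in the abstract are straightforward to compute once the simulated estimates and the "true" population values are in hand; a minimal sketch with hypothetical arrays:

```python
import numpy as np

def error_summaries(estimate, truth):
    """Mean error, mean relative error, mean absolute error and mean
    absolute relative error of a set of estimates against known values."""
    est, tru = np.asarray(estimate, float), np.asarray(truth, float)
    err, rel = est - tru, (est - tru) / tru
    return {
        "mean_error": err.mean(),
        "mean_relative_error": rel.mean(),
        "mean_absolute_error": np.abs(err).mean(),
        "mean_absolute_relative_error": np.abs(rel).mean(),
    }

# Hypothetical example: simulated under-5 mortality estimates vs. 'population' values
print(error_summaries([0.061, 0.048, 0.075], [0.055, 0.050, 0.070]))
```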

  20. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    DOE PAGES

    Ballantyne, A. P.; Andres, R.; Houghton, R.; ...

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the

  1. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    SciTech Connect

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr⁻¹ in the 1960s to 0.3 Pg C yr⁻¹ in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr⁻¹ in the 1960s to almost 1.0 Pg C yr⁻¹ during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half

  2. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere
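
    The bookkeeping behind these uncertainty figures reduces to net uptake = (fossil fuel + land use emissions) − atmospheric growth, with independent error terms combined in quadrature; a minimal sketch in which the σ values are illustrative stand-ins, not the paper's data:

```python
import numpy as np

def net_uptake_sigma(sigma_fossil, sigma_landuse, sigma_growth):
    """1-sigma uncertainty (Pg C/yr) of net C uptake = emissions - growth,
    combining independent error terms in quadrature."""
    return np.sqrt(sigma_fossil**2 + sigma_landuse**2 + sigma_growth**2)

# Illustrative only: growth-rate errors shrink while emission errors grow
print(2 * net_uptake_sigma(0.15, 0.35, 0.60))   # a "1960s-like" 2-sigma value
print(2 * net_uptake_sigma(0.50, 0.35, 0.15))   # a "2000s-like" 2-sigma value
```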

  3. Block-Regularized m × 2 Cross-Validated Estimator of the Generalization Error.

    PubMed

    Wang, Ruibo; Wang, Yu; Li, Jihong; Yang, Xingli; Yang, Jing

    2017-02-01

    A cross-validation method based on m replications of two-fold cross validation is called an m × 2 cross validation. An m × 2 cross validation is used in estimating the generalization error and in comparing algorithms' performance in machine learning. However, the variance of the estimator of the generalization error in m × 2 cross validation is easily affected by random partitions. Poor data partitioning may cause a large fluctuation in the number of overlapping samples between any two training (test) sets in m × 2 cross validation. This fluctuation results in a large variance in the m × 2 cross-validated estimator. The influence of the random partitions on variance becomes serious as m increases. Thus, in this study, the partitions with a restricted number of overlapping samples between any two training (test) sets are defined as a block-regularized partition set. The corresponding cross validation is called block-regularized m × 2 cross validation (m × 2 BCV). It can effectively reduce the influence of random partitions. We prove that the variance of the m × 2 BCV estimator of the generalization error is smaller than the variance of the m × 2 cross-validated estimator and reaches the minimum in a special situation. An analytical expression of the variance can also be derived in this special situation. This conclusion is validated through simulation experiments. Furthermore, a practical construction method of m × 2 BCV by a two-level orthogonal array is provided. Finally, a conservative estimator is proposed for the variance of the estimator of the generalization error.
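
    For orientation, a minimal sketch of the plain (unregularized) m × 2 cross-validated error estimator that the block-regularized variant improves on; the fit/predict/loss callables are generic placeholders, not tied to any particular learner:

```python
import numpy as np

def m_by_2_cv_error(X, y, fit, predict, loss, m=3, seed=None):
    """Plain m x 2 cross-validated estimate of the generalization error:
    repeat a random half/half split m times, train on each half, test on
    the other half, and average the 2*m test losses.  The block-regularized
    variant additionally restricts the overlap between the m partitions.

    X, y : numpy arrays; fit(X, y) -> model; predict(model, X) -> yhat;
    loss(y_true, y_pred) -> scalar.
    """
    rng = np.random.default_rng(seed)
    n, losses = len(y), []
    for _ in range(m):
        perm = rng.permutation(n)
        halves = (perm[: n // 2], perm[n // 2:])
        for train, test in (halves, halves[::-1]):
            model = fit(X[train], y[train])
            losses.append(loss(y[test], predict(model, X[test])))
    return float(np.mean(losses))
```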

  4. Large area aggregation and mean-squared prediction error estimation for LACIE yield and production forecasts. [wheat

    NASA Technical Reports Server (NTRS)

    Chhikara, R. S.; Feiveson, A. H. (Principal Investigator)

    1979-01-01

    Aggregation formulas are given for production estimation of a crop type for a zone, a region, and a country, and methods for estimating yield prediction errors for the three areas are described. A procedure is included for obtaining a combined yield prediction and its mean-squared error estimate for a mixed wheat pseudozone.

  5. Estimation of Sampling Errors and Scale Parameters Using Rainfall Data Analyses.

    NASA Astrophysics Data System (ADS)

    Soman, Vishwas V.

    Estimation of the sampling error in rainfall measurements is an important issue because the accuracy of these measurements can influence the accuracy of the results from global circulation models (GCMs). This study addressed the issue of the sampling errors in rainfall measurements from space using statistical analyses of rainfall data. The rainfall data collected during 1988, in the vicinity of Darwin, Australia, were analyzed in this study. The statistical analyses were conducted in one, two, and three dimensions. One-dimensional analyses were performed on area-averaged time series of land, ocean, and combined precipitation of the Darwin I and Darwin II subsets. A strong diurnal signal was detected from periodograms and correlograms. Periodograms and correlograms also indicated the presence of a semidiurnal cycle. Simulated sampling error studies conducted using area-averaged precipitation time series for Darwin I and II indicate that the sampling errors range from 3 to 20% of the mean for sampling intervals from 5 to 12 hr. Removal of the semidiurnal cycle from the data reduced the errors by about 40 to 50%. Sampling errors were as high as 60% for a sampling interval of 24 hr. In this case, the removal of the diurnal cycle from the data reduced the sampling errors by about 30 to 40%. Two-dimensional rainfall fields were obtained by averaging the data along the West-East, North-South, and Time axes. Two-dimensional periodograms estimated for these fields show the diurnal and semidiurnal cycles very clearly. The variations in the data are primarily in time. The rainfall fields were found to be almost isotropic in space. In the three-dimensional analyses, periodograms were obtained using a three-dimensional Fourier transform, and these were used to obtain the sampling errors using the North-Nakamoto method. The sampling errors range from 5 to 30% for sampling intervals from 5 to 13 hr. A significant increase in the sampling errors can be noticed for a sampling interval of
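
    A minimal sketch of the one-dimensional step, assuming an hourly, uniformly sampled area-averaged rain-rate series: the periodogram computed below would show the diurnal cycle as a peak near 1/24 cycles per hour and the semidiurnal cycle near 1/12.

```python
import numpy as np

def periodogram(x, dt_hours=1.0):
    """Periodogram of a uniformly sampled, area-averaged rain-rate series x;
    returns (frequencies in cycles per hour, spectral power)."""
    x = np.asarray(x, float)
    x = x - x.mean()                       # remove the mean before transforming
    power = np.abs(np.fft.rfft(x))**2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=dt_hours)
    return freqs, power
```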

  6. Estimating tree biomass regressions and their error, proceedings of the workshop on tree biomass regression functions and their contribution to the error

    Treesearch

    Eric H. Wharton; Tiberius Cunia

    1987-01-01

    Proceedings of a workshop co-sponsored by the USDA Forest Service, the State University of New York, and the Society of American Foresters. Presented were papers on the methodology of sample tree selection, tree biomass measurement, construction of biomass tables and estimation of their error, and combining the error of biomass tables with that of the sample plots or...

  7. Distributed bounded-error state estimation based on practical robust positive invariance

    NASA Astrophysics Data System (ADS)

    Riverso, Stefano; Rubini, Daria; Ferrari-Trecate, Giancarlo

    2015-11-01

    We propose a state estimator for linear discrete-time systems composed by coupled subsystems affected by bounded disturbances. The architecture is distributed in the sense that each subsystem is equipped with a local state estimator that exploits suitable pieces of information from parent subsystems. Furthermore, each local estimator reconstructs the state of the corresponding subsystem only. Different from methods based on moving horizon estimation, our approach does not require the online solution to optimisation problems. Our state estimation scheme, which is based on the notion of practical robust positive invariance, also guarantees satisfaction of constraints on local estimation errors and it can be updated with a limited computational effort when subsystems are added or removed.

  8. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.

  9. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear at the station-sparse area of northern and western China with the maximum value exceeding 2.0 K2 while small sampling error variances are found at the station-dense area of southern and eastern China with most grid values being less than 0.05 K2. In general, the negative temperature existed in each month prior to the 1980s, and a warming in temperature began thereafter, which accelerated in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  10. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2016-06-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear at the station-sparse area of northern and western China with the maximum value exceeding 2.0 K2 while small sampling error variances are found at the station-dense area of southern and eastern China with most grid values being less than 0.05 K2. In general, the negative temperature existed in each month prior to the 1980s, and a warming in temperature began thereafter, which accelerated in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  11. Improving MIMO-OFDM decision-directed channel estimation by utilizing error-correcting codes

    NASA Astrophysics Data System (ADS)

    Beinschob, P.; Lieberei, M.; Zölzer, U.

    2009-05-01

    In this paper a decision-directed Multiple-Input Multiple-Output (MIMO) channel tracking algorithm is enhanced to raise the channel estimate accuracy. While DDCE is prone to error propagation the enhancement employs channel decoding in the tracking process. Therefore, a quantized block of symbols is checked on consistency via the channel decoder, possibly corrected and then used. This yields a more robust tracking of the channel in terms of bit error rate and improves the channel estimate under certain conditions. Equalization is performed to prove the feasibility of the obtained channel estimate. Therefore a combined signal consisting of data and pilot symbols is sent. Adaptive filters are applied to exploit correlations in time, frequency and spatial domain. By using good error-correcting coding schemes like Turbo Codes or Low Density Parity Check (LDPC) codes, adequate channel estimates can be acquired even at low signal to noise ratios (SNR). The proposed algorithm among two others is applied for channel estimation and equalization and results are compared.

  12. Stochastic error whitening algorithm for linear filter estimation with noisy data.

    PubMed

    Rao, Yadunandana N; Erdogmus, Deniz; Rao, Geetha Y; Principe, Jose C

    2003-01-01

    Mean squared error (MSE) has been the most widely used tool to solve the linear filter estimation or system identification problem. However, MSE gives biased results when the input signals are noisy. This paper presents a novel stochastic gradient algorithm based on the recently proposed error whitening criterion (EWC) to tackle the problem of linear filter estimation in the presence of additive white disturbances. We will briefly motivate the theory behind the new criterion and derive an online stochastic gradient algorithm. Convergence proof of the stochastic gradient algorithm is derived making mild assumptions. Further, we will propose some extensions to the stochastic gradient algorithm to ensure faster, step-size independent convergence. We will perform extensive simulations and compare the results with MSE as well as total-least squares in a parameter estimation problem. The stochastic EWC algorithm has many potential applications. We will use this in designing robust inverse controllers with noisy data.

  13. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A. J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  14. A fast algorithm for the estimation of statistical error in DNS (or experimental) time averages

    NASA Astrophysics Data System (ADS)

    Russo, Serena; Luchini, Paolo

    2017-10-01

    Time- and space-averaging of the instantaneous results of DNS (or experimental measurements) represent a standard final step, necessary for the estimation of their means or correlations or other statistical properties. These averages are necessarily performed over a finite time and space window, and are therefore more correctly just estimates of the 'true' statistical averages. The choice of the appropriate window size is most often subjectively based on individual experience, but as subtler statistics enter the focus of investigation, an objective criterion becomes desirable. Here a modification of the classical estimator of averaging error of finite time series, i.e. 'batch means' algorithm, will be presented, which retains its speed while removing its biasing error. As a side benefit, an automatic determination of batch size is also included. Examples will be given involving both an artificial time series of known statistics and an actual DNS of turbulence.
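
    For context, a minimal sketch of the classical batch-means estimator that the abstract sets out to improve: split the series into contiguous batches and use the spread of the batch means to estimate the standard error of the overall time average. The bias issue and the automatic batch-size selection addressed by the authors are not handled here.

```python
import numpy as np

def batch_means_stderr(x, n_batches=32):
    """Classical batch-means estimate of the standard error of the mean of a
    correlated series x: average within contiguous batches, then use the
    spread of the batch means (valid when batches are long enough to be
    nearly independent)."""
    x = np.asarray(x, float)
    usable = (len(x) // n_batches) * n_batches     # drop the remainder
    means = x[:usable].reshape(n_batches, -1).mean(axis=1)
    return float(np.sqrt(means.var(ddof=1) / n_batches))
```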

  15. Wrinkles in the rare biosphere: Pyrosequencing errors can lead to artificial inflation of diversity estimates

    SciTech Connect

    Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip

    2009-08-01

    Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.

  16. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
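
    A very reduced sketch of the idea as described in the abstract: compute a histogram entropy in a sliding window over the received samples and flag the position of the sharpest entropy decrease as the first-path candidate. Window length, bin count, and the index mapping are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def entropy_toa_index(signal, window=64, bins=16):
    """Sliding-window histogram entropy of the received samples; returns the
    index of the sharpest entropy decrease as a rough first-path candidate."""
    s = np.asarray(signal, float)
    ent = []
    for k in range(len(s) - window):
        hist, _ = np.histogram(s[k:k + window], bins=bins)
        p = hist[hist > 0] / window
        ent.append(-(p * np.log2(p)).sum())
    drop = np.diff(np.asarray(ent))
    return int(np.argmin(drop)) + window           # rough map back to a sample index
```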

  17. An analysis of errors in special sensor microwave imager evaporation estimates over the global oceans

    NASA Technical Reports Server (NTRS)

    Esbensen, S. K.; Chelton, D. B.; Vickers, D.; Sun, J.

    1993-01-01

    The method proposed by Liu (1984) is used to estimate monthly averaged evaporation over the global oceans from 1 yr of special sensor microwave imager (SSM/I) data. Intercomparisons involving SSM/I and in situ data are made over a wide range of oceanic conditions during August 1987 and February 1988 to determine the source of errors in the evaporation estimates. The most significant spatially coherent evaporation errors are found to come from estimates of near-surface specific humidity, q. Systematic discrepancies of over 2 g/kg are found in the tropics, as well as in the middle and high latitudes. The q errors are partitioned into contributions from the parameterization of q in terms of the columnar water vapor, i.e., the Liu q/W relationship, and from the retrieval algorithm for W. The effects of W retrieval errors are found to be smaller over most of the global oceans and due primarily to the implicitly assumed vertical structures of temperature and specific humidity on which the physically based SSM/I retrievals of W are based.

  18. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems.

    PubMed

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-05-21

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches.

  19. Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation

    NASA Astrophysics Data System (ADS)

    de Haan, Siebren

    2016-08-01

    Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated observations, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained utilizing information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s⁻¹ near the surface and around 1.1 ± 0.3 m s⁻¹ at 500 hPa.
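
    The triple-collocation error variances follow from pairwise covariances of the differences between the three collocated data sets, under the stated assumption of mutually uncorrelated errors; a minimal sketch (inter-calibration of the data sets is assumed to have been done already):

```python
import numpy as np

def triple_collocation_error_std(a, b, c):
    """Error standard deviation of each of three collocated series a, b, c
    measuring the same quantity, assuming mutually uncorrelated, zero-mean
    errors (classical triple collocation)."""
    a, b, c = (np.asarray(v, float) for v in (a, b, c))
    cov = lambda x, y: np.cov(x, y)[0, 1]
    var_a = cov(a - b, a - c)
    var_b = cov(b - a, b - c)
    var_c = cov(c - a, c - b)
    return tuple(np.sqrt(max(v, 0.0)) for v in (var_a, var_b, var_c))
```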

  20. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).

  1. Estimating Pole/Zero Errors in GSN-IU Network Calibration Metadata

    NASA Astrophysics Data System (ADS)

    Ringler, A. T.; Hutt, C. R.; Bolton, H. F.; Storm, T.; Gee, L. S.

    2010-12-01

    Converting the voltage output of a seismometer into ground motion requires correction of the data using a description of the instrument’s response. For the Global Seismographic Network (GSN), as well as many other networks, this instrument response is represented as a Laplace pole/zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. (Many GSN stations are operated by IRIS and USGS with network code “IU”.) This Laplace representation assumes that the seismometer behaves as a perfectly linear system, with temporal changes described adequately through multiple epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We developed an iterative three-step method to estimate instrument response model parameters (poles, zeros, and sensitivity and normalization parameters) and their associated errors using random calibration signals. First, we solve a coarse non-linear inverse problem using a least squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records. Second, we solve a non-linear parameter estimation problem by an iterative method to obtain the least squares best-fit Laplace pole/zero model. Third, by applying the central limit theorem we estimate the errors in this pole/zero model by solving the inverse problem at each frequency in a 2/3rds-octave band centered at each best-fit pole/zero frequency. This procedure yields error estimates of the >99% confidence interval. We demonstrate this method by applying it to a number of recent IU network calibration records.

  2. Estimation methods with ordered exposure subject to measurement error and missingness in semi-ecological design

    PubMed Central

    2012-01-01

    Background In epidemiological studies, it is often not possible to measure accurately the exposures of participants even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are all assigned the sample mean of exposure measurements from their group when evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from ‘large’ samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement error, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this ordering can be incorporated into the estimation procedure by constrained estimation methods together with expectation-maximization (EM) algorithms for regression models having measurement error and missing values. Four methods were compared in a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation-maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results The naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be the best among them when, within each exposure group, at least a ‘moderate’ number of individuals have their exposures observed with error

  3. Determination of quantitative trait variants by concordance via application of the a posteriori granddaughter design to the U.S. Holstein population

    USDA-ARS's Scientific Manuscript database

    Experimental designs that exploit family information can provide substantial predictive power in quantitative trait variant discovery projects. Concordance between quantitative trait locus genotype as determined by the a posteriori granddaughter design and marker genotype was determined for 29 trai...

  4. A variational method for finite element stress recovery and error estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Macy, S. C.

    1993-01-01

    A variational method for obtaining smoothed stresses from a finite element derived nonsmooth stress field is presented. The method is based on minimizing a functional involving discrete least-squares error plus a penalty constraint that ensures smoothness of the stress field. An equivalent accuracy criterion is developed for the smoothing analysis which results in a C¹-continuous smoothed stress field possessing the same order of accuracy as that found at the superconvergent optimal stress points of the original finite element analysis. Application of the smoothing analysis to residual error estimation is also demonstrated.
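
    The functional described here has the familiar penalized least-squares form; a minimal one-dimensional sketch with a second-difference smoothness penalty, standing in for the paper's finite element formulation:

```python
import numpy as np

def smooth_stress(sigma_raw, lam=1.0):
    """Smoothed field s minimizing ||s - sigma_raw||^2 + lam * ||D2 s||^2,
    where D2 is a second-difference operator (1-D illustration of the
    least-squares-plus-smoothness-penalty idea)."""
    sigma_raw = np.asarray(sigma_raw, float)
    n = len(sigma_raw)
    D2 = np.zeros((n - 2, n))
    for i in range(n - 2):
        D2[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, sigma_raw)
```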

  5. WAVELET-BASED BAYESIAN ESTIMATION OF PARTIALLY LINEAR REGRESSION MODELS WITH LONG MEMORY ERRORS

    PubMed Central

    Ko, Kyungduk; Qu, Leming; Vannucci, Marina

    2013-01-01

    In this paper we focus on partially linear regression models with long memory errors, and propose a wavelet-based Bayesian procedure that allows the simultaneous estimation of the model parameters and the nonparametric part of the model. Employing discrete wavelet transforms is crucial in order to simplify the dense variance-covariance matrix of the long memory error. We achieve a fully Bayesian inference by adopting a Metropolis algorithm within a Gibbs sampler. We evaluate the performances of the proposed method on simulated data. In addition, we present an application to Northern hemisphere temperature data, a benchmark in the long memory literature. PMID:23946613

  6. Extended Scene SH Wavefront Sensor Algorithm: Minimization of Scene Content Dependent Shift Estimation Errors

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    The Adaptive Periodic-Correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the sub-images in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift-estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper we assess the amount of that error and propose a method to minimize it.

  7. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Kania, Dariusz

    2017-06-01

    In recent years the problem of the impact of mechanical vibrations in adaptive optics (AO) systems has been renewed. These signals are damped sinusoidal signals and have a deleterious effect on the system. One software solution for rejecting the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and update of the reference signal to reject/minimize the vibration. In the first step the choice of estimation method is a very important problem. A very accurate and fast (below 10 ms) estimation method for these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper the estimation method is used in the AVC method to increase the system performance. There are several parameters that affect the accuracy of the obtained results, e.g. CiR - number of signal periods in a measurement window, N - number of samples in the FFT procedure, H - time window order, SNR, b - number of ADC bits, γ - damping ratio of the tested signal. Systematic errors increase when N, CiR, H decrease and when γ increases. The value of the systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.

  8. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1977-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are obtained. The approach is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. A general representation for optimum estimates and recursive equations for minimum mean squared error (MMSE) estimates are obtained. MMSE estimates are nonlinear functions of the observations. The problem of estimating the rate of a DTJP is considered for the case in which the rate is a random variable with a probability density function of the form c x^K (1 - x)^m, and it is shown that the MMSE estimates are linear in this case. This class of density functions explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.

  9. Optimal estimation for discrete time jump processes

    NASA Technical Reports Server (NTRS)

    Vaca, M. V.; Tretter, S. A.

    1978-01-01

    Optimum estimates of nonobservable random variables or random processes which influence the rate functions of a discrete time jump process (DTJP) are derived. The approach used is based on the a posteriori probability of a nonobservable event expressed in terms of the a priori probability of that event and of the sample function probability of the DTJP. Thus a general representation is obtained for optimum estimates, and recursive equations are derived for minimum mean-squared error (MMSE) estimates. In general, MMSE estimates are nonlinear functions of the observations. The problem is considered of estimating the rate of a DTJP when the rate is a random variable with a beta probability density function and the jump amplitudes are binomially distributed. It is shown that the MMSE estimates are linear. The class of beta density functions is rather rich and explains why there are insignificant differences between optimum unconstrained and linear MMSE estimates in a variety of problems.
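
    The linearity result is the familiar beta-binomial conjugacy: with a Beta(α, β) prior on the rate and k jumps observed in n trials, the posterior-mean (MMSE) estimate is (α + k)/(α + β + n), which is linear in k. A minimal numerical check with arbitrary parameter values:

```python
def beta_binomial_mmse(k, n, alpha, beta):
    """Posterior mean of a Beta(alpha, beta)-distributed rate after observing
    k jumps in n trials; note the estimate is linear in k."""
    return (alpha + k) / (alpha + beta + n)

# Same slope 1/(alpha + beta + n) for every k, illustrating the linearity claim
print([round(beta_binomial_mmse(k, n=10, alpha=2, beta=3), 3) for k in range(4)])
```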

  10. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  11. A Comparison of Maximum Likelihood and Bayesian Estimation for Polychoric Correlation Using Monte Carlo Simulation

    ERIC Educational Resources Information Center

    Choi, Jaehwa; Kim, Sunhee; Chen, Jinsong; Dannels, Sharon

    2011-01-01

    The purpose of this study is to compare the maximum likelihood (ML) and Bayesian estimation methods for polychoric correlation (PCC) under diverse conditions using a Monte Carlo simulation. Two new Bayesian estimates, maximum a posteriori (MAP) and expected a posteriori (EAP), are compared to ML, the classic solution, to estimate PCC. Different…

  12. Estimates of ocean forecast error covariance derived from Hessian Singular Vectors

    NASA Astrophysics Data System (ADS)

    Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.

    2015-05-01

    Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual

  13. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
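
    A minimal sketch of the cluster-bootstrap idea, written for a generic fitting routine rather than Cox regression; the fit_coef callable and the data layout are assumptions for illustration only:

```python
import numpy as np

def cluster_bootstrap_se(data, cluster_ids, fit_coef, n_boot=500, seed=None):
    """Standard error of a coefficient obtained by resampling whole clusters
    with replacement and refitting, which preserves within-cluster correlation.

    data        : array with one row per individual
    cluster_ids : array of cluster labels, one per row
    fit_coef    : function(data_subset) -> scalar coefficient estimate
    """
    rng = np.random.default_rng(seed)
    data, cluster_ids = np.asarray(data), np.asarray(cluster_ids)
    clusters = np.unique(cluster_ids)
    estimates = []
    for _ in range(n_boot):
        drawn = rng.choice(clusters, size=len(clusters), replace=True)
        rows = np.concatenate([np.where(cluster_ids == c)[0] for c in drawn])
        estimates.append(fit_coef(data[rows]))
    return float(np.std(estimates, ddof=1))
```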

  14. An Error-Reduction Algorithm to Improve Lidar Turbulence Estimates for Wind Energy

    SciTech Connect

    Newman, Jennifer F.; Clifton, Andrew

    2016-08-01

    Currently, cup anemometers on meteorological (met) towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability. However, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install met towers at potential sites. As a result, remote sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. While lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with lidars. This uncertainty in lidar turbulence measurements is one of the key roadblocks that must be overcome in order to replace met towers with lidars for wind energy applications. In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), a commonly used parameter in wind energy. In the next step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.

  15. Mass load estimation errors utilizing grab sampling strategies in a karst watershed

    USGS Publications Warehouse

    Fogle, A.W.; Taraba, J.L.; Dinger, J.S.

    2003-01-01

    Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.

  16. Density-preserving sampling: robust and efficient alternative to cross-validation for error estimation.

    PubMed

    Budka, Marcin; Gabrys, Bogdan

    2013-01-01

    Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers.

  17. Real-Time Baseline Error Estimation and Correction for GNSS/Strong Motion Seismometer Integration

    NASA Astrophysics Data System (ADS)

    Li, C. Y. N.; Groves, P. D.; Ziebart, M. K.

    2014-12-01

    Accurate and rapid estimation of permanent surface displacement is required immediately after a slip event for earthquake monitoring or tsunami early warning. It is difficult to achieve the necessary accuracy and precision at high and low frequencies using GNSS or seismometry alone. GNSS and seismic sensors can be integrated to overcome the limitations of each. Kalman filter algorithms with displacement and velocity states have been developed to combine GNSS and accelerometer observations and obtain optimal displacement solutions. However, sawtooth-like artifacts caused by sensor bias or tilting decrease the accuracy of the displacement estimates. A three-dimensional Kalman filter algorithm with an additional baseline error state has been developed. An experiment with both a GNSS receiver and a strong motion seismometer mounted on a movable platform and subjected to known displacements was carried out. The results clearly show that the additional baseline error state enables the Kalman filter to estimate the instrument's bias and tilt effects and correct the state estimates in real time. Furthermore, the proposed Kalman filter algorithm has been validated with data sets from the 2010 Mw 7.2 El Mayor-Cucapah Earthquake. The results indicate that the additional baseline error state can not only eliminate the linear and quadratic drifts but also reduce the sawtooth-like effects in the displacement solutions. The conventional zero-mean baseline-corrected results cannot show the permanent displacements after an earthquake, and the two-state Kalman filter can only provide stable and optimal solutions if the strong motion seismometer has not been moved or tilted by the earthquake. The proposed Kalman filter, in contrast, achieves precise and accurate displacements by estimating and correcting for the baseline error at each epoch. The integration filters out noise-like distortions and thus improves real-time detection and measurement capability.
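
    A minimal per-axis sketch of a Kalman filter with displacement, velocity, and baseline-error states, in the spirit of the filter described above (the random-walk bias model and all noise settings are assumptions, not the authors' tuning):

    ```python
    import numpy as np

    class GnssAccelKalman:
        """Per-axis filter with state x = [displacement, velocity, baseline error].
        The accelerometer record drives the prediction step (corrected by the
        estimated baseline error); the GNSS displacement is the measurement update."""

        def __init__(self, dt, q_accel=1e-2, q_bias=1e-6, r_gnss=1e-4):
            self.x = np.zeros(3)
            self.P = np.eye(3)
            self.F = np.array([[1.0, dt, -0.5 * dt**2],
                               [0.0, 1.0, -dt],
                               [0.0, 0.0, 1.0]])
            self.B = np.array([0.5 * dt**2, dt, 0.0])   # maps measured acceleration
            self.Q = np.diag([0.25 * dt**4 * q_accel, dt**2 * q_accel, q_bias])
            self.H = np.array([[1.0, 0.0, 0.0]])        # GNSS observes displacement
            self.R = np.array([[r_gnss]])

        def predict(self, accel_meas):
            self.x = self.F @ self.x + self.B * accel_meas
            self.P = self.F @ self.P @ self.F.T + self.Q

        def update(self, gnss_disp):
            y = gnss_disp - self.H @ self.x             # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(3) - K @ self.H) @ self.P
    ```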

  18. Model error estimation in composite impact response prediction using hierarchical Bayes networks

    NASA Astrophysics Data System (ADS)

    Salas Mendez, Pablo Antonio

    Predicting the failure response of complex systems often requires computational models that can capture the nonlinear response of the material and structure across multiple scales. Typically, the output response is a direct result of the complex interactions of different phenomena at different scales of the hierarchical system. Therefore, computed model errors correspond to accumulated model errors that have been propagated across several levels of the system. The objective of the current work is to identify and quantify the errors introduced by computational models at different scales in the ballistic impact response simulation of a composite laminate. To that end, a Bayesian network based framework was implemented to systematically estimate the contribution of model uncertainty to the response prediction at each sub-scale of the composite problem. The developed method can be used for optimal allocation of validation resources by determining the type and number of experimental tests needed to reduce uncertainty at different subsystem levels of large engineering systems.
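
    As a schematic of how sub-scale model errors accumulate through a hierarchy (a plain Monte Carlo stand-in, not the dissertation's Bayesian network), consider two nested placeholder models, each with its own assumed error term:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def micro_model(strain):                  # hypothetical sub-scale model
        return 2.0 * strain

    def macro_model(micro_out, velocity):     # hypothetical system-level model
        return micro_out * np.sqrt(velocity)

    n = 10_000
    micro_err = rng.normal(0.0, 0.05, n)      # assumed micro-scale model error
    macro_err = rng.normal(0.0, 0.10, n)      # assumed macro-scale model error

    strain, velocity = 0.01, 400.0
    # Errors introduced at the lower scale propagate into the system response
    # and combine with the macro-scale error.
    response = macro_model(micro_model(strain) + micro_err, velocity) + macro_err
    print(f"mean = {response.mean():.3f}, std = {response.std():.3f}")
    ```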

  19. Effects of Estimation Bias on Multiple-Category Classification with an IRT-Based Adaptive Classification Procedure

    ERIC Educational Resources Information Center

    Yang, Xiangdong; Poggio, John C.; Glasnapp, Douglas R.

    2006-01-01

    The effects of five ability estimators, that is, the maximum likelihood estimator, weighted likelihood estimator, maximum a posteriori, expected a posteriori, and Owen's sequential estimator, on the performance of the item response theory-based adaptive classification procedure with multiple categories were studied via simulations. The following…
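
    For reference, one of the estimators named above is straightforward to sketch: an expected a posteriori (EAP) ability estimate under a 2PL model with a standard-normal prior, using illustrative (hypothetical) item parameters.

    ```python
    import numpy as np

    def p_2pl(theta, a, b):
        """Two-parameter logistic item response probability."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def eap_estimate(responses, a, b, n_quad=61):
        """EAP ability estimate with a standard-normal prior on a quadrature grid."""
        theta = np.linspace(-4.0, 4.0, n_quad)
        prior = np.exp(-0.5 * theta**2)
        # Response probabilities at every quadrature node: shape (n_quad, n_items)
        p = p_2pl(theta[:, None], a[None, :], b[None, :])
        likelihood = np.prod(np.where(responses, p, 1.0 - p), axis=1)
        posterior = prior * likelihood
        return np.sum(theta * posterior) / np.sum(posterior)

    # Illustrative items (discriminations a, difficulties b); responses 1, 1, 0
    a = np.array([1.2, 0.8, 1.5])
    b = np.array([-0.5, 0.0, 1.0])
    print(eap_estimate(np.array([True, True, False]), a, b))
    ```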

  20. Relative Precision of Ability Estimation in Polytomous CAT: A Comparison under the Generalized Partial Credit Model and Graded Response Model.

    ERIC Educational Resources Information Center

    Wang, Shudong; Wang, Tianyou

    The purpose of this Monte Carlo study was to evaluate the relative accuracy of T. Warm's weighted likelihood estimate (WLE) compared to maximum likelihood estimate (MLE), expected a posteriori estimate (EAP), and maximum a posteriori estimate (MAP), using the generalized partial credit model (GPCM) and graded response model (GRM) under a variety…
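
    For reference, category response probabilities under the generalized partial credit model mentioned above can be computed as follows (illustrative item parameters; this is not the study's code):

    ```python
    import numpy as np

    def gpcm_probs(theta, a, b):
        """Category probabilities under the GPCM.
        a: item discrimination; b: step difficulties (length m for m+1 categories)."""
        # Cumulative sums of a*(theta - b_j); category 0 contributes 0 by convention
        z = np.concatenate(([0.0], np.cumsum(a * (theta - b))))
        ez = np.exp(z - z.max())        # subtract max for numerical stability
        return ez / ez.sum()

    print(gpcm_probs(theta=0.5, a=1.2, b=np.array([-1.0, 0.0, 1.0])))
    ```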