Sample records for estimated systematic error

  1. Statistical errors in Monte Carlo estimates of systematic errors

    NASA Astrophysics Data System (ADS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
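
    A hedged toy, not taken from the paper: the sketch below compares unisim and multisim estimates of the total systematic variance in one data bin for a purely linear model. The sensitivities, the MC noise level, and the noise-bias subtraction are assumptions of this illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_sys = 20                                 # number of systematic parameters
    true_shift = rng.normal(0.0, 1.0, n_sys)   # 1-sigma effect of each on bin m
    sigma_mc = 2.0                             # MC statistical noise per run
    true_var = np.sum(true_shift**2)           # true systematic variance in bin m

    def unisim():
        # one MC run per parameter, varied by +1 sigma; each run carries MC noise
        shifts = true_shift + rng.normal(0.0, sigma_mc, n_sys)
        return np.sum(shifts**2) - n_sys * sigma_mc**2   # subtract noise bias

    def multisim(n_runs=20):
        # every run varies all parameters, drawn from their assumed normal priors
        alphas = rng.normal(0.0, 1.0, (n_runs, n_sys))
        preds = alphas @ true_shift + rng.normal(0.0, sigma_mc, n_runs)
        return np.var(preds, ddof=1) - sigma_mc**2       # subtract noise bias

    uni = [unisim() for _ in range(2000)]
    multi = [multisim() for _ in range(2000)]
    print(f"true variance {true_var:.1f}")
    print(f"unisim   mean {np.mean(uni):6.1f}, rms spread {np.std(uni):.1f}")
    print(f"multisim mean {np.mean(multi):6.1f}, rms spread {np.std(multi):.1f}")
    ```

    With the MC noise (2.0) larger than a typical individual systematic (~1.0) and the same 20 runs per trial, the multisim spread comes out smaller, consistent with the abstract's conclusion for that regime.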

  2. Robust estimation of systematic errors of satellite laser range

    Microsoft Academic Search

    Y. Yang; M. K. Cheng; C. K. Shum; B. D. Tapley

    1999-01-01

    Methods for analyzing laser-ranging residuals to estimate station-dependent systematic errors and to eliminate outliers in satellite laser ranges are discussed. A robust estimator based on an M-estimation principle is introduced. A practical calculation procedure is presented which provides a robust criterion with a high breakdown point and produces robust initial residuals for the subsequent iterative robust estimation. Comparison of the results

  3. Systematic Errors in the Estimation of Black Hole Masses by Reverberation Mapping

    E-print Network

    Julian H. Krolik

    2000-12-06

    The mass of the central black hole in many active galactic nuclei has been estimated on the basis of the assumption that the dynamics of the broad emission line gas are dominated by the gravity of the black hole. The most commonly-employed method is to estimate a characteristic size-scale $r_*$ from reverberation mapping experiments and combine it with a characteristic velocity $v_*$ taken from the line profiles; the inferred mass is then estimated by $r_* v_*^2/G$. We critically discuss the evidence supporting the assumption of gravitational dynamics and find that the arguments are still inconclusive. We then explore the range of possible systematic error if the assumption of gravitational dynamics is granted. Inclination relative to a flattened system may cause a systematic underestimate of the central mass by a factor $\sim (h/r)^2$, where $h/r$ is the aspect ratio of the flattening. The coupled effects of a broad radial emissivity distribution, an unknown angular radiation pattern of line emission, and sub-optimal sampling in the reverberation experiment can cause additional systematic errors as large as a factor of 3 or more in either direction.
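
    For concreteness, a minimal sketch of the virial estimate $r_* v_*^2/G$ quoted above; the numerical inputs and the aspect ratio are hypothetical, not values from the paper.

    ```python
    # Reverberation-mapping virial mass M = r v^2 / G (SI units throughout).
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8          # speed of light, m/s
    M_SUN = 1.989e30     # solar mass, kg

    def virial_mass_msun(r_light_days, v_km_s):
        r = r_light_days * 86400.0 * C    # light-days -> metres
        v = v_km_s * 1e3                  # km/s -> m/s
        return r * v**2 / G / M_SUN

    m_est = virial_mass_msun(10.0, 3000.0)   # hypothetical AGN values
    print(f"inferred mass: {m_est:.1e} M_sun")

    # For a flattened system viewed near face-on, the abstract's ~(h/r)^2
    # factor implies the estimate could sit well below the true mass:
    h_over_r = 0.3                           # hypothetical aspect ratio
    print(f"possible underestimate factor ~ {h_over_r**2:.2f}")
    ```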

  4. GPS meteorology: Reducing systematic errors in geodetic estimates for zenith delay

    Microsoft Academic Search

    Peng Fang; Michael Bevis; Yehuda Bock; Seth Gutman; Dan Wolfe

    1998-01-01

    Differences between long term precipitable water (PW) time series derived from radiosondes, microwave water vapor radiometers, and GPS stations reveal offsets that are often as much as 1-2 mm PW. All three techniques are thought to suffer from systematic errors of order 1 mm PW. Standard GPS processing algorithms are known to be sensitive to the choice of elevation cutoff

  5. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    NASA Astrophysics Data System (ADS)

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphaël; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngolé Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stéphane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean-Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-07-01

    We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  6. Systematic Errors in Cosmic Microwave Background Interferometry

    E-print Network

    Bunn, E. F.

    2006-01-01

    Cosmic microwave background (CMB) polarization observations will require superb control of systematic errors in order to achieve their full scientific potential, particularly in the case of attempts to detect the B modes that may provide a window on inflation. Interferometry may be a promising way to achieve these goals. This paper presents a formalism for characterizing the effects of a variety of systematic errors on interferometric CMB polarization observations, with particular emphasis on estimates of the B-mode power spectrum. The most severe errors are those that couple the temperature anisotropy signal to polarization; such errors include cross-talk within detectors, misalignment of polarizers, and cross-polarization. In a B mode experiment, the next most serious category of errors are those that mix E and B modes, such as gain fluctuations, pointing errors, and beam shape errors. The paper also indicates which sources of error may cause circular polarization (e.g., from foregrounds) to contaminate the...

  7. Estimation of Systematic Errors of MODIS Thermal Infrared Bands

    E-print Network

    Liang, Shunlin

    The systematic error in Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared (TIR) Bands 20-25 and 27-36 is estimated. There exist scan-to-scan overlapped pixels in MODIS data. By analyzing a sufficiently large amount of those

  8. Systematic Errors in Cosmic Microwave Background Interferometry

    E-print Network

    Emory F. Bunn

    2006-07-13

    Cosmic microwave background (CMB) polarization observations will require superb control of systematic errors in order to achieve their full scientific potential, particularly in the case of attempts to detect the B modes that may provide a window on inflation. Interferometry may be a promising way to achieve these goals. This paper presents a formalism for characterizing the effects of a variety of systematic errors on interferometric CMB polarization observations, with particular emphasis on estimates of the B-mode power spectrum. The most severe errors are those that couple the temperature anisotropy signal to polarization; such errors include cross-talk within detectors, misalignment of polarizers, and cross-polarization. In a B mode experiment, the next most serious category of errors are those that mix E and B modes, such as gain fluctuations, pointing errors, and beam shape errors. The paper also indicates which sources of error may cause circular polarization (e.g., from foregrounds) to contaminate the cosmologically interesting linear polarization channels, and conversely whether monitoring of the circular polarization channels may yield useful information about the errors themselves. For all the sources of error considered, estimates of the level of control that will be required for both E and B mode experiments are provided. Both experiments that interfere linear polarizations and those that interfere circular polarizations are considered. The fact that circular experiments simultaneously measure both linear polarization Stokes parameters in each baseline mitigates some sources of error.

  9. Systematic errors in cosmic microwave background interferometry

    SciTech Connect

    Bunn, Emory F. [Physics Department, University of Richmond, Richmond, Virginia 23173 (United States)

    2007-04-15

    Cosmic microwave background (CMB) polarization observations will require superb control of systematic errors in order to achieve their full scientific potential, particularly in the case of attempts to detect the B modes that may provide a window on inflation. Interferometry may be a promising way to achieve these goals. This paper presents a formalism for characterizing the effects of a variety of systematic errors on interferometric CMB polarization observations, with particular emphasis on estimates of the B-mode power spectrum. The most severe errors are those that couple the temperature anisotropy signal to polarization; such errors include cross talk within detectors, misalignment of polarizers, and cross polarization. In a B mode experiment, the next most serious category of errors are those that mix E and B modes, such as gain fluctuations, pointing errors, and beam shape errors. The paper also indicates which sources of error may cause circular polarization (e.g., from foregrounds) to contaminate the cosmologically interesting linear polarization channels, and conversely whether monitoring of the circular-polarization channels may yield useful information about the errors themselves. For all the sources of error considered, estimates of the level of control that will be required for both E and B mode experiments are provided. Simulations of a mock experiment are presented to illustrate the results. Both experiments that interfere linear polarizations and those that interfere circular polarizations are considered. The fact that circular experiments simultaneously measure both linear polarization Stokes parameters in each baseline mitigates some sources of error.

  10. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  11. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  12. Estimating GPS Positional Error

    NSDL National Science Digital Library

    Bill Witte

    After instructing students on basic receiver operation, each student will make many (10-20) position estimates of 3 benchmarks over a week. The different benchmarks will have different views of the skies or vegetation cover. Each student will download their data into a spreadsheet and calculate horizontal and vertical errors, which are collated into a class spreadsheet. The positions are sorted by error and plotted in a cumulative frequency plot. The students are encouraged to discuss the distribution, sources of error, and estimate confidence intervals. This exercise gives the students a gut feeling for confidence intervals and the accuracy of data. Students are asked to compare results from different types of data and benchmarks with different views of the sky.
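
    A sketch of the collation step the exercise describes, assuming each GPS fix has already been reduced to easting/northing/vertical offsets from the benchmark; the synthetic numbers below stand in for student data.

    ```python
    import numpy as np

    # each row: (easting_err_m, northing_err_m, vertical_err_m) for one fix
    fixes = np.random.default_rng(1).normal(0.0, [2.0, 2.0, 5.0], size=(15, 3))

    horiz = np.hypot(fixes[:, 0], fixes[:, 1])   # horizontal error per fix
    vert = np.abs(fixes[:, 2])                   # vertical error per fix

    for name, err in (("horizontal", horiz), ("vertical", vert)):
        e = np.sort(err)
        cum = np.arange(1, e.size + 1) / e.size  # cumulative frequency
        p95 = np.interp(0.95, cum, e)            # ~95% confidence radius
        print(f"{name}: median {np.median(e):.1f} m, 95% within {p95:.1f} m")
    ```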

  13. A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers

    SciTech Connect

    Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R. [William B. Hanson Center for Space Sciences, University of Texas at Dallas, 800 W. Campbell Rd. WT15, Richardson, Texas 75080 (United States)

    2009-05-15

    The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

  14. Systematic and statistical errors in a Bayesian approach to the estimation of the neutron-star equation of state using advanced gravitational wave detectors

    NASA Astrophysics Data System (ADS)

    Wade, Leslie; Creighton, Jolien D. E.; Ochsner, Evan; Lackey, Benjamin D.; Farr, Benjamin F.; Littenberg, Tyson B.; Raymond, Vivien

    2014-05-01

    Advanced ground-based gravitational-wave detectors are capable of measuring tidal influences in binary neutron-star systems. In this work, we report on the statistical uncertainties in measuring tidal deformability with a full Bayesian parameter estimation implementation. We show how simultaneous measurements of chirp mass and tidal deformability can be used to constrain the neutron-star equation of state. We also study the effects of waveform modeling bias and individual instances of detector noise on these measurements. We notably find that systematic error between post-Newtonian waveform families can significantly bias the estimation of tidal parameters, thus motivating the continued development of waveform models that are more reliable at high frequencies.

  15. Estimating the Odometry Error of a Mobile Robot during Navigation

    Microsoft Academic Search

    Agostino Martinelli; Roland Siegwart

    This paper addresses the problem of odometry error estimation during robot navigation. The robot is equipped with an external sensor (such as a laser range finder). Concerning the systematic error, an augmented Kalman filter is introduced. This filter estimates a state vector containing the robot configuration and the parameters characterizing the systematic component of the odometry error. It uses encoder

  16. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  17. A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers

    Microsoft Academic Search

    J. H. Klenzing; G. D. Earle; R. A. Heelis; W. R. Coley

    2009-01-01

    The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously

  18. Estimating and Correcting Global Weather Model Error

    Microsoft Academic Search

    Christopher M. Danforth; Eugenia Kalnay; Takemasa Miyoshi

    2007-01-01

    The purpose of the present study is to explore the feasibility of estimating and correcting systematic model errors using a simple and efficient procedure, inspired by papers by Leith as well as DelSole and Hou, that could be applied operationally, and to compare the impact of correcting the model integration with statistical corrections performed a posteriori. An elementary data assimilation

  19. Systematic errors in cosmic microwave background interferometry

    Microsoft Academic Search

    Bunn, Emory F.

    2007-01-01

    Cosmic microwave background (CMB) polarization observations will require superb control of systematic errors in order to achieve their full scientific potential, particularly in the case of attempts to detect the B modes that may provide a window on inflation. Interferometry may be a promising way to achieve these goals. This paper presents a formalism for characterizing the effects of a

  20. Systematic reviews, systematic error and the acquisition of clinical knowledge

    PubMed Central

    2010-01-01

    Background Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types of knowledge sources. Discussion The various types of clinical knowledge sources are categorised on the basis of Kant's categories of knowledge acquisition, as being either 'analytic' or 'synthetic'. It is shown that these categories do not act in opposition but rather, depend upon each other. The unity of analysis and synthesis in knowledge acquisition is demonstrated during the process of systematic reviewing of clinical trials. Systematic reviews constitute comprehensive synthesis of clinical knowledge but depend upon plausible, analytical hypothesis development for the trials reviewed. The dangers of systematic error regarding the internal validity of acquired knowledge are highlighted on the basis of empirical evidence. It has been shown that the systematic review process reduces systematic error, thus ensuring high internal validity. It is argued that this process does not exclude other types of knowledge sources. Instead, amongst these other types it functions as an integrated element during the acquisition of clinical knowledge. Conclusions The acquisition of clinical knowledge is based on interaction between analysis and synthesis. Systematic reviews provide the highest form of synthetic knowledge acquisition in terms of achieving internal validity of results. In that capacity it informs the analytic knowledge of the clinician but does not replace it. PMID:20537172

  1. Compensating for systematic errors in 5-axis NC machining

    Microsoft Academic Search

    Erik L. J. Bohez

    2002-01-01

    The errors introduced during 5-axis machining are higher than the intrinsic repeatability of the machine tool. It is possible to identify such systematic errors and compensate for them, thus achieving higher performance. A group of systematic errors can be compensated for directly in the inverse kinematics equations. Other systematic errors can be combined and compensated for through the total differentials

  2. Systematic errors in long baseline oscillation experiments

    SciTech Connect

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  3. Systematic errors in cosmic microwave background interferometry

    Microsoft Academic Search

    Emory F. Bunn

    2007-01-01

    Cosmic microwave background (CMB) polarization observations will require superb control of systematic errors in order to achieve their full scientific potential, particularly in the case of attempts to detect the B modes that may provide a window on inflation. Interferometry may be a promising way to achieve these goals. This paper presents a formalism for characterizing the effects of a

  4. Systematic Errors in measurement of b1

    SciTech Connect

    Wood, S A

    2014-10-27

    A class of spin observables can be obtained from the relative difference of or asymmetry between cross sections of different spin states of beam or target particles. Such observables have the advantage that the normalization factors needed to calculate absolute cross sections from yields often divide out or cancel to a large degree in constructing asymmetries. However, normalization factors can change with time, giving different normalization factors for different target or beam spin states, leading to systematic errors in asymmetries in addition to those determined from statistics. Rapidly flipping spin orientation, such as what is routinely done with polarized beams, can significantly reduce the impact of these normalization fluctuations and drifts. Target spin orientations typically require minutes to hours to change, versus fractions of a second for beams, making systematic errors for observables based on target spin flips more difficult to control. Such systematic errors from normalization drifts are discussed in the context of the proposed measurement of the deuteron b(1) structure function at Jefferson Lab.
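
    The flip-rate point lends itself to a toy demonstration; the drift shape and the magnitudes below are assumptions of this sketch, not values from the proposed measurement.

    ```python
    import numpy as np

    A_true = 0.02                    # true asymmetry
    n_bins = 1000                    # time bins across the run
    t = np.linspace(0.0, 3.0, n_bins)
    drift = 1.0 + 0.05 * np.sin(t)   # slow normalization drift

    def measured_asymmetry(flip_every):
        # spin state alternates every `flip_every` time bins
        spin = np.where((np.arange(n_bins) // flip_every) % 2 == 0, 1, -1)
        yields = drift * (1.0 + spin * A_true)
        yp, ym = yields[spin == 1].sum(), yields[spin == -1].sum()
        return (yp - ym) / (yp + ym)

    print(f"fast flips (beam-like):   A = {measured_asymmetry(1):.4f}")
    print(f"slow flips (target-like): A = {measured_asymmetry(500):.4f}"
          f"  (true {A_true})")
    ```

    Fast flipping samples the drift nearly identically in both spin states, so it divides out of the asymmetry; slow flipping does not.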

  5. Reducing Systematic Error in Weak Lensing Cluster Surveys

    NASA Astrophysics Data System (ADS)

    Utsumi, Yousuke; Miyazaki, Satoshi; Geller, Margaret J.; Dell'Antonio, Ian P.; Oguri, Masamune; Kurtz, Michael J.; Hamana, Takashi; Fabricant, Daniel G.

    2014-05-01

    Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ~3 deg². Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg², where we expect ~2000 peaks based on our Subaru fields. Based in part on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan.

  6. Significance in gamma ray astronomy with systematic errors

    NASA Astrophysics Data System (ADS)

    Spengler, Gerrit

    2015-07-01

    The influence of systematic errors on the calculation of the statistical significance of a γ-ray signal with the frequently invoked Li and Ma method is investigated. A simple criterion is derived to decide whether the Li and Ma method can be applied in the presence of systematic errors. An alternative method is discussed for cases where systematic errors are too large for the application of the original Li and Ma method. This alternative method reduces to the Li and Ma method when systematic errors are negligible. Finally, it is shown that the consideration of systematic errors will be important in many analyses of data from the planned Cherenkov Telescope Array.

  7. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  8. Correction of systematic odometry errors in mobile robots

    Microsoft Academic Search

    Johann Borenstein; Liqiang Feng

    1995-01-01

    This paper describes a practical method for reducing odometry errors caused by kinematic imperfections of a mobile robot. These errors, here referred to as "systematic" errors, stay almost constant over a prolonged period of time. Performing an occasional calibration as described here will increase the robot's odometric accuracy and reduce operation cost because an accurate mobile robot requires fewer absolute

  9. Systematic Error in Computation of Free Energy Changes

    NASA Astrophysics Data System (ADS)

    Zuckerman, Daniel; Woolf, Thomas

    2003-03-01

    Systematic inaccuracy is inherent in any computational estimate of a non-linear average, due to the availability of only a finite number of data values, N. Free energy differences ΔF between two states or systems are critically important examples of such averages in physical, chemical and biological settings --- including the biomolecular problems of protein engineering and drug design. Previous work has demonstrated, empirically, that the "finite-sampling error" can be very large --- many times kT --- in ΔF estimates for simple molecular systems. The present contribution presents a theoretical description of the inaccuracy, including rigorous new bounds, an asymptotic analysis and the identification of a universal law. The asymptotic theory relies on corrections to the central and other limit theorems, and thus a role is played by stable (Levy) probability distributions. [Phys. Rev. Lett. 89, 180602 (2002)]
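
    A quick numeric illustration of this finite-sampling error, assuming a Gaussian distribution of ΔE (in units of kT) so the exact answer ΔF = μ − σ²/2 is known; the parameters are arbitrary choices for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    mu, sigma = 0.0, 2.0              # Delta E ~ N(mu, sigma^2), in units of kT
    dF_exact = mu - sigma**2 / 2      # exact result for the Gaussian case

    for n in (10, 100, 1000):
        dE = rng.normal(mu, sigma, (2000, n))           # 2000 independent runs
        dF_hat = -np.log(np.mean(np.exp(-dE), axis=1))  # exponential average
        bias = dF_hat.mean() - dF_exact
        print(f"N={n:5d}: mean estimate {dF_hat.mean():+.3f}, "
              f"systematic error {bias:+.3f} (exact {dF_exact:+.3f})")
    ```

    The estimator is biased high because a finite sample rarely catches the dominant low-ΔE tail, and the bias shrinks only slowly with N.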

  10. Optimal convex error estimators for classification

    Microsoft Academic Search

    Chao Sima; Edward R. Dougherty

    A cross-validation error estimator is obtained by repeatedly leaving out some data points, deriving classifiers on the remaining points, computing errors for these classifiers on the left-out points, and then averaging these errors. The 0.632 bootstrap estimator is obtained by averaging the errors of classifiers designed from points drawn with replacement and then taking a convex combination of this

  11. Systematic errors in current quantum state tomography tools

    E-print Network

    Christian Schwemmer; Lukas Knips; Daniel Richart; Tobias Moroder; Matthias Kleinmann; Otfried Gühne; Harald Weinfurter

    2014-07-22

    Common tools for obtaining physical density matrices in experimental quantum state tomography are shown here to cause systematic errors. For example, using maximum likelihood or least squares optimization for state reconstruction, we observe a systematic underestimation of the fidelity and an overestimation of entanglement. A solution for this problem can be achieved by a linear evaluation of the data, yielding reliable and computationally simple bounds, including error bars.

  12. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.

  13. Systematic reviews, systematic error and the acquisition of clinical knowledge

    Microsoft Academic Search

    Steffen Mickenautsch

    2010-01-01

    BACKGROUND: Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types

  14. Estimated Errors in |Vcd|/|Vcs| from semileptonic D decays

    SciTech Connect

    Simone, J. N.; El-Khadra, A. X.; Hashimoto, S.; Kronfeld, A. S.; Mackenzie, P. B.; Ryan, S. M.

    1998-11-01

    We estimate statistical and systematic errors in the extraction of the CKM ratio |V_cd|/|V_cs| from exclusive D-meson semileptonic decays using lattice QCD and anticipated new experimental results.

  15. Some Error Estimates for the Box Method

    Microsoft Academic Search

    Randolph E. Bank; Donald J. Rose

    1987-01-01

    We define and analyze several variants of the box method for discretizing elliptic boundary value problems in the plane. Our estimates show the error to be comparable to a standard Galerkin finite element method using piecewise linear polynomials. Key words: box method, error bounds, piecewise linear triangular finite elements. AMS subject classifications: 65M15, 65M60.

  16. Correction of Systematic Odometry Error with Road Surface Environment Map

    E-print Network

    Ohya, Akihisa

    A method for the correction of systematic odometry error is proposed. It uses a road surface environment map that describes the odometry error along a route. During autonomous running, the robot estimates its odometry error with this map and corrects its position

  17. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
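
    A rough sketch of the propagation idea, with one invented cubic power curve standing in for the paper's 28 fitted curves; the turbine parameters and the Weibull wind climate are assumptions of this illustration.

    ```python
    import numpy as np

    def power_kw(v):
        """Hypothetical 2 MW turbine: cubic ramp 3-12 m/s, cut-out at 25 m/s."""
        v = np.asarray(v, dtype=float)
        p = np.where(v < 3.0, 0.0,
                     np.where(v < 12.0, 2000.0 * ((v - 3.0) / 9.0) ** 3, 2000.0))
        return np.where(v > 25.0, 0.0, p)

    rng = np.random.default_rng(4)
    v_true = rng.weibull(2.0, 100000) * 8.0      # site wind speeds, m/s
    p_true = power_kw(v_true).mean()

    for rel_err in (0.05, 0.10):                 # anemometer error levels
        v_meas = v_true * (1.0 + rng.normal(0.0, rel_err, v_true.size))
        p_est = power_kw(v_meas).mean()
        print(f"{rel_err:4.0%} speed error -> "
              f"{abs(p_est - p_true) / p_true:5.1%} power estimate error")
    ```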

  18. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  19. Neutrino spectrum at the far detector systematic errors

    SciTech Connect

    Szleper, M.; Para, A.

    2001-10-01

    Neutrino oscillation experiments often employ two identical detectors to minimize errors due to an inadequately known neutrino beam. We examine various systematic effects related to the prediction of the neutrino spectrum in the 'far' detector on the basis of the spectrum observed at the 'near' detector. We propose a novel method for the derivation of the far detector spectrum. This method is less sensitive to the details of the understanding of the neutrino beam line and the hadron production spectra than the usually used 'double ratio' method, thus allowing the systematic errors to be reduced.

  20. Systematic capacitance matching errors and corrective layout procedures

    Microsoft Academic Search

    M. J. McNutt; S. LeMarquis; J. L. Dunkley

    1994-01-01

    Precise capacitor ratios are employed in a variety of analog and mixed signal integrated circuits. The use of identical unit capacitors to form larger capacitances can easily produce 1% accuracy, but, in many cases, 0.1% accuracy can provide important performance advantages. Unfortunately, the ultimate matching precision of the ratio is limited by a number of systematic and random error sources.

  1. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
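
    A sketch of the first (averaging) method on a problem whose Bayes error is known in closed form; the nearest-mean plug-in ensemble members and all parameters are assumptions of this illustration, not the article's experimental setup.

    ```python
    import numpy as np
    from math import erf, sqrt

    rng = np.random.default_rng(5)
    delta = 1.0                                  # class means at +/- delta
    bayes = 0.5 * (1.0 - erf(delta / sqrt(2)))   # exact Bayes error

    def make_data(n):
        y = rng.integers(0, 2, n)
        return rng.normal(np.where(y == 1, delta, -delta), 1.0), y

    x_tr, y_tr = make_data(200)
    x_te, _ = make_data(20000)

    posteriors = []
    for _ in range(25):                          # bootstrap ensemble members
        idx = rng.integers(0, x_tr.size, x_tr.size)
        xb, yb = x_tr[idx], y_tr[idx]
        m0, m1, s = xb[yb == 0].mean(), xb[yb == 1].mean(), xb.std()
        # Gaussian plug-in posterior for class 1 (equal priors)
        logit = (x_te * (m1 - m0) + (m0**2 - m1**2) / 2.0) / s**2
        posteriors.append(1.0 / (1.0 + np.exp(-logit)))

    p_avg = np.mean(posteriors, axis=0)          # averaged posterior estimates
    est = np.mean(np.minimum(p_avg, 1.0 - p_avg))
    print(f"true Bayes error {bayes:.3f}, ensemble estimate {est:.3f}")
    ```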

  2. Estimation of error bounds in geoacoustic inversions

    NASA Astrophysics Data System (ADS)

    Sanders, William

    2003-04-01

    Geoacoustic inversion has been shown to yield accurate estimates of effective ocean bottom parameters for simple environments when high fidelity models are used for both environmental deconstruction and signal propagation. But as the environment becomes more complex, some parameters are estimated with less certainty than others. Uncertainty, represented by errors in inverse problems, stems from measurement inaccuracies, model imperfections, and prior assumptions. Whereas the propagation of errors in inverse problems is not generally possible, the problem becomes tractable if the forward equation can be linearized about some (preferably the true) set of values and if, further, all errors are assumed Gaussian. Under these assumptions, the covariances of the a posteriori errors can be formulated, thus providing bounds on the uncertainty resulting from the inverse process. These are ultimately expressed in terms of bounds on measurement errors, modeling errors and the linearization of the forward model. This effort analyzes the errors involved in some well understood and benchmarked cases and compares results to other published analyses. This is done by utilizing a parabolic equation (PE) propagation model. Derivatives of the field with respect to the environmental variables are derived in order to calculate the error bounds.

  3. Systematic Continuum Errors in the Lyα Forest and the Measured Temperature-Density Relation

    SciTech Connect

    Lee, Khee-Gan, E-mail: lee@astro.princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

    2012-07-10

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ -0.1, while the error is increased to σ_γ ≈ 0.2, compared to σ_γ ≈ 0.1 in the absence of continuum errors.

  4. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

  5. Error Resilient Motion Estimation for Video Coding

    Microsoft Academic Search

    Wen-nung Lie; Zhi-wei Gao; Wei-chih Chen; Ping-chang Jui

    2006-01-01

    Due to the temporal prediction adopted in most video coding standards, errors that occur in the current frame will propagate to succeeding frames that refer to it. This causes substantial degradation in reconstructed video quality at the decoder side. In order to enhance the robustness of existing temporal prediction techniques, another prediction strategy, called error resilient motion estimation (ERME), to take both coding efficiency

  6. ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES

    SciTech Connect

    Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ 85734 (United States)

    2012-05-20

    In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, with which SFH measurements rely on evolved stars.

  7. Geodesy by radio interferometry: Effects of atmospheric modeling errors on estimates of baseline length

    Microsoft Academic Search

    J. L. Davis; T. A. Herring; I. I. Shapiro; A. E. E. Rogers; G. Elgered

    1985-01-01

    Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for ~8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere

  8. Effects of systematic sampling on satellite estimates of deforestation rates

    NASA Astrophysics Data System (ADS)

    Steininger, M. K.; Godoy, F.; Harper, G.

    2009-09-01

    Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1° intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1°, 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25° produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the sample size is very large, especially if any sub-national stratification of estimates is required.
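
    A toy version of this assessment, with a synthetic clumped deforestation map in place of the Amazon data; the map size, patch statistics, and sample-unit sizes below are invented for the sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N = 1000
    deforested = np.zeros((N, N), dtype=bool)
    for _ in range(60):                       # clumped deforestation patches
        r, c = rng.integers(0, N, 2)
        size = int(rng.integers(5, 40))
        deforested[r:r + size, c:c + size] = True
    true_frac = deforested.mean()

    def systematic_sample(spacing, unit=10):
        # one unit x unit sample square at each regular grid intersection
        vals = np.array([deforested[r:r + unit, c:c + unit].mean()
                         for r in range(0, N - unit, spacing)
                         for c in range(0, N - unit, spacing)])
        half_ci = 1.96 * vals.std(ddof=1) / np.sqrt(vals.size)  # 95% CI
        return vals.mean(), half_ci

    for spacing in (200, 100, 50):            # denser grids -> tighter CIs
        est, ci = systematic_sample(spacing)
        print(f"spacing {spacing:3d}: {est:.4f} +/- {ci:.4f}"
              f"  (true {true_frac:.4f})")
    ```

    As in the abstract, the confidence interval shrinks as the sampling grid densifies, and clumped land-cover change inflates the variance relative to random scatter.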

  9. Minimum Model Error Approach for Attitude Estimation

    Microsoft Academic Search

    John L. Crassidis; F. Landis Markley

    1997-01-01

    In this paper, an optimal batch estimator and smoother based on the Minimum Model Error (MME) approach is developed for three-axis stabilized spacecraft. The formulation described in this paper is shown using only attitude sensors (e.g., three-axis magnetometers, sun sensors, star trackers, etc). This algorithm accurately estimates the attitude of a spacecraft, and substantially smoothes noise associated with attitude sensor

  10. Error estimations of 3D digital image correlation measurements

    NASA Astrophysics Data System (ADS)

    Becker, Thomas; Splitthof, Karsten; Siebert, Thorsten; Kletting, Peter

    2006-08-01

    Systematic errors of digital image correlation (DIC) measurements are a limiting factor for the accuracy of the resulting quantities. A major source of systematic errors is the system calibration. We present a 3D digital image correlation system which provides error information not only for diverse error sources but also for the propagation of errors throughout the calculations to the resulting contours, displacements and strains. On the basis of this system we discuss error sources, error propagation and the impact on correlation results. Performance tests for studying the impact of calibration errors on the resulting data are shown.

  11. General Deming Regression for Estimating Systematic Bias and Its Confidence Interval in Method-Comparison Studies

    Microsoft Academic Search

    Robert F. Martin

    2000-01-01

    Background: Various forms of least-squares regression analyses are used to estimate average systematic error (bias) and its confidence interval in method-comparison studies. When assumptions that underlie a particular regression method are inappropriate for the data, errors in estimated statistics result. In this report, I present an improved method for regression analysis that is free from the usual simplifying assumptions and

  12. Galaxy assembly bias: a significant source of systematic error in the galaxy-halo relationship

    NASA Astrophysics Data System (ADS)

    Zentner, Andrew R.; Hearin, Andrew P.; van den Bosch, Frank C.

    2014-10-01

    Methods that exploit galaxy clustering to constrain the galaxy-halo relationship, such as the halo occupation distribution (HOD) and conditional luminosity function (CLF), assume halo mass alone suffices to determine a halo's galaxy content. Yet, halo clustering strength depends upon properties other than mass, such as formation time, an effect known as assembly bias. If galaxy characteristics are correlated with these auxiliary halo properties, the basic assumption of standard HOD/CLF methods is violated. We estimate the potential for assembly bias to induce systematic errors in inferred halo occupation statistics. We construct realistic mock galaxy catalogues that exhibit assembly bias as well as companion mock catalogues with identical HODs, but with assembly bias removed. We fit HODs to the galaxy clustering in each catalogue. In the absence of assembly bias, the inferred HODs describe the true HODs well, validating the methodology. However, in all cases with assembly bias, the inferred HODs exhibit significant systematic errors. We conclude that the galaxy-halo relationship inferred from galaxy clustering is subject to significant systematic errors induced by assembly bias. Efforts to model and/or constrain assembly bias should be priorities as assembly bias is a threatening source of systematic error in galaxy evolution and precision cosmology studies.

  13. Pointwise error estimates for a streamline diffusion finite element scheme

    Microsoft Academic Search

    Koichi Niijima

    1989-01-01

    Pointwise error estimates for a streamline diffusion scheme for solving a model convection-dominated singularly perturbed convection-diffusion problem are given. These estimates improve the pointwise error estimates obtained by Johnson et al. [5].

  14. Density Estimation Framework for Model Error Assessment

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Liu, Z.; Najm, H. N.; Safta, C.; van Bloemen Waanders, B.; Michelsen, H. A.; Bambha, R.

    2014-12-01

    In this work we highlight the importance of model error assessment in physical model calibration studies. Conventional calibration methods often assume the model is perfect and account for data noise only. Consequently, the estimated parameters typically have biased values that implicitly compensate for model deficiencies. Moreover, improving the amount and the quality of data may not improve the parameter estimates since the model discrepancy is not accounted for. In state-of-the-art methods model discrepancy is explicitly accounted for by enhancing the physical model with a synthetic statistical additive term, which allows appropriate parameter estimates. However, these statistical additive terms do not increase the predictive capability of the model because they are tuned for particular output observables and may even violate physical constraints. We introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations for the purpose of achieving meaningful predictions that are both consistent with the data spread and appropriately disambiguate model and data errors. Here we cast model parameters as random variables, embedding the calibration problem within a density estimation framework. Further, we calibrate for the parameters of the joint input density. The likelihood function for the associated inverse problem is degenerate, therefore we use Approximate Bayesian Computation (ABC) to build prediction-constraining likelihoods and illustrate the strengths of the method on synthetic cases. We also apply the ABC-enhanced density estimation to the TransCom 3 CO2 intercomparison study (Gurney, K. R., et al., Tellus, 55B, pp. 555-579, 2003) and calibrate 15 transport models for regional carbon sources and sinks given atmospheric CO2 concentration measurements.

  15. An unbiased estimator of peculiar velocity with Gaussian distributed errors for precision cosmology

    NASA Astrophysics Data System (ADS)

    Watkins, Richard; Feldman, Hume A.

    2015-06-01

    We introduce a new estimator of the peculiar velocity of a galaxy or group of galaxies from redshift and distance estimates. This estimator results in peculiar velocity estimates which are statistically unbiased and have Gaussian distributed errors, thus complying with the assumptions of analyses that rely on individual peculiar velocities. We apply this estimator to the SFI++ and the Cosmicflows-2 catalogues of galaxy distances and, since peculiar velocity estimates of distant galaxies are error dominated, examine their error distributions. The adoption of the new estimator significantly improves the accuracy and validity of studies of the large-scale peculiar velocity field that assume Gaussian distributed velocity errors and eliminates potential systematic biases, thus helping to bring peculiar velocity analysis into the era of precision cosmology. In addition, our method of examining the distribution of velocity errors should provide a useful check of the statistics of large peculiar velocity catalogues, particularly those that are compiled out of data from multiple sources.
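
    A sketch contrasting the conventional estimator v = cz − H0·d with a logarithmic form of the kind the abstract motivates; the exact expression below is an assumption for illustration, not necessarily the paper's definition.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    H0 = 70.0                      # km/s/Mpc
    d_true, v_true = 100.0, 300.0  # Mpc; km/s true peculiar velocity
    cz = H0 * d_true + v_true      # observed redshift velocity (noise-free here)

    # distance estimates carry roughly lognormal (fractional) errors, ~20%
    d_est = d_true * np.exp(rng.normal(0.0, 0.20, 100000))

    v_naive = cz - H0 * d_est                  # conventional estimator
    v_log = cz * np.log(cz / (H0 * d_est))     # log-form estimator (assumed)

    for name, v in (("naive", v_naive), ("log  ", v_log)):
        print(f"{name}: mean {v.mean():7.1f} km/s, "
              f"median {np.median(v):7.1f} km/s  (true {v_true:.0f})")
    ```

    The naive estimator inherits the skewed, biased distance errors, while the log form converts them into nearly symmetric, Gaussian velocity errors, which is the property the abstract emphasizes.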

  16. Drawdown Estimation and Quantification of Error

    NASA Astrophysics Data System (ADS)

    Halford, K. J.

    2005-12-01

    Drawdowns during aquifer tests can be obscured by barometric pressure changes, tides, regional pumping, and recharge events in the water-level record. These natural and man-induced stresses can create water-level fluctuations that must be removed from observed water levels before pumping-induced drawdowns can be estimated accurately. Simple models have been developed to estimate non-pumping water levels during aquifer tests. Together these models produce what is referred to here as the synthetic water level. The synthetic water level is the sum of individual time-series models of barometric pressure, tidal potential, or background water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function in an Excel spreadsheet. The root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were greater than or equal to four times prediction periods.
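
    A minimal sketch of the synthetic-water-level idea (here with NumPy/SciPy rather than the Excel spreadsheet used in the record, and with invented stand-in series): amplitudes and phases of barometric and tidal components plus a background trend are fitted by least squares over a period unaffected by pumping.

      import numpy as np
      from scipy.optimize import least_squares

      t = np.arange(0.0, 240.0)                              # hourly pre-test samples
      wl_obs = (0.4 * np.sin(2 * np.pi * (t - 1.0) / 24.0)   # barometric response
                + 0.1 * np.sin(2 * np.pi * t / 12.42)        # tidal response
                + 0.002 * t)                                 # background trend

      def synthetic(p, t):
          a1, ph1, a2, ph2, c0, c1 = p
          return (a1 * np.sin(2 * np.pi * (t - ph1) / 24.0)
                  + a2 * np.sin(2 * np.pi * (t - ph2) / 12.42)
                  + c0 + c1 * t)

      fit = least_squares(lambda p: synthetic(p, t) - wl_obs,
                          x0=[0.1, 0.0, 0.1, 0.0, 0.0, 0.0])
      print(np.sqrt(np.mean((synthetic(fit.x, t) - wl_obs)**2)))   # fitting RMSE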

  17. Systematic errors in free energy perturbation calculations due to a finite sample of configuration space: Sample-size hysteresis

    SciTech Connect

    Wood, R.H.; Muehlbauer, W.C.F. (Univ. of Delaware, Newark (United States)); Thompson, P.T. (Swarthmore Coll., PA (United States))

    1991-08-22

    Although the free energy perturbation procedure is exact when an infinite sample of configuration space is used, for finite sample size there is a systematic error resulting in hysteresis for forward and backward simulations. The qualitative behavior of this systematic error is first explored for a Gaussian distribution, then a first-order estimate of the error for any distribution is derived. To first order the error depends only on the fluctuations in the sample of potential energies, ΔE, and the sample size, n, but not on the magnitude of ΔE. The first-order estimate of the systematic sample-size error is used to compare the efficiencies of various computing strategies. It is found that slow-growth, free energy perturbation calculations will always have lower errors from this source than window-growth, free energy perturbation calculations for the same computing effort. The systematic sample-size errors can be entirely eliminated by going to thermodynamic integration rather than free energy perturbation calculations. When ΔE is a very smooth function of the coupling parameter, λ, thermodynamic integration with a relatively small number of windows is the recommended procedure because the time required for equilibration is reduced with a small number of windows. These results give a method of estimating this sample-size hysteresis during the course of a slow-growth, free energy perturbation run. This is important because in these calculations time-lag and sample-size errors can cancel, so that separate methods of estimating and correcting for each are needed. When dynamically modified window procedures are used, it is recommended that the estimated sample-size error be kept constant, not that the magnitude of ΔE be kept constant. Tests on two systems showed a rather small sample-size hysteresis in slow-growth calculations except in the first stages of creating a particle, where both fluctuations and sample-size hysteresis are large.
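
    A short numerical check of the first-order result, assuming the bias takes the delta-method form sigma^2/(2*n*kT) for the forward estimate (the exact expression in the paper may differ in detail). The Gaussian ΔE case has a closed-form free energy, so the finite-sample bias can be measured directly; agreement is to first order in (sigma/kT)^2.

      import numpy as np

      rng = np.random.default_rng(2)
      kT, mu, sigma, n = 1.0, 5.0, 0.5, 10     # dE ~ N(mu, sigma^2); n samples
      dA_exact = mu - sigma**2 / (2.0 * kT)    # closed form for Gaussian dE

      est = [-kT * np.log(np.mean(np.exp(-rng.normal(mu, sigma, n) / kT)))
             for _ in range(100_000)]
      print(np.mean(est) - dA_exact)           # observed forward-direction bias
      print(sigma**2 / (2 * n * kT))           # first-order prediction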

  18. Effects of parameter error on Cooper-Jacob drawdown estimates

    NASA Astrophysics Data System (ADS)

    Edwards, David A.

    2010-08-01

    Parameters employed in the Cooper-Jacob equation to describe drawdown are transmissivity, storativity, radial distance, time and pumping rate. An approach is described for quantifying how error or uncertainty in any one of the parameters used causes error in estimated drawdown. Dimensionless fractional error in estimated drawdown is expressed quantitatively as a function of (1) dimensionless fractional error of a given parameter, and (2) dimensionless argument of the well function, u. Fractional error in estimated drawdown is a linear function of fractional error in pumping rate and, for any given value of u, a nonlinear function of fractional error in transmissivity, storativity, radial distance or time. Fractional error in estimated drawdown for a given fractional parameter error varies considerably between parameters. The greatest sensitivity is for transmissivity and flow rate. Sensitivity is less for radial distance and time, and even less for storativity. The magnitude of the fractional error in drawdown may be affected by the sign of the fractional parameter error.
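
    A sketch of the idea for readers who want numbers: perturb each Cooper-Jacob parameter by a fixed fraction and record the resulting fractional drawdown error. The formula s = Q/(4*pi*T) * ln(2.25*T*t/(r^2*S)) is the standard Cooper-Jacob approximation; the parameter values are illustrative.

      import numpy as np

      def drawdown(Q, T, S, r, t):
          """Cooper-Jacob: s = Q/(4*pi*T) * ln(2.25*T*t / (r**2 * S))."""
          return Q / (4 * np.pi * T) * np.log(2.25 * T * t / (r**2 * S))

      base = dict(Q=500.0, T=100.0, S=1e-4, r=50.0, t=1.0)   # illustrative units
      s0 = drawdown(**base)

      for name in base:                      # +10% error in one parameter at a time
          pert = dict(base)
          pert[name] *= 1.10
          print(name, round((drawdown(**pert) - s0) / s0, 3))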

  19. Performance of optimum combining with channel estimation errors

    Microsoft Academic Search

    Balkan Kecicioglu; Murat Torlak; Adnan Kavak

    2005-01-01

    Optimum combining (OC) is an effective way of suppressing interference in receive antenna diversity systems. In this paper, we examine the effect of channel estimation error on bit error probability (BEP) performance of optimum combining with BPSK modulation. The final expression is dependent on channel estimation error variance. Therefore, our analysis is independent of any specific estimation scheme. First,

  20. Standard errors of parameter estimates in the ETAS model

    E-print Network

    Schoenberg, Frederic Paik (Rick)

    Point process models ... The conventional standard error estimates based on the Hessian are shown not to be accurate when ... The standard errors of parameter estimates in the ETAS model are thus very important in determining

  1. Using ridge regression in systematic pointing error corrections

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
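
    For readers unfamiliar with the technique, a minimal ridge-regression sketch on a deliberately collinear design (not the Voyager data): the penalty alpha trades a small bias for a large reduction in coefficient variance.

      import numpy as np

      def ridge(X, y, alpha):
          """beta = (X'X + alpha*I)^-1 X'y: biased but variance-damped."""
          return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

      rng = np.random.default_rng(3)
      x1 = rng.normal(size=200)
      x2 = x1 + 1e-3 * rng.normal(size=200)      # nearly collinear regressors
      X = np.column_stack([x1, x2])
      y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=200)

      print(ridge(X, y, 0.0))    # ordinary least squares: wildly unstable
      print(ridge(X, y, 1.0))    # ridge: coefficients shrink toward ~[1, 1]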

  2. Target parameter and error estimation using magnetometry

    NASA Astrophysics Data System (ADS)

    Norton, S. J.; Witten, A. J.; Won, I. J.; Taylor, D.

    The problem of locating and identifying buried unexploded ordnance from magnetometry measurements is addressed within the context of maximum likelihood estimation. In this approach, the magnetostatic theory is used to develop data templates, which represent the modeled magnetic response of a buried ferrous object of arbitrary location, iron content, size, shape, and orientation. It is assumed that these objects are characterized both by a magnetic susceptibility representing their passive response to the earth's magnetic field and by a three-dimensional magnetization vector representing a permanent dipole magnetization. Analytical models were derived for four types of targets: spheres, spherical shells, ellipsoids, and ellipsoidal shells. The models can be used to quantify the Cramer-Rao (error) bounds on the parameter estimates. These bounds give the minimum variance in the estimated parameters as a function of measurement signal-to-noise ratio, spatial sampling, and target characteristics. For cases where analytic expressions for the Cramer-Rao bounds can be derived, these expressions prove quite useful in establishing optimal sampling strategies. Analytic expressions for various Cramer-Rao bounds have been developed for spherical- and spherical shell-type objects. A maximum likelihood estimation algorithm has been developed and tested on data acquired at the Magnetic Test Range at the Naval Explosive Ordnance Disposal Tech Center in Indian Head, Maryland. This algorithm estimates seven target parameters. These parameters are the three Cartesian coordinates (x, y, z) identifying the buried ordnance's location, the three Cartesian components of the permanent dipole magnetization vector, and the equivalent radius of the ordnance assuming it is a passive solid iron sphere.
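
    A sketch of how such Cramer-Rao bounds are computed in practice, using a schematic anomaly profile rather than the paper's full magnetostatic templates (the model function below is illustrative only): for Gaussian noise of standard deviation sigma, the bound is sqrt(diag(sigma^2 * inv(J'J))), with J the Jacobian of the model with respect to the parameters.

      import numpy as np

      def model(params, x):
          """Schematic anomaly profile (illustrative; not full dipole physics)."""
          x0, z0, m = params
          return m * z0 / ((x - x0)**2 + z0**2)**1.5

      def crb_std(params, x, sigma, eps=1e-6):
          """sqrt(diag(sigma^2 (J'J)^-1)): minimum std of unbiased estimates."""
          J = np.empty((x.size, params.size))
          for j in range(params.size):
              dp = np.zeros(params.size)
              dp[j] = eps * max(1.0, abs(params[j]))
              J[:, j] = (model(params + dp, x) - model(params - dp, x)) / (2 * dp[j])
          return np.sqrt(np.diag(sigma**2 * np.linalg.inv(J.T @ J)))

      x = np.linspace(-5.0, 5.0, 41)             # survey line (spatial sampling)
      print(crb_std(np.array([0.0, 1.0, 100.0]), x, sigma=1.0))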

  3. Systematic Error in UAV-derived Topographic Models: The Importance of Control

    NASA Astrophysics Data System (ADS)

    James, M. R.; Robson, S.; d'Oleire-Oltmanns, S.

    2014-12-01

    UAVs equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs) for a wide variety of geoscience applications. Image processing and DEM-generation is being facilitated by parallel increases in the use of software based on 'structure from motion' algorithms. However, recent work [1] has demonstrated that image networks from UAVs, for which camera pointing directions are generally near-parallel, are susceptible to producing systematic error in the resulting topographic surfaces (a vertical 'doming'). This issue primarily reflects error in the camera lens distortion model, which is dominated by the radial K1 term. Common data processing scenarios, in which self-calibration is used to refine the camera model within the bundle adjustment, can inherently result in such systematic error via poor K1 estimates. Incorporating oblique imagery into such data sets can mitigate error by enabling more accurate calculation of camera parameters [1]. Here, using a combination of simulated image networks and real imagery collected from a fixed wing UAV, we explore the additional roles of external ground control and the precision of image measurements. We illustrate similarities and differences between a variety of structure from motion software, and underscore the importance of well distributed and suitably accurate control for projects where a demonstrated high accuracy is required. [1] James & Robson (2014) Earth Surf. Proc. Landforms, 39, 1413-1420, doi: 10.1002/esp.3609

  4. Rigorous Error Estimates for Reynolds' Lubrication Approximation

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon

    2006-11-01

    Reynolds' lubrication equation is used extensively in engineering calculations to study flows between moving machine parts, e.g. in journal bearings or computer disk drives. It is also used extensively in micro- and bio-fluid mechanics to model creeping flows through narrow channels and in thin films. To date, the only rigorous justification of this equation (due to Bayada and Chambat in 1986 and to Nazarov in 1987) states that the solution of the Navier-Stokes equations converges to the solution of Reynolds' equation in the limit as the aspect ratio ε approaches zero. In this talk, I will show how the constants in these error bounds depend on the geometry. More specifically, I will show how to compute expansion solutions of the Stokes equations in a 2-d periodic geometry to arbitrary order and exhibit error estimates with constants which are either (1) given in the problem statement or easily computable from h(x), or (2) difficult to compute but universal (independent of h(x)). Studying the constants in the latter category, we find that the effective radius of convergence actually increases through 10th order, but then begins to decrease as the inverse of the order, indicating that the expansion solution is probably an asymptotic series rather than a convergent series.

  5. Systematic Review of the Balance Error Scoring System

    PubMed Central

    Bell, David R.; Guskiewicz, Kevin M.; Clark, Micheal A.; Padua, Darin A.

    2011-01-01

    Context: The Balance Error Scoring System (BESS) is commonly used by researchers and clinicians to evaluate balance. A growing number of studies are using the BESS as an outcome measure beyond the scope of its original purpose. Objective: To provide an objective systematic review of the reliability and validity of the BESS. Data Sources: PubMed and CINAHL were searched using the term Balance Error Scoring System from January 1999 through December 2010. Study Selection: Selection was based on establishment of the reliability and validity of the BESS. Research articles were selected if they established reliability or validity (criterion related or construct) of the BESS, were written in English, and used the BESS as an outcome measure. Abstracts were not considered. Results: Reliability of the total BESS score and individual stances ranged from poor to moderate to good, depending on the type of reliability assessed. The BESS has criterion-related validity with force plate measures; more difficult stances have higher agreement than do easier ones. The BESS is valid to detect balance deficits where large differences exist (concussion or fatigue). It may not be valid when differences are more subtle. Conclusions: Overall, the BESS has moderate to good reliability to assess static balance. Low levels of reliability have been reported by some authors. The BESS correlates with other measures of balance using testing devices. The BESS can detect balance deficits in participants with concussion and fatigue. BESS scores increase with age and with ankle instability and external ankle bracing. BESS scores improve after training. PMID:23016020

  6. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
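
    A toy sketch of the screening-and-spread recipe described above, applied bin-by-bin for brevity (the paper screens on a zonal-mean basis); all precipitation values are invented.

      import numpy as np

      # Invented zonal-mean precipitation (mm/day): GPCP base estimate plus
      # three other products; the last one is an outlier in bins 1 and 5.
      gpcp = np.array([2.0, 3.5, 5.0, 3.0, 1.5])
      others = np.array([[1.8, 3.2, 5.5, 2.7, 1.2],
                         [2.4, 3.9, 4.6, 3.3, 1.9],
                         [0.4, 3.6, 5.2, 3.1, 4.0]])

      s = []
      for j in range(gpcp.size):
          vals = [gpcp[j]] + [v for v in others[:, j]
                              if 0.5 * gpcp[j] <= v <= 1.5 * gpcp[j]]
          s.append(np.std(vals))             # spread of accepted products: bias error
      print(np.round(np.array(s) / gpcp, 3))  # s/m, the relative bias error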

  7. Odometry Error Covariance Estimation for Two Wheel Robot Vehicles

    Microsoft Academic Search

    Lindsay KLEEMAN

    1995-01-01

    This technical report develops a simple statistical error model for estimating position and orientation of a mobile robot using odometry. Once the errors are characterised, other sensor data can be combined sensibly in the estimation of position, using the Extended Kalman Filter (Kleeman, 1992 #100; Jazwinski, 1970 #117). A closed form error covariance matrix is developed for (i) straight lines

  8. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H. [Cornell Univ., Ithaca, NY (United States). School of Civil and Environmental Engineering; Gray, L.J. [Oak Ridge National Lab., TN (United States); Zarikian, V. [Univ. of Central Florida, Orlando, FL (United States). Dept. of Mathematics

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  9. ERROR ESTIMATION AND ADAPTIVITY FOR NONLINEAR FE ANALYSIS

    Microsoft Academic Search

    ANTONIO HUERTA; ANTONIO RODRÍGUEZ-FERRAN; PEDRO DÍEZ

    2002-01-01

    An adaptive strategy for nonlinear finite-element analysis, based on the combination of error estimation and h-remeshing, is presented. Its two main ingredients are a residual-type error estimator and an unstructured quadrilateral mesh generator. The error estimator is based on simple local computations over the elements and the so-called patches. In contrast to other residual estimators, no flux splitting is required.

  10. Systematic Errors in GNSS Radio Occultation Data - Part 2

    NASA Astrophysics Data System (ADS)

    Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc

    2014-05-01

    The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites indeed show an amazing consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since it does not need additional background information in its retrieval. Concurrent trends in water vapor could, however, masquerade as false trends in dry temperature. We analyzed this effect, and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights where specified values of differences between dry and physical temperature are encountered increase by about 150 m per decade, with little difference between the 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity to temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are no longer negligible. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows one to compute sensitivities to changes in atmospheric composition, where we found that the effect of the CO2 increase is currently almost exactly balanced by the counteracting effect of the concurrent O2 decrease.
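
    To make point (1) concrete, here is a sketch of a point-wise dry-temperature calculation using the classic Smith-Weintraub refractivity constants; the operational retrieval integrates the hydrostatic equation instead, so this is a simplification, and the input values are illustrative.

      k1, k2 = 77.6, 3.73e5              # "classic" constants (K/hPa, K^2/hPa)

      def refractivity(p, T, e):
          """Smith-Weintraub: N = k1*p/T + k2*e/T**2 (p, e in hPa; T in K)."""
          return k1 * p / T + k2 * e / T**2

      p, T, e = 300.0, 240.0, 0.05       # upper troposphere: little water vapour
      N = refractivity(p, T, e)          # "observed" refractivity
      T_dry = k1 * p / N                 # dry retrieval: assumes e = 0
      print(T, round(T_dry, 2), round(T - T_dry, 2))   # dry T is low where e > 0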

  11. Reducing Model Systematic Error over Tropical Pacific through SUMO Approach

    NASA Astrophysics Data System (ADS)

    Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Wiegerinck, Wim; Duane, Gregory

    2014-05-01

    Numerical models are key tools in the projection of future climate change. However, state-of-the-art general circulation models (GCMs) exhibit significant systematic errors, and large uncertainty exists in future climate projections because of limitations in parameterization schemes and numerical formulations. We take a novel approach and build a super model (i.e., an optimal combination of several models): we coupled two atmospheric GCMs (AGCM) with one ocean GCM (OGCM). The two AGCMs receive identical boundary conditions from the OGCM, while the OGCM is driven by a weighted flux combination from the AGCMs. The atmospheric models differ only in their convection scheme. As climate models show large sensitivity to convection schemes, this approach may be a good basis for constructing a super model. We performed experiments with a machine learning algorithm to adjust the coefficients. The coupling strategy is able to synchronize atmospheric variability of the two AGCMs in the tropics, particularly over the western equatorial Pacific, and produce reasonable climate variability. Furthermore, the model with optimal coefficients not only has good performance for surface temperature and precipitation, but also the positive Bjerknes feedback and the negative heat flux feedback match observations/reanalysis well, leading to a substantially improved simulation of ENSO.

  12. Reducing Model Systematic Error through Super Modelling - The Tropical Pacific

    NASA Astrophysics Data System (ADS)

    Shen, M.; Keenlyside, N. S.; Selten, F.; Wiegerinck, W.; Duane, G. S.

    2013-12-01

    Numerical models are key tools in the projection of future climate change. However, state-of-the-art general circulation models (GCMs) exhibit significant systematic errors, and large uncertainty exists in future climate projections because of limitations in parameterization schemes and numerical formulations. The general approach to tackling uncertainty is to use an ensemble of several different GCMs. However, ensemble results may smear out major variability, such as ENSO. Here we take a novel approach and build a super model (i.e., an optimal combination of several models): we coupled two atmospheric GCMs (AGCM) with one ocean GCM (OGCM). The two AGCMs receive identical boundary conditions from the OGCM, while the OGCM is driven by a weighted flux combination from the AGCMs. The atmospheric models differ only in their convection scheme. As climate models show large sensitivity to convection schemes, this approach may be a good basis for constructing a super model. We performed experiments with a machine learning algorithm to adjust the coefficients. The coupling strategy is able to synchronize atmospheric variability of the two AGCMs in the tropics, particularly over the western equatorial Pacific, and produce reasonable climate variability. Furthermore, the model with optimal coefficients not only has good performance for surface temperature and precipitation, but also the positive Bjerknes feedback and the negative heat flux feedback match observations/reanalysis well, leading to a substantially improved simulation of ENSO.

  13. Time-harmonic mesh adaption with error estimate based on the “local field error” approach

    Microsoft Academic Search

    Piergiorgio Alotto; Paola Girdinio; Paolo Molfino; Mario Nervi

    1997-01-01

    An implementation of a mesh adaption procedure for 2D “time-harmonic” solution, based on “h-refinement” criteria, using as error estimate the “local field error” approach, is presented. The error estimate is based on the solution, on an “element-by-element” basis, of an adjoint error problem cast in terms of magnetic induction. The “self-adapting” meshing procedure is iterative and is composed of a

  14. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  15. Estimating IMU heading error from SAR images.

    SciTech Connect

    Doerry, Armin Walter

    2009-03-01

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  16. Estimation of Model Error Variances During Data Assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick

    2003-01-01

    Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data assimilation system.

  17. Modeling Radar Rainfall Estimation Uncertainties: Random Error Model

    E-print Network

    AghaKouchak, Amir

    A. AghaKouchak, E. Habib, and A. Bárdossy. Abstract: Precipitation is a major input in hydrological models. Radar rainfall data obtained from reflectivity patterns are subject to various errors, such as errors in the reflectivity-rainfall (Z-R)

  18. NETRA: Interactive Display for Estimating Refractive Errors and Focal Range

    E-print Network

    Pamplona, Vitor F.

    We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to ...

  19. Probabilistic state estimation in regimes of nonlinear error growth

    E-print Network

    Lawson, W. Gregory, 1975-

    2005-01-01

    State estimation, or data assimilation as it is often called, is a key component of numerical weather prediction (NWP). Nearly all implementable methods of state estimation suitable for NWP are forced to assume that errors ...

  20. Estimation of Error Variance from Smallest Ordered Contrasts

    Microsoft Academic Search

    M. B. Wilk; R. Gnanadesikan; Anne E. Freeny

    1963-01-01

    Tables are given to facilitate the maximum likelihood estimation of error variance using the M smallest squares of K single degree of freedom contrasts. Some statistical properties of the estimate based on a random sampling experiment are given.

  1. Errors in estimation of the input signal for integrate-and-fire neuronal models

    NASA Astrophysics Data System (ADS)

    Bibbona, Enrico; Lansky, Petr; Sacerdote, Laura; Sirovich, Roberta

    2008-07-01

    Estimation of the input parameters of stochastic (leaky) integrate-and-fire neuronal models is studied. It is shown that the presence of a firing threshold brings a systematic error to the estimation procedure. Analytical formulas for the bias are given for two models, the randomized random walk and the perfect integrator. For the third model considered, the leaky integrate-and-fire model, the study is performed by using Monte Carlo simulated trajectories. The bias is compared with other errors appearing during the estimation, and it is documented that the effect of the bias has to be taken into account in experimental studies.
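
    A sketch of threshold-induced bias for the perfect integrator, exploiting the fact that its first-passage times to a threshold S are inverse-Gaussian distributed, so they can be sampled directly; the naive per-trial estimator S/T is biased upward by approximately sigma^2/S. This illustrates the phenomenon, not necessarily the exact estimators analyzed in the paper, and the parameter values are invented.

      import numpy as np

      rng = np.random.default_rng(4)
      mu, sigma, S = 1.0, 0.5, 1.0       # input drift, noise amplitude, threshold

      # First-passage times of the perfect integrator are inverse-Gaussian:
      # T ~ IG(mean = S/mu, shape = S^2/sigma^2).
      isi = rng.wald(S / mu, S**2 / sigma**2, size=200_000)

      print(np.mean(S / isi))            # naive per-trial estimate of mu: biased
      print(mu + sigma**2 / S)           # predicted value, mu + sigma^2/S
      print(S / np.mean(isi))            # using E[T] = S/mu instead: consistent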

  2. Errors in estimation of the input signal for integrate-and-fire neuronal models.

    PubMed

    Bibbona, Enrico; Lansky, Petr; Sacerdote, Laura; Sirovich, Roberta

    2008-07-01

    Estimation of the input parameters of stochastic (leaky) integrate-and-fire neuronal models is studied. It is shown that the presence of a firing threshold brings a systematic error to the estimation procedure. Analytical formulas for the bias are given for two models, the randomized random walk and the perfect integrator. For the third model considered, the leaky integrate-and-fire model, the study is performed by using Monte Carlo simulated trajectories. The bias is compared with other errors appearing during the estimation, and it is documented that the effect of the bias has to be taken into account in experimental studies. PMID:18763993

  3. CFD PPLTMG Using A Posteriori Error Estimates and Domain Decomposition

    E-print Network

    Holst, Michael J.

    Examines two approaches for mesh adaptation, using combinations of a posteriori error estimates and domain decomposition ... with mesh adaptation in each subdomain using the a posteriori local error estimator ... adaptive mesh generation which allows for the use of existing (sequential) adaptive mesh refinement

  4. On the error behavior in linear minimum variance estimation problems

    Microsoft Academic Search

    H. Sorenson

    1967-01-01

    For linear systems the error covariance matrix for the unbiased, minimum variance estimate of the state does not depend upon any specific realization of the measurement sequence. Thus it can be examined to determine the expected behavior of the error in the estimate before actually using the filter in practice. In this paper, the general linear system that contains both

  5. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
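
    The paper's contribution is closed-form, computationally efficient leave-one-out expressions; as a baseline for what those expressions avoid recomputing, here is the brute-force leave-one-out estimate for a two-class Fisher classifier (midpoint threshold, illustrative data).

      import numpy as np

      def fisher_direction(A, B):
          """w = Sw^-1 (mA - mB), the direction maximizing class separation."""
          Sw = np.cov(A.T) * (len(A) - 1) + np.cov(B.T) * (len(B) - 1)
          return np.linalg.solve(Sw, A.mean(0) - B.mean(0))

      def loo_error(X, y):
          """Brute-force leave-one-out estimate of the probability of error."""
          wrong = 0
          for i in range(len(X)):
              keep = np.arange(len(X)) != i
              A, B = X[keep][y[keep] == 0], X[keep][y[keep] == 1]
              w = fisher_direction(A, B)
              thr = 0.5 * (A.mean(0) + B.mean(0)) @ w     # midpoint threshold
              pred = 0 if X[i] @ w > thr else 1           # class 0 projects higher
              wrong += pred != y[i]
          return wrong / len(X)

      rng = np.random.default_rng(5)
      X = np.vstack([rng.normal([0, 0], 1.0, (50, 2)),
                     rng.normal([2, 1], 1.0, (50, 2))])
      y = np.r_[np.zeros(50), np.ones(50)]
      print(loo_error(X, y))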

  6. POINTWISE ERROR ESTIMATES FOR RELAXATION APPROXIMATIONS TO CONSERVATION LAWS

    E-print Network

    Soatto, Stefano

    Eitan Tadmor and Tao Tang. ... that the maximum principle can be applied. Key words: conservation laws, error estimates, relaxation method ... dissipative mechanism for discontinuities

  7. Total Error in PES Estimates of Population

    Microsoft Academic Search

    Mary H. Mulry; Bruce D. Spencer

    1991-01-01

    We describe a methodology for estimating the accuracy of dual systems estimates (DSE's) of population, census estimates of population, and estimates of undercount in the census. The DSE's are based on the census and a post-enumeration survey (PES). We apply the methodology to the 1988 dress rehearsal census of St. Louis and east-central Missouri and we discuss its applicability to

  8. Error Estimation by Series Association for Neural Network Systems

    Microsoft Academic Search

    Keehoon Kim; Eric B. Bartlett

    1995-01-01

    Estimation of confidence intervals for neural network outputs is important when the uncertainty of a neural network system must be addressed for safety or reliability. This paper presents a new approach for estimating confidence intervals, which can help users validate neural network outputs. The estimation of confidence intervals, called error estimation by series association, is performed by a supplementary neural

  9. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    PubMed

    Galli, C

    2001-07-01

    It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods. PMID:11377063
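
    The effect is easy to reproduce numerically. The sketch below assumes a rectangular spectral band and a linearly sloping extinction coefficient (both illustrative): the detector averages transmittance, not absorbance, so the apparent absorbance falls below the Beer-Lambert value as concentration grows.

      import numpy as np

      lam = np.linspace(-2.0, 2.0, 401)         # wavelength offsets within the band
      eps = 1.0e4 + 2.0e3 * lam                 # sloping extinction coefficient
      path = 1.0                                # cm

      for c in (1e-5, 5e-5, 1e-4):              # mol/L
          A_mono = np.mean(eps) * c * path      # Beer-Lambert at the band centre
          T_poly = np.mean(10.0 ** (-eps * c * path))  # detector averages T
          A_poly = -np.log10(T_poly)
          print(c, round(100 * (A_poly - A_mono) / A_mono, 2))   # % deviation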

  10. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping, and a less reliable grid-search method that handles the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but it does not account for interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.

  11. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and |t| ≤ T, these solutions agree with exact solutions up to errors whose norms are bounded by C exp(−γ/ħ) for some C and γ > 0. Under more restrictive hypotheses, we prove that for sufficiently small T′, |t| ≤ T′ |log(ħ)| implies that the norms of the errors are bounded by C′ exp(−γ′/ħ^σ) for some C′, γ′ > 0, and σ > 0.

  12. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first order systems is extended for error estimation of mechanical second order systems. Due to the special second order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that the estimator is independent of the used reduction technique. Therefore, it can be used for moment-matching based, Gramian matrices based or modal based model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.

  13. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  14. Tolerable systematic errors in Really Large Hadron Collider dipoles

    SciTech Connect

    Peggs, S.; Dell, F.

    1996-12-01

    Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half cell lengths much greater than 100 meters is justified. A convenient analytical model evaluating horizontal tune shifts is developed, and tested against a sample high field collider.

  15. Bias in parameter estimation of form errors

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min

    2014-09-01

    The surface form qualities of precision components are critical to their functionalities. In precision instruments, algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces will be over-weighted, making the fitted results biased and unstable. In this paper, orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and the orthogonal assessment is calculated analytically and represented as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e. each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally, numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
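
    A one-dimensional illustration of the distinction using scipy.odr (the paper treats surfaces and adds Voronoi-based weighting, which is omitted here): orthogonal distance regression measures residuals normal to the fitted curve, so steep regions are not over-weighted the way they are in z-direction fitting. Data and coefficients are invented.

      import numpy as np
      from scipy import odr

      def f(beta, x):                    # steep curved profile z = a*x^2 + b
          return beta[0] * x**2 + beta[1]

      rng = np.random.default_rng(6)
      x_true = np.linspace(0.0, 2.0, 50)
      x = x_true + rng.normal(0.0, 0.02, 50)            # noise in x as well
      z = f([1.0, 0.5], x_true) + rng.normal(0.0, 0.02, 50)

      out = odr.ODR(odr.Data(x, z), odr.Model(f), beta0=[0.5, 0.0]).run()
      print(out.beta)                    # residuals measured normal to the curve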

  16. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  17. Recursive state estimation: Unknown but bounded errors and system inputs

    Microsoft Academic Search

    F. Schweppe

    1968-01-01

    A method is discussed for estimating the state of a linear dynamic system using noisy observations, when the input to the dynamic system and the observation errors are completely unknown except for bounds on their magnitude or energy. The state estimate is actually a set in state space rather than a single vector. The optimum estimate is the smallest calculable

  18. Analysis of possible systematic errors in the Oslo method

    SciTech Connect

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K. [Department of Physics, University of Oslo, N-0316 Oslo (Norway); Krticka, M. [Institute of Particle and Nuclear Physics, Charles University, Prague (Czech Republic); Betak, E. [Institute of Physics SAS, 84511 Bratislava (Slovakia); Faculty of Philosophy and Science, Silesian University, 74601 Opava (Czech Republic); Schiller, A.; Voinov, A. V. [Department of Physics and Astronomy, Ohio University, Athens, Ohio 45701 (United States)

    2011-03-15

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  19. [Systematization of mistakes and errors of irradiation in radiotherapy].

    PubMed

    Va?nberg, M Sh

    1989-08-01

    The paper is devoted to the search for, and scientific substantiation of, practical measures to enhance the quality of irradiation in the radiotherapy of cancer patients, aimed at reducing the frequency of tumor recurrences, radiation reactions and complications. Inaccuracies, errors and faults in the irradiation of cancer patients were analyzed and classified with regard to topometric, dosimetric, technical, technological and other features. PMID:2770447

  20. General solution for linearized systematic error propagation in vehicle odometry

    Microsoft Academic Search

    Alonzo Kelly

    2001-01-01

    Vehicle odometry is a nonlinear dynamical system in echelon form. Accordingly, a general solution can be written by solving the nonlinear equations in the correct order. Another implication of this structure is that a completely general solution to the linearized (perturbative) dynamics exists. The associated vector convolution integral is the general relationship between the output error and both the input

  1. Systematic Error of the Nose-to-Nose Sampling-Oscilloscope Calibration

    Microsoft Academic Search

    Dylan F. Williams; Tracy S. Clement; Kate A. Remley; Paul D. Hale; Frans Verbeyst

    2007-01-01

    We use traceable swept-sine and electrooptic-sampling-system-based sampling-oscilloscope calibrations to measure the systematic error of the nose-to-nose calibration, and compare the results to simulations. Our results show that the errors in the nose-to-nose calibration are small at low frequencies, but significant at high frequencies.

  2. FORWARD AND RETRANSMITTED SYSTEMATIC LOSSY ERROR PROTECTION FOR IPTV VIDEO MULTICAST

    E-print Network

    Girod, Bernd

    Zhi Li et al. Keywords: error protection, error control. Advances in video and networking technologies have made ... and lightning strikes. Depending on the duration, impulse noise can be put into three categories, namely

  3. Difference image analysis: The interplay between the photometric scale factor and systematic photometric errors

    NASA Astrophysics Data System (ADS)

    Bramich, D. M.; Bachelet, E.; Alsubai, K. A.; Mislis, D.; Parley, N.

    2015-05-01

    Context. Understanding the source of systematic errors in photometry is essential for their calibration. Aims: We investigate how photometry performed on difference images can be influenced by errors in the photometric scale factor. Methods: We explore the equations for difference image analysis (DIA), and we derive an expression describing how errors in the difference flux, the photometric scale factor and the reference flux are propagated to the object photometry. Results: We find that the error in the photometric scale factor is important, and while a few studies have shown that it can be at a significant level, it is currently neglected by the vast majority of photometric surveys employing DIA. Conclusions: Minimising the error in the photometric scale factor, or compensating for it in a post-calibration model, is crucial for reducing the systematic errors in DIA photometry.
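
    A sketch of the propagation described above, assuming the common DIA photometry form f_tot = f_ref + f_diff/p (all values illustrative): the scale-factor term is the one the paper argues is usually neglected.

      import numpy as np

      def dia_flux_error(f_diff, p, s_diff, s_p, s_ref):
          """Propagate errors through f_tot = f_ref + f_diff / p."""
          return np.sqrt((s_diff / p)**2             # difference-flux term
                         + (f_diff / p**2 * s_p)**2  # scale-factor term
                         + s_ref**2)                 # reference-flux term

      # Illustrative numbers: a 1% scale-factor error on a large difference flux
      print(dia_flux_error(f_diff=5000.0, p=1.0, s_diff=50.0, s_p=0.01, s_ref=30.0))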

  4. Estimation of scattering error in spectrophotometric measurements of light absorption

    E-print Network

    Stramski, Dariusz

    Estimation of the scattering error in measurements of light absorption by aquatic particles with a typical laboratory double-beam spectrophotometer ... function of particles. We applied this method to absorption measurements made on marine phytoplankton

  5. DNA Sequence Error Rates in Genbank Records Estimated

    E-print Network

    Keightley, Peter

    ... variability, such as humans (Li and Sadler, 1991), or in interspecific comparisons between closely related taxa ... element sequences derived from cloning artifacts in GenBank, and estimated the error rate in large databases. Krawetz (1989), however, reports an error rate of only 0.29% for GenBank on the basis

  6. Development of Single Grid Error Estimation Technique Based on Error Transport Equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail; Hu, Gusheng

    2002-11-01

    This paper is a further development of a previous publication (Celik, I., and Hu, G., 2002, "Discretization Error Estimation Using Error Transport Equation," Proceedings of ASME FEDSM, paper no. 31372) on the topic of discretization error estimation using the error transport equation technique on a single-grid computation. The principal goal of this study is to develop a dynamic algorithm that can be used in conjunction with CFD simulation codes to quantify the discretization error in a selected process variable. The focus is on fluid dynamics applications where the conservation equations are solved for primary variables such as velocity, temperature and concentration, using a finite difference or finite volume approach. A transport equation for the error (referred to as the error transport equation, or ETE) was formulated. A generalized approach to derive the error source in the ETE is proposed, based on the modified equation concept and user access to the coefficient matrix used in the solver. This technique is further refined and more extensive examples are presented to elucidate problems encountered and provide possible remedies to improve error prediction. Comparison is always made with the exact solution for the spatiotemporal evolution of the error distribution. The results are assessed with respect to those obtained from the classical Richardson extrapolation method. Overall, this method can be viewed as an emerging alternative to Richardson extrapolation.
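
    For reference, the classical Richardson-extrapolation baseline against which the ETE results are assessed can be sketched in a few lines (three grids with constant refinement ratio r; the sample numbers are illustrative).

      import numpy as np

      def richardson(f1, f2, f3, r):
          """f1, f2, f3: fine/medium/coarse solutions; r: refinement ratio."""
          p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)   # observed order
          return p, (f2 - f1) / (r**p - 1.0)              # fine-grid error estimate

      print(richardson(0.9714, 0.9723, 0.9760, 2.0))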

  7. Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications 

    E-print Network

    Garcia, Tanya

    2012-10-19

    Many statistical models, like measurement error models, a general class of survival models, and a mixture data model with random censoring, are semiparametric, where interest lies in estimating finite-dimensional parameters in the presence

  8. Using doppler radar images to estimate aircraft navigational heading error

    DOEpatents

    Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)

    2012-07-03

    A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

  9. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have approximately 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and infer this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the error ec on the global temperature trend. In one path the entire error ec is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 (+/-) 0.04 K/decade during 1980 to 1998.

  10. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and infer this error. We find this error can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and the other is a drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The diurnal-cycle error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The calibration-drift error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the entire error is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 (+/-) 0.04 K/decade during 1980 to 1998.

  11. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  12. Optimal error regions for quantum state estimation

    E-print Network

    Jiangwei Shang; Hui Khoon Ng; Arun Sehrawat; Xikun Li; Berthold-Georg Englert

    2013-03-30

    Rather than point estimators, states of a quantum system that represent one's best guess for the given data, we consider optimal regions of estimators. As the natural counterpart of the popular maximum-likelihood point estimator, we introduce the maximum-likelihood region: the region of largest likelihood among all regions of the same size. Here, the size of a region is its prior probability. Another concept is the smallest credible region: the smallest region with pre-chosen posterior probability. For both optimization problems, the optimal region has constant likelihood on its boundary. We discuss criteria for assigning prior probabilities to regions, and illustrate the concepts and methods with several examples.

  13. Target parameter and error estimation using magnetometry

    Microsoft Academic Search

    S. J. Norton; A. J. Witten; I. J. Won; D. Taylor

    1994-01-01

    The problem of locating and identifying buried unexploded ordnance from magnetometry measurements is addressed within the context of maximum likelihood estimation. In this approach, the magnetostatic theory is used to develop data templates, which represent the modeled magnetic response of a buried ferrous object of arbitrary location, iron content, size, shape, and orientation. It is assumed that these objects are

  14. PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG

    SciTech Connect

    Mighell, Kenneth J. [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Plavchan, Peter [NASA Exoplanet Science Institute, California Institute of Technology, Pasadena, CA 91125 (United States)

    2013-06-15

    The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg^2 Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
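
    The quoted period-error model is easy to evaluate directly; the sketch below (Python rather than the paper's C) wraps it in a function, with the sample periods our own choice.

        import math

        def kebc_period_error(P_days):
            """Period error (days) from the PEC-based model quoted above:
            log10(sigma_P) = -5.8908 + 1.4425*(1 + log10(P)) for P < 62.5 d,
            with a floor of ~0.0144 d for longer periods."""
            if P_days >= 62.5:
                return 0.0144
            return 10.0 ** (-5.8908 + 1.4425 * (1.0 + math.log10(P_days)))

        for P in (0.5, 2.0, 10.0, 62.5):
            print(f"P = {P:6.2f} d  ->  sigma_P ~ {kebc_period_error(P):.6f} d")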

  15. Period Error Estimation for the Kepler Eclipsing Binary Catalog

    NASA Astrophysics Data System (ADS)

    Mighell, Kenneth J.; Plavchan, Peter

    2013-06-01

    The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg^2 Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.

  16. Systematic errors in extracting nucleon properties from lattice QCD

    E-print Network

    Stefano Capitani; Michele Della Morte; Bastian Knippschild; Hartmut Wittig

    2010-11-05

    Form factors of the nucleon have been extracted from experiment with high precision. However, lattice calculations have failed so far to reproduce the observed dependence of form factors on the momentum transfer. We have embarked on a program to thoroughly investigate systematic effects in lattice calculations of the required three-point correlation functions. Here we focus on the possible contamination from higher excited states and present a method which is designed to suppress them. Its effectiveness is tested for several baryonic matrix elements, different lattice sizes and pion masses.

  17. Error Analysis and Sampling Design for Ocean Flux Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Bellingham, J. G.; Davis, R. E.; Chavez, F.

    2006-12-01

    In this paper we present error analysis and sampling design for estimating flux of heat or other quantities (e.g., nitrate, oxygen) in the ocean using mobile or stationary platforms. Flux estimation requires sampling the current velocity and the flux variable (e.g., temperature for heat flux) along a boundary. When we run autonomous underwater vehicles (AUVs) on a boundary to take spatial samples, the ocean field evolves over time. This non-synoptic sampling leads to an estimation error of the flux. We formulate the estimation error as a function of the spatio-temporal variability of the studied ocean field, the cross-section area of the boundary, the number of deployed vehicles, and the vehicle speed. Based on the error metric, we design AUV sampling strategies for flux estimation. We also compare the flux estimation performance of using AUVs with that of using traditional mooring arrays. As an example, we study heat flux estimation using statistics from Monterey Bay and for various sampling configurations. The sampling requirement is determined by how fast the product of temperature and normal current velocity varies in time and space. We estimate the temporal and spatial scales of temperature and current velocity by measurements from moorings, bottom-mounted stations, ships, and AUVs. It is found that current velocity varies much faster than temperature in both temporal and spatial domains, hence the variability of their product is quite high. The consequence of this variability on heat flux estimation is presented.
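
    To make the estimated quantity concrete, here is a toy discretization of a boundary heat-flux estimate in Python; the seawater constants and the synthetic temperature/velocity samples are stand-ins for the Monterey Bay statistics used in the paper.

        import numpy as np

        rng = np.random.default_rng(7)
        rho, c_p = 1025.0, 3990.0          # seawater density (kg/m^3), heat capacity (J/kg/K)

        n_cells = 100                      # cells tiling the boundary cross-section
        area = np.full(n_cells, 1.0e4)     # cell areas (m^2)
        T = 12.0 + rng.normal(0.0, 0.5, n_cells)   # sampled temperature (deg C)
        v_n = rng.normal(0.02, 0.05, n_cells)      # sampled normal velocity (m/s)

        # Heat transport across the boundary, relative to a 0 deg C reference:
        heat_flux = np.sum(rho * c_p * T * v_n * area)
        print(f"heat flux ~ {heat_flux / 1e9:.1f} GW")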

  18. Reducing systematic centroid errors induced by fiber optic faceplates in intensified high-accuracy star trackers.

    PubMed

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  19. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  20. CFD PPLTMG: Using a posteriori error estimates and domain decomposition

    Microsoft Academic Search

    R. Bank; M. Holst; B. Mantel; J. Periaux; C. Zhou

    1998-01-01

    Abstract: This two-part paper examines two approaches for mesh adaptation, using combinations of a posteriori error estimates and domain decomposition. In the first part, we consider a domain decomposition method applied to the generalized Stokes problem, with mesh adaptation in each subdomain using the a posteriori local error estimator as adaptation indicator. We apply domain decomposition without overlapping, and the condition of compatibility on the interface is

  1. Some A Posteriori Error Estimators for Elliptic Partial Differential Equations

    Microsoft Academic Search

    Randolph E. Bank; Alan Weiser

    1985-01-01

    We present three new a posteriori error estimators in the energy norm for finite element solutions to elliptic partial differential equations. The estimators are based on solving local Neumann problems in each element. The estimators differ in how they enforce consistency of the Neumann problems. We prove that as the mesh size decreases, under suitable assumptions, two of the estimators approach upper bounds on

  2. Error Analysis and Sampling Design for Ocean Flux Estimation

    Microsoft Academic Search

    Y. Zhang; J. G. Bellingham; R. E. Davis; F. Chavez

    2006-01-01

    In this paper we present error analysis and sampling design for estimating flux of heat or other quantities (e.g., nitrate, oxygen) in the ocean using mobile or stationary platforms. Flux estimation requires sampling the current velocity and the flux variable (e.g., temperature for heat flux) along a boundary. When we run autonomous underwater vehicles (AUVs) on a boundary to take

  3. Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Sass, Daniel A.

    2010-01-01

    Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

  4. The Lowest Order Hadronic Contribution to the Muon $g-2$ Value with Systematic Error Correlations

    E-print Network

    D. H. Brown; W. A. Worstell

    1996-07-15

    We have performed a new evaluation of the hadronic contribution to $a_{\mu}=(g-2)/2$ of the muon with explicit correlations of systematic errors among the experimental data on $\sigma( e^+e^- \to hadrons )$. Our result for the lowest order hadronic vacuum polarization contribution is $a_{\mu}^{had} = 702.6(7.8)(14.0) \times 10^{-10}$, where the first error is statistical and the second is systematic. The total systematic error contributions from below and above $\sqrt{s} = 1.4$ GeV are $(13.1) \times 10^{-10}$ and $(5.1) \times 10^{-10}$ respectively, and are hence dominated by the low energy region. Therefore, new measurements of $\sigma( e^+e^- \to hadrons )$ below 1.4 GeV can significantly reduce the total error on $a_{\mu}^{had}$. In particular, the effect on the total errors of new hypothetical data with 3% statistical and 0.5 - 1.0% systematic errors is presented.
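
    A quick consistency check of the quoted systematic errors (assuming the two energy regions' contributions add in quadrature, which the abstract's numbers appear to support):

        import math

        sys_low, sys_high = 13.1, 5.1   # x 10^-10, below/above sqrt(s) = 1.4 GeV
        print(math.sqrt(sys_low**2 + sys_high**2))   # ~14.1, matching the quoted 14.0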

  5. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.

    SciTech Connect

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-08-25

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) independent slide pitch test by use of a non-tilted reference beam, (3) non-tilted reference test combined with a tilted sample, (4) penta-prism scanning mode without reference beam correction, (5) non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  6. Analysis and Optimization of Classifier Error Estimator Performance within a Bayesian Modeling Framework

    E-print Network

    Dalton, Lori Anne

    2012-07-16

    Training-data error estimation becomes mandatory, yet none of the popular error estimation techniques have been rigorously designed via statistical inference or optimization. In this investigation, we place classifier error estimation in a framework...

  7. Systematic Error Correction of Dynamical Seasonal Prediction of Sea Surface Temperature Using a Stepwise Pattern Project Method

    Microsoft Academic Search

    Jong-Seong Kug; June-Yi Lee; In-Sik Kang

    2008-01-01

    Every dynamical climate prediction model has significant errors in its mean state and anomaly field, thus degrading its performance in climate prediction. In addition to correcting the model's systematic errors in the mean state, it is also possible to correct systematic errors in the predicted anomalies by means of dynamical or statistical postprocessing. In this study, a new statistical model

  8. Geodesy by radio interferometry - Effects of atmospheric modeling errors on estimates of baseline length

    NASA Technical Reports Server (NTRS)

    Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.

    1985-01-01

    Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm, at all elevations down to 5 deg elevation, and introduces errors into the estimates of baseline length of less than about 1 cm, for the multistation intercontinental experiment analyzed here.
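
    The idea of a mapping function can be sketched in a few lines; the cosecant form below is the textbook leading-order approximation, not the improved function developed in the paper, and the zenith delay value is assumed.

        import math

        def slant_delay(zenith_delay_m, elevation_deg):
            """Toy mapping-function model: slant delay = zenith delay * m(e),
            with m(e) = 1/sin(e). The paper's mapping function refines this,
            agreeing with ray traces to < ~5 mm down to 5 deg elevation."""
            return zenith_delay_m / math.sin(math.radians(elevation_deg))

        zenith_delay = 2.3   # assumed total zenith delay (m)
        for elev in (90, 30, 10, 5):
            print(f"{elev:2d} deg -> {slant_delay(zenith_delay, elev):6.2f} m")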

  9. Mechanical temporal fluctuation induced distance and force systematic errors in Casimir force experiments

    NASA Astrophysics Data System (ADS)

    Lamoreaux, Steve; Wong, Douglas

    2015-06-01

    The basic theory of temporal mechanical fluctuation induced systematic errors in Casimir force experiments is developed, and applications of this theory to several experiments are reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness, for which an exact result requires an electromagnetic mode analysis, time dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data presented.

  10. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    NASA Astrophysics Data System (ADS)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  11. Iraq War mortality estimates: A systematic review

    Microsoft Academic Search

    Christine Tapp; Frederick M Burkle Jr; Kumanan Wilson; Tim Takaro; Gordon H Guyatt; Hani Amad; Edward J Mills

    2008-01-01

    BACKGROUND: In March 2003, the United States invaded Iraq. The subsequent number, rates, and causes of mortality in Iraq resulting from the war remain unclear, despite intense international attention. Understanding mortality estimates from modern warfare, where the majority of casualties are civilian, is of critical importance for public health and protection afforded under international humanitarian law. We aimed to review

  12. Efficient estimation of quantum error correction thresholds in the presence of errors outside the Clifford group

    NASA Astrophysics Data System (ADS)

    Gutierrez, Mauricio; Brown, Kenneth

    2015-03-01

    Classical simulations of noisy stabilizer circuits are often used to estimate the threshold of a quantum error-correcting code (QECC). It is common to model the noise as a depolarizing Pauli channel. However, it is not clear how sensitive a code's threshold is to the noise model, and whether or not a depolarizing channel is a good approximation for realistic errors. We have shown that, at the physical single-qubit level, efficient and more accurate approximations can be obtained. We now examine the feasibility of employing these approximations to obtain better estimates of a QECC's threshold. We calculate the level-1 pseudo-threshold for the Steane [[7,1,3]] code.
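
    For reference, the depolarizing Pauli channel the abstract mentions as the common noise model can be sampled in a few lines; the fault probability is an arbitrary example value.

        import random

        def depolarizing_pauli(p):
            """Sample a single-qubit fault: identity with probability 1 - p,
            otherwise X, Y, or Z with probability p/3 each."""
            if random.random() < 1.0 - p:
                return "I"
            return random.choice(["X", "Y", "Z"])

        counts = {"I": 0, "X": 0, "Y": 0, "Z": 0}
        for _ in range(100_000):
            counts[depolarizing_pauli(0.01)] += 1
        print(counts)   # roughly 99% I and ~0.33% each of X, Y, Z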

  13. Minimum Mean Square Error Estimation Under Gaussian Mixture Statistics

    E-print Network

    Flam, John T; Kansanen, Kimmo; Ekman, Torbjorn

    2011-01-01

    This paper investigates the minimum mean square error (MMSE) estimation of x, given the observation y = Hx + n, when x and n are independent and Gaussian Mixture (GM) distributed. The introduction of GM distributions represents a generalization of the more familiar and simpler Gaussian signal and Gaussian noise instance. We present the necessary theoretical foundation and derive the MMSE estimator for x in a closed form. Furthermore, we provide upper and lower bounds for its mean square error (MSE). These bounds are validated through Monte Carlo simulations.
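
    A sketch of the closed form described here, under the usual mixture algebra: the posterior is again a Gaussian mixture over all (signal, noise) component pairs, and E[x|y] is their weighted conditional mean. The example mixture at the bottom is invented.

        import numpy as np
        from scipy.stats import multivariate_normal

        def gm_mmse(y, H, x_mix, n_mix):
            """MMSE estimate of x from y = Hx + n, with x and n independent
            Gaussian mixtures given as lists of (weight, mean, cov)."""
            weights, cond_means = [], []
            for a, mu, C in x_mix:
                for b, nu, R in n_mix:
                    S = H @ C @ H.T + R              # innovation covariance
                    K = C @ H.T @ np.linalg.inv(S)   # component gain
                    weights.append(a * b * multivariate_normal.pdf(y, H @ mu + nu, S))
                    cond_means.append(mu + K @ (y - H @ mu - nu))
            w = np.array(weights)
            w /= w.sum()
            return sum(wi * m for wi, m in zip(w, cond_means))

        H = np.eye(2)
        x_mix = [(0.5, np.array([-2.0, 0.0]), np.eye(2)),
                 (0.5, np.array([+2.0, 0.0]), np.eye(2))]
        n_mix = [(1.0, np.zeros(2), 0.5 * np.eye(2))]
        print(gm_mmse(np.array([1.5, 0.2]), H, x_mix, n_mix))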

  14. Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2015-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm^3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm^3 and smaller may require more than two cores.
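
    The reported effect of targeting error on sampling probability can be mimicked with a small Monte Carlo experiment; the spherical tumor, the isotropic error, and a point (rather than cylindrical-core) needle are simplifying assumptions relative to the study's irregular contours and anisotropic errors.

        import numpy as np

        def hit_probability(tumor_radius_mm, rms_err_mm, n_trials=200_000, seed=1):
            """P(needle tip lands inside a spherical tumor) under zero-mean
            isotropic Gaussian targeting error with the given 3-D RMS."""
            rng = np.random.default_rng(seed)
            sigma = rms_err_mm / np.sqrt(3.0)    # isotropic: RMS^2 = 3 sigma^2
            offsets = rng.normal(0.0, sigma, (n_trials, 3))
            return np.mean(np.linalg.norm(offsets, axis=1) < tumor_radius_mm)

        # ~1 cm^3 tumor (radius ~6.2 mm) with the paper's 3.5 mm RMS system error:
        print(hit_probability(6.2, 3.5))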

  15. Error estimation for the linearized auto-localization algorithm.

    PubMed

    Guevara, Jorge; Jiménez, Antonio R; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter α is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
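
    The first-order Taylor propagation used here is generic and easy to sketch; the finite-difference Jacobian and the distance example below are our illustration, not the LAL equations themselves.

        import numpy as np

        def propagate_cov(f, x0, cov_x, eps=1e-6):
            """First-order error propagation: Cov[f(x)] ~ J Cov[x] J^T,
            with the Jacobian J of f at x0 built by finite differences."""
            x0 = np.asarray(x0, dtype=float)
            f0 = np.atleast_1d(f(x0))
            J = np.zeros((f0.size, x0.size))
            for k in range(x0.size):
                dx = np.zeros_like(x0)
                dx[k] = eps
                J[:, k] = (np.atleast_1d(f(x0 + dx)) - f0) / eps
            return J @ cov_x @ J.T

        dist = lambda p: np.hypot(p[0], p[1])   # distance from a noisy 2-D point
        cov = propagate_cov(dist, [3.0, 4.0], np.diag([0.01, 0.01]))
        print(np.sqrt(cov[0, 0]))               # ~0.1 propagated standard deviation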

  16. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter α is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

  17. Error propagation and scaling for tropical forest biomass estimates.

    PubMed Central

    Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando

    2004-01-01

    The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
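
    Two of the four error terms can be illustrated with a Monte Carlo sketch; the allometry and all error magnitudes below are invented for illustration and are not the paper's fitted values.

        import numpy as np

        rng = np.random.default_rng(0)

        def plot_agb_draw(dbh_cm):
            """One realization of plot AGB (kg) with (i) tree measurement error
            on diameter and (ii) allometric-model residual error."""
            dbh = dbh_cm + rng.normal(0.0, 0.1, dbh_cm.size)   # (i) tape error
            ln_agb = -2.0 + 2.4 * np.log(dbh)                  # assumed allometry
            ln_agb += rng.normal(0.0, 0.3, dbh.size)           # (ii) model residual
            return np.exp(ln_agb).sum()

        dbh_sample = rng.uniform(10.0, 60.0, 250)              # one synthetic plot
        draws = np.array([plot_agb_draw(dbh_sample) for _ in range(2000)])
        print(f"AGB = {draws.mean() / 1e3:.1f} +/- {draws.std() / 1e3:.1f} Mg")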

  18. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun

    2015-06-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).

  19. Estimating the 4DVAR analysis error of GODAE products

    Microsoft Academic Search

    Brian S. Powell; Andrew M. Moore

    2009-01-01

    We explore the ocean circulation estimates obtained by assimilating observational products made available by the Global Ocean Data Assimilation Experiment (GODAE) and other sources in an incremental, four-dimensional variational data assimilation system for the Intra-Americas Sea. Estimates of the analysis error (formally, the inverse Hessian matrix) are computed during the assimilation procedure. Comparing the impact of differing sea surface height

  20. Forgotten Secret Recovering Scheme and Fuzzy Vault Scheme Constructed Based on Systematic Error-Correcting Codes

    E-print Network

    Forgotten Secret Recovering Scheme and Fuzzy Vault Scheme Constructed Based on Systematic Error-Correcting Codes ... Fuzzy Vault Scheme (FVS). Keywords: Forgotten Secret Recovering Scheme, Fuzzy Vault Scheme, Reed ... an interesting problem referred to as the "Movie Lover's Problem (MLP)" [10]. They presented a Fuzzy Vault Scheme

  1. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and range rate. The observation errors considered are bias, timing, transit time, tracking station location, polar motion, solid earth tidal displacement, ocean loading displacement, tropospheric and ionospheric refraction, and space plasma. The force model elements considered are the earth's potential, the gravitational constant, solid earth tides, solar radiation pressure, earth reflected radiation, atmospheric drag, and thrust errors. The errors are propagated along the satellite orbital path. The ORAN program is written in FORTRAN IV and ASSEMBLER for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 570K of 8-bit bytes. The ORAN program was developed in 1973 and was last updated in 1980.
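
    The "second component" ORAN evaluates, errors in adjusted parameters caused by errors in unadjusted ones, is a standard consider-covariance calculation; a generic sketch (not ORAN's actual algorithm) looks like this.

        import numpy as np

        def unadjusted_param_cov(A, B, cov_z):
            """If y = A x + B z + noise but only x is estimated while z is held
            at its assumed value, the least-squares error in x due to z has
            covariance S cov_z S^T with sensitivity S = (A^T A)^-1 A^T B."""
            S = np.linalg.solve(A.T @ A, A.T @ B)
            return S @ cov_z @ S.T

        # Toy problem: adjust a line's intercept and slope while a quadratic
        # term (standing in for an unadjusted force-model parameter) is fixed:
        t = np.linspace(0.0, 1.0, 50)
        A = np.column_stack([np.ones_like(t), t])
        B = (t ** 2).reshape(-1, 1)
        print(unadjusted_param_cov(A, B, np.array([[0.04]])))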

  2. Time reversal in thermoacoustic tomography - an error estimate

    E-print Network

    Hristova, Yulia

    2008-01-01

    The time reversal method in thermoacoustic tomography is used for approximating the initial pressure inside a biological object using measurements of the pressure wave made outside the object. This article presents error estimates for the time reversal method in the cases of variable, non-trapping sound speeds.

  3. MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS

    E-print Network

    Hartmann, Ralf

    MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. RALF HARTMANN. Abstract. Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients ... AMS subject classifications. 65N12, 65N15, 65N30. 1. Introduction. In aerodynamic computations like compressible

  4. Error estimation and adaptive mesh refinement for aerodynamic flows

    E-print Network

    Hartmann, Ralf

    Error estimation and adaptive mesh refinement for aerodynamic flows. Ralf Hartmann, Joachim Held ... goal-oriented mesh refinement for single and multiple aerodynamic force coefficients as well as residual-based mesh refinement applied to various three-dimensional laminar and turbulent aerodynamic test cases defined

  5. MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS

    E-print Network

    Hartmann, Ralf

    MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. RALF HARTMANN. Abstract. Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients ... compressible Navier-Stokes equations. AMS subject classifications. 65N12, 65N15, 65N30. 1. Introduction. In aerodynamic

  6. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  7. Bolstered Error Estimation

    E-print Network

    Braga-Neto, Ulisses

    ... Cross-Validation, Bootstrap. Preprint submitted to Elsevier Science, 29 September 2003. 1. Introduction. Given a classifier ... in contrast to other resampling techniques, such as the bootstrap. We provide an extensive simulation study comparing the proposed method with resubstitution, cross-validation, and bootstrap error estimation

  8. Note: Statistical errors estimation for Thomson scattering diagnostics

    SciTech Connect

    Maslov, M.; Beurskens, M. N. A.; Flanagan, J.; Kempenaars, M. [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Collaboration: JET-EFDA Contributors

    2012-09-15

    A practical way of estimating statistical errors of a Thomson scattering diagnostic measuring plasma electron temperature and density is described. Analytically derived expressions are successfully tested with Monte Carlo simulations and implemented in an automatic data processing code of the JET LIDAR diagnostic.

  9. Condition and Error Estimates in Numerical Matrix Computations

    SciTech Connect

    Konstantinov, M. M. [University of Architecture, Civil Engineering and Geodesy, 1046 Sofia (Bulgaria); Petkov, P. H. [Technical University of Sofia, 1000 Sofia (Bulgaria)

    2008-10-30

    This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

  10. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2008-03-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  11. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  12. Systematic errors in ground heat flux estimation and their correction

    E-print Network

    Gentine, P.

    Incoming radiation forcing at the land surface is partitioned among the components of the surface energy balance in varying proportions depending on the time scale of the forcing. Based on a land-atmosphere analytic continuum ...

  13. Evaluation of human error estimation for nuclear power plants

    SciTech Connect

    Haney, L.N.; Blackman, H.S.

    1987-01-01

    The dominant risk for severe accident occurrence in nuclear power plants (NPPs) is human error. The US Nuclear Regulatory Commission (NRC) sponsored an evaluation of Human Reliability Analysis (HRA) techniques for estimation of human error in NPPs. Twenty HRA techniques identified by a literature search were evaluated with criteria sets designed for that purpose and categorized. Data were collected at a commercial NPP with operators responding in walkthroughs of four severe accident scenarios and full scope simulator runs. Results suggest a need for refinement and validation of the techniques. 19 refs.

  14. Error of estimation of community reaction to aircraft noise

    NASA Astrophysics Data System (ADS)

    Fidell, Sanford

    2005-09-01

    Errors and uncertainties of measurement, estimation, and prediction of aircraft noise exposure and of community reaction to it can be so great as to render dosage-effect analyses of community response to aircraft noise unreliable. Biases and other errors may arise during routine monitoring of aircraft noise, via oversimplification and misrepresentation of exposure and sound propagation conditions in predictive noise modeling, through sampling and interviewing procedures, and from poor statistical association in functional relationships between community response and acoustic predictor variables. Some of the more notable uncertainties afflicting the prediction of aircraft noise and its effects are described in this presentation.

  15. Error estimates and specification parameters for functional renormalization

    SciTech Connect

    Schnoerr, David [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Boettcher, Igor, E-mail: I.Boettcher@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Pawlowski, Jan M. [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt (Germany)]; Wetterich, Christof [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]

    2013-07-15

    We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.

  16. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina) giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case, shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed amount of elements.

  17. Establishing Reliability and Validity Estimates for Systematic Classroom Observation.

    ERIC Educational Resources Information Center

    Webb, Jeaninne Nelson; Brown, Bob Burton

    A study was designed to (1) compare two types of reliability in the observation of teachers' behavior, (2) explore the relationship between observer reliability and the validity of their systematic classroom observations, and (3) investigate the effects of training, observer beliefs, and the passage of time on reliability and validity estimates.…

  18. An analysis of errors in special sensor microwave imager evaporation estimates over the global oceans

    NASA Technical Reports Server (NTRS)

    Esbensen, S. K.; Chelton, D. B.; Vickers, D.; Sun, J.

    1993-01-01

    The method proposed by Liu (1984) is used to estimate monthly averaged evaporation over the global oceans from 1 yr of Special Sensor Microwave Imager (SSM/I) data. Intercomparisons involving SSM/I and in situ data are made over a wide range of oceanic conditions during August 1987 and February 1988 to determine the source of errors in the evaporation estimates. The most significant spatially coherent evaporation errors are found to come from estimates of near-surface specific humidity, q. Systematic discrepancies of over 2 g/kg are found in the tropics, as well as in the middle and high latitudes. The q errors are partitioned into contributions from the parameterization of q in terms of the columnar water vapor, i.e., the Liu q/W relationship, and from the retrieval algorithm for W. The effects of W retrieval errors are found to be smaller over most of the global oceans and due primarily to the implicitly assumed vertical structures of temperature and specific humidity on which the physically based SSM/I retrievals of W are based.

  19. Multiscale a posteriori error estimation and mesh adaptivity for reliable finite element analysis

    Microsoft Academic Search

    Ahmed H ElSheikh

    2007-01-01

    The focus of this thesis is on reliable finite element simulations using mesh adaptivity based on a posteriori error estimation. The accuracy of the error estimator is a key step in controlling both the computational error and simulation time. The estimated errors guide the mesh adaptivity algorithm toward a quasi-optimal mesh that conforms with the solution specific features. The simulation

  20. An Error Propagation Analysis of Estimates of Aboveground Biomass Estimates From Lidar Remote Sensing

    NASA Astrophysics Data System (ADS)

    Sherrill, K.; Lefsky, M.; Battles, J.; Waring, K.; Gonzalez, P.

    2006-12-01

    Estimation of aboveground biomass, and associated carbon storage, of forested areas has been one of the key applications for airborne lidar remote sensing. While numerous regression based analyses have shown the capability of lidar data for these purposes, estimates of total aboveground biomass for entire landscapes have been less frequent. To create useful estimates for policy purposes, confidence intervals for aboveground biomass must be developed, e.g. for the purposes of assigning carbon credits for reforestation projects. To obtain realistic estimates of confidence intervals for our aboveground biomass estimates, formal error propagation analysis is required to consider the interactions of the several sources of uncertainty in making these estimates. These include the error of making the measurements required for forest inventory, sampling error, error in the allometric equations that estimate aboveground biomass, and the error associated with the regression equations that estimate aboveground biomass from lidar data. These analyses have been performed for two study areas in Northern California, one in the Yuba District of Tahoe National Forest, and the other in, and in the vicinity of, the Garcia State Forest in California's Coast Range. A two-stage Monte Carlo analysis was used to combine the multiple sources of error listed above, and to create multiple realizations of the aboveground biomass values for the two landscapes. As an example, for the plot-level estimates of aboveground biomass in the Tahoe NF, standard errors increased from 138 Mg ha^-1 (for simple regression between lidar height metrics and field estimated aboveground biomass, without error propagation) to a median value of 209 Mg ha^-1 for 1000 Monte Carlo iterations, while the R^2 of the regression equation decreased from 80% to a median value of 71% of variance. However, when multiple realizations of aboveground biomass were generated for the landscapes using the error-propagated estimates of the standard error, confidence intervals for aboveground biomass dropped dramatically. For the Tahoe NF study area, the estimate of mean aboveground biomass for the 4722 ha area was 320.0 Mg ha^-1 with a standard deviation of only 0.57 Mg ha^-1, suitable for policy makers studying the potential for carbon storage in this area.
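
    The two-stage Monte Carlo can be mimicked in a few lines: draw regression parameters (shared across the landscape), then per-pixel residuals, and watch the residuals average out while the shared parameter error does not. All model numbers here are invented.

        import numpy as np

        rng = np.random.default_rng(42)
        heights = rng.uniform(10.0, 45.0, 500)     # synthetic lidar heights (m)

        landscape_means = []
        for _ in range(1000):                      # stage 2: landscape realizations
            coef = rng.normal(9.0, 0.5)            # stage 1: parameter uncertainty
            intercept = rng.normal(-40.0, 10.0)
            preds = intercept + coef * heights + rng.normal(0.0, 150.0, heights.size)
            landscape_means.append(preds.mean())   # per-pixel residuals average out

        print(f"landscape AGB: {np.mean(landscape_means):.0f} "
              f"+/- {np.std(landscape_means):.0f} Mg/ha")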

  1. Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation

    SciTech Connect

    Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold; Hayes, James C.; McIntyre, Justin I.

    2007-04-15

    This paper develops the mathematical statistics of a radioactive gas quantity measurement and associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: (I) the collection of an environmental sample, (II) component gas extraction from the sample through the application of gas separation chemistry, and (III) the estimation of radioactivity of component gases.

  2. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  3. Systematic Steps to Diminish Multi-Fold Medication Errors in Neonates

    PubMed Central

    Pinheiro, Joaquim M. B.; Mitchell, Amy L.; Lesar, Timothy S.

    2003-01-01

    Tenfold and other multiple-of-dose errors are particularly common in the neonatal intensive care unit (NICU), where the fragility of the patients increases the potential for significant adverse outcomes. Such errors can originate at any of the sequential phases of the process, from medication ordering to administration. Each step of calculation, prescription writing, transcription, dose preparation, and administration is an opportunity for generating and preventing medication errors. A few simple principles and practical tips aimed at avoiding decimal and other multiple-dosing errors can be systematically implemented through the various steps of the process. The authors describe their experience with the implementation of techniques for error reduction in a NICU setting. The techniques described herein rely on simple, inexpensive technologies for information and automation, and on standardization and simplification of processes. They can be immediately adapted and applied in virtually any NICU and could be integrated into the development of computerized order entry systems appropriate to NICU settings. Either way, they should decrease the likelihood of undetected human error. PMID:23118682
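
    One of the simplest systematic safeguards the authors describe, screening orders for multiple-of-ten deviations from an expected weight-based dose, can be sketched as below; the thresholds and reference-dose source are assumptions for illustration, not an implemented NICU protocol.

        def check_dose(ordered_dose_mg, weight_kg, usual_mg_per_kg, factor=10.0):
            """Flag orders whose dose is ~10x above or below the usual
            weight-based dose, the signature of a decimal-point error."""
            expected = weight_kg * usual_mg_per_kg
            ratio = ordered_dose_mg / expected
            if ratio >= factor or ratio <= 1.0 / factor:
                return f"ALERT: ordered dose is {ratio:.1f}x the expected {expected} mg"
            return "dose within expected range"

        # A decimal slip (0.5 mg/kg ordered as 5 mg/kg) for a 2 kg neonate:
        print(check_dose(10.0, 2.0, 0.5))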

  4. Background sky obscuration by cluster galaxies as a source of systematic error for weak lensing

    NASA Astrophysics Data System (ADS)

    Simet, Melanie; Mandelbaum, Rachel

    2015-05-01

    Lensing magnification and stacked shear measurements of galaxy clusters rely on measuring the density of background galaxies behind the clusters. The most common ways of measuring this quantity ignore the fact that some fraction of the sky is obscured by the cluster galaxies themselves, reducing the area in which background galaxies can be observed. We discuss the size of this effect in the Sloan Digital Sky Survey (SDSS) and the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS), finding a minimum 1 per cent effect at 0.1 h^-1 Mpc from the centres of clusters in SDSS; the effect is an order of magnitude higher in CFHTLenS. The resulting biases on cluster mass and concentration measurements are of the same order as the size of the obscuration effect, which is below the statistical errors for cluster lensing in SDSS but likely exceeds them for CFHTLenS. We also forecast the impact of this systematic error on cluster mass and magnification measurements in several upcoming surveys, and find that it typically exceeds the statistical errors. We conclude that future surveys must account for this effect in stacked lensing and magnification measurements in order to avoid being dominated by systematic error.

  5. Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models

    NASA Astrophysics Data System (ADS)

    Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin

    2014-05-01

    Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentations of physical processes, which can be amplified by feedbacks among climate components especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention on the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and location of strong SST gradients. Next the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. The results suggest that improving the simulation of regional processes may not suffice for a more successful CGCM performance, as the effects of remote biases may override them. Therefore, efforts to reduce CGCM errors cannot be narrowly focused on particular regions.

  6. Estimating the coverage of mental health programmes: a systematic review

    PubMed Central

    De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram

    2014-01-01

    Background The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results We identified 15 735 unique records of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys. PMID:24760874
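
    The low-cost estimate suggested in the conclusions is simple arithmetic: routine utilization data for the numerator over a survey-based prevalence denominator. The figures in this sketch are invented.

        def contact_coverage(n_in_contact, population, prevalence):
            """Share of the target population (people with the disorder)
            in contact with services."""
            return n_in_contact / (population * prevalence)

        # e.g. 12,000 service users, district of 2,000,000, 3% prevalence:
        print(f"contact coverage: {contact_coverage(12_000, 2_000_000, 0.03):.0%}")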

  7. A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series

    NASA Astrophysics Data System (ADS)

    Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team

    2011-01-01

    In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline-generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal, and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
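
    The closed-form Gaussian case lends itself to a compact sketch. The following is our illustration of MAP co-trending with a Gaussian prior on the fit coefficients, not the actual PDC code; the basis vectors, noise level, and prior are all invented:

```python
import numpy as np

# Minimal sketch of MAP co-trending with a Gaussian prior (not the Kepler
# PDC implementation). A is the design matrix of co-trending vectors, y the
# flux time series; the prior N(mu, Sigma) would come from fits to quiet,
# highly correlated stars.

def map_fit(A, y, sigma_noise, mu_prior, Sigma_prior):
    """Closed-form Gaussian MAP estimate of the co-trending coefficients.

    Maximizes the posterior for y ~ N(A w, sigma_noise^2 I), w ~ N(mu, Sigma):
    a generalized ridge regression that 'brakes' runaway fitting.
    """
    P = np.linalg.inv(Sigma_prior)                 # prior precision
    lhs = A.T @ A / sigma_noise**2 + P
    rhs = A.T @ y / sigma_noise**2 + P @ mu_prior
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
A = np.column_stack([t, np.sin(20 * t)])           # two fake basis vectors
w_true = np.array([0.5, 0.1])
y = A @ w_true + 0.05 * rng.standard_normal(t.size)
w_map = map_fit(A, y, 0.05, np.zeros(2), np.eye(2) * 0.25)
print(w_map)
```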

  8. Shannon Capacity and Symbol Error Rate of Space-Time Block Codes in MIMO Rayleigh Channels With Channel estimation Error

    Microsoft Academic Search

    Kyung Seung Ahn; Robert W. Heath Jr.; Heung Ki Baik

    2008-01-01

    Space-time block coding (STBC) is an attractive solution for improving quality in wireless links. In this paper, we analyze the impact of channel estimation error on the ergodic capacity and symbol error rate (SER) for space-time block coded multiple-input multiple-output (MIMO) systems. We derive a closed-form capacity expression over MIMO Rayleigh channels with channel estimation error. Moreover, we derive an

  9. Precision calibration and systematic error reduction in the long trace profiler

    SciTech Connect

    Qian, Shinan; Sostero, Giovanni [Sincrotrone Trieste, 34012 Basovizza, Trieste (Italy)]; Takacs, Peter Z. [Brookhaven National Laboratory, Building 535B, Upton, New York 11973 (United States)]

    2000-01-01

    The long trace profiler (LTP) has become the instrument of choice for surface figure testing and slope error measurement of mirrors used for synchrotron radiation and x-ray astronomy optics. In order to achieve highly accurate measurements with the LTP, systematic errors need to be reduced by precise angle calibration and accurate focal plane position adjustment. A self-scanning method is presented to adjust the focal plane position of the detector with high precision by use of a pentaprism scanning technique. The focal plane position can be set to better than 0.25 mm for a 1250-mm-focal-length Fourier-transform lens using this technique. A 0.03-arcsec-resolution theodolite combined with the sensitivity of the LTP detector system can be used to calibrate the angular linearity error very precisely. Some suggestions are introduced for reducing the system error. With these precision calibration techniques, accuracy in the measurement of figure and slope error on meter-long mirrors is now at a level of about 1 μrad rms over the whole testing range of the LTP. (c) 2000 Society of Photo-Optical Instrumentation Engineers.

  10. Estimation and sample size calculations for correlated binary error rates of biometric identification devices

    E-print Network

    Schuckers, Michael E.

    Estimation and sample size calculations for correlated binary error rates of biometric identification devices … in FARs and FRRs is the need to determine the sample size necessary to estimate a given error rate to within a specified margin of error, e.g., Snedecor and Cochran (1995). Sample size calculations exist
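
    The textbook sample-size calculation alluded to in this snippet can be made concrete. The sketch below assumes independent attempts; the correlated-attempt case the paper addresses would inflate the required n by a design effect:

```python
import math

# Hedged illustration of the standard sample-size formula (Snedecor &
# Cochran style): number of attempts n needed to estimate a binary error
# rate p to within margin m at confidence level z. Independence between
# attempts is assumed here, which is exactly what correlated biometric
# attempts (several per person) violate.

def sample_size(p: float, margin: float, z: float = 1.96) -> int:
    """n such that a 100(1-alpha)% CI for p has half-width <= margin."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size(0.01, 0.002))  # FAR of 1% within +/-0.2% -> 9508 attempts
```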

  11. A non-line-of-sight error mitigation algorithm in location estimation

    Microsoft Academic Search

    Pi-Chun Chen

    1999-01-01

    The location estimation of mobile telephones is of great current interest. The two sources of range measurement errors in geolocation techniques are measuring error and non-line-of-sight (NLOS) error. The NLOS errors, derived from the blocking of direct paths, have been considered as a killer issue in the location estimation. In this paper we develop an algorithm to mitigate the NLOS

  12. An examination of the southern California field test for the systematic accumulation of the optical refraction error in geodetic leveling.

    USGS Publications Warehouse

    Castle, R.O.; Brown, B.W., Jr.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.

    1983-01-01

    Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicated any analysis of the test results. If the fewer than one-third of the sections that failed to meet second-order, class I standards are dropped, the divergence between the presumably more refraction-contaminated long-sight-length survey and the less contaminated short-sight-length survey virtually disappears. -Authors

  13. Polynomial loss models for economic dispatch and error estimation

    SciTech Connect

    Jiang, A.; Ertem, S. [Univ. of Missouri, Kansas City, MO (United States)]

    1995-08-01

    Polynomial loss models are introduced for the economic dispatch problem. The models are based on interpolations of load flow solutions. An approximate error estimation method for the loss models is also presented. The effect of approximate loss models on the economic dispatch is evaluated according to the deterioration of total generation cost, in addition to the relative values of the coefficients of the loss formula. A case study shows that loss expressions have characteristics which have not been considered previously. Comparisons between the proposed models and the generalized generation distribution factor (GGDF) based models show the advantages of the proposed models.
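
    A hedged sketch of the basic idea: fit a polynomial loss model to (generation, loss) pairs and bound the interpolation error by the residuals. The data below are synthetic stand-ins for real load flow solutions:

```python
import numpy as np

# Illustrative sketch only: interpolating a polynomial loss model through
# sampled (generation, loss) points and checking the residuals as a crude
# error estimate. The "load flow" losses below are synthetic.

P_g = np.linspace(100.0, 500.0, 9)                 # unit generation, MW
P_loss = 3.0 + 0.01 * P_g + 4e-5 * P_g**2          # stand-in load flow losses, MW
P_loss += np.random.default_rng(1).normal(0.0, 0.05, P_g.size)

coeffs = np.polyfit(P_g, P_loss, deg=2)            # quadratic loss model
model = np.poly1d(coeffs)
residuals = P_loss - model(P_g)
print("coefficients:", coeffs)
print("max abs interpolation residual: %.3f MW" % np.max(np.abs(residuals)))
```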

  14. Surface air temperature simulations by AMIP general circulation models: Volcanic and ENSO signals and systematic errors

    SciTech Connect

    Mao, J.; Robock, A. [Univ. of Maryland, College Park, MD (United States). Dept. of Meteorology]

    1998-07-01

    Thirty surface air temperature simulations for 1979-88 by 29 atmospheric general circulation models are analyzed and compared with the observations over land. These models were run as part of the Atmospheric Model Intercomparison Project (AMIP). Several simulations showed serious systematic errors, up to 4-5 °C, in globally averaged land air temperature. The 16 best simulations gave rather realistic reproductions of the mean climate and seasonal cycle of global land air temperature, with an average error of -0.9 °C for the 10-yr period. The general coldness of the model simulations is consistent with previous intercomparison studies. The regional systematic errors showed very large cold biases in areas with topography and permanent ice, which implies a common deficiency in the representation of snow-ice albedo in the diverse models. The specification of climatological SST and sea ice, rather than observations, at high latitudes for the first three years (1979-81) caused a noticeable drift in the neighboring land air temperature simulations compared to the rest of the years (1982-88). Unsuccessful simulation of the extreme warm (1981) and cold (1984-85) periods implies that some variations are chaotic or unpredictable, produced by internal atmospheric dynamics and not forced by global SST patterns.

  15. On the Systematic Errors in the Detection of the Lense-Thirring Effect with a Mars Orbiter

    E-print Network

    Giampiero Sindoni; Claudio Paris; Paolo Ialongo

    2007-01-26

    We show here that the recent claim of a test of the Lense-Thirring effect with an error of 0.5% using the Mars Global Surveyor is misleading and that the quoted error is incorrect by a factor of at least ten thousand. Indeed, the simple error analysis of [1] neglects the role of some important systematic errors affecting the out-of-plane acceleration. The preliminary error analysis presented here shows that even an optimistic uncertainty for this measurement is at the level of at least ~3026% to ~4811%, i.e., even an optimistic uncertainty is about 30 to 48 times the Lense-Thirring effect. In other words, by including only some systematic errors we obtained an uncertainty almost ten thousand times larger than the claimed 0.5% error.

  16. Estimating the error in simulation prediction over the design space

    SciTech Connect

    Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)

    2003-01-01

    This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation, and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.

  17. The Bias of the Maximum Likelihood Estimator of the Slope in the Measurement Error Regression Model

    E-print Network

    Ramirez, Donald E.

    (It is expected) that the bias of the MLE estimator of the slope in the measurement error regression model … However, for a fixed estimated error variance ratio, it was noted that the bias …

  18. A PRIORI ERROR ESTIMATES FOR NUMERICAL METHODS FOR SCALAR CONSERVATION LAWS.

    E-print Network

    A PRIORI ERROR ESTIMATES FOR NUMERICAL METHODS FOR SCALAR CONSERVATION LAWS. PART III. This paper is the third of a series in which a general theory of a priori error estimates for scalar conservation laws is developed. Keywords: a priori error estimates, irregular grids, monotone schemes, conservation laws, supraconvergence.

  19. Using image area to control CCD systematic errors in spaceborne photometric and astrometric time-series measurements

    NASA Technical Reports Server (NTRS)

    Buffington, Andrew; Booth, Corwin H.; Hudson, Hugh S.

    1991-01-01

    The effect of some systematic errors on high-precision time-series spaceborne photometry and astrometry has been investigated with a CCD as the detector. The 'pixelization' of the images causes systematic error in astrometric measurements. It is shown that this pixelization noise scales as image radius r^(-3/2). Subpixel response gradients, which are not correctable by the 'flat field', in conjunction with telescope pointing jitter introduce further photometric and astrometric errors. Subpixel gradients are modeled using observed properties of real flat fields. These errors can be controlled by having an image span enough pixels. Large images are also favored by CCD dynamic-range considerations. However, magnified stellar images can overlap, thus introducing another source of systematic error. An optimum image size is therefore a compromise between these competing factors.

  20. Systematic errors on curved microstructures caused by aberrations in confocal surface metrology.

    PubMed

    Rahlves, Maik; Roth, Bernhard; Reithmeier, Eduard

    2015-04-20

    Optical aberrations of microscope lenses are known as a source of systematic errors in confocal surface metrology, which has become one of the most popular methods to measure the surface topography of microstructures. We demonstrate that these errors are not constant over the entire field of view but also depend on the local slope angle of the microstructure and lead to significant deviations between the measured and the actual surface. It is shown by means of a full vectorial high NA numerical model that a change in the slope angle alters the shape of the intensity depth response of the microscope and leads to a shift of the intensity peak of up to several hundred nanometers. Comparative experimental data are presented which support the theoretical results. Our studies allow for correction of optical aberrations and, thus, increase the accuracy in profilometric measurements. PMID:25969000

  1. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic error, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  2. Noncontact thermometry via laser pumped, thermographic phosphors: Characterization of systematic errors and industrial applications

    SciTech Connect

    Gillies, G.T.; Dowell, L.J.; Lutz, W.N.; Allison, S.W.; Cates, M.R.; Noel, B.W.; Franks, L.A.; Borella, H.M.

    1987-10-01

    There are a growing number of industrial measurement situations that call for a high-precision, noncontact method of thermometry. Our collaboration has been successful in developing one such method based on the laser-induced fluorescence of rare-earth-doped ceramic phosphors like Y2O3:Eu. In this paper, we summarize the results of characterization studies aimed at identifying the sources of systematic error in a laboratory-grade version of the method. We then go on to present data from measurements made in the afterburner plume of a jet turbine and inside an operating permanent magnet motor. 12 refs., 6 figs.

  3. Quantification and Estimation of Differential Odometry Errors in Mobile Robotics with Redundant Sensor Information

    Microsoft Academic Search

    Alexander Rudolph

    2003-01-01

    By extrapolating movement increments detected by differential encoders, the position of a mobile robot can be easily computed. However, the encoders suffer from various systematic errors, resulting in an increasing error in the obtained robot position. This problem is also known from inertial navigation, but the transfer of the respective concepts of maintaining merely errors
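
    The dead-reckoning update the snippet refers to is compact enough to sketch. The following is our toy differential-drive example, not the paper's algorithm; it also shows how a small systematic error in the assumed wheel base accumulates with distance travelled:

```python
import math

# Minimal differential-drive dead-reckoning sketch (our illustration): the
# pose is extrapolated from left/right encoder increments using a midpoint
# heading approximation. A miscalibrated wheel base b is one of the
# systematic encoder errors the abstract refers to.

def step(pose, ds_left, ds_right, wheel_base):
    x, y, theta = pose
    ds = 0.5 * (ds_right + ds_left)              # translation increment
    dtheta = (ds_right - ds_left) / wheel_base   # rotation increment
    return (x + ds * math.cos(theta + 0.5 * dtheta),
            y + ds * math.sin(theta + 0.5 * dtheta),
            theta + dtheta)

true_pose = est_pose = (0.0, 0.0, 0.0)
for _ in range(1000):                            # gentle left curve, 1 mm steps
    true_pose = step(true_pose, 0.00099, 0.00101, wheel_base=0.40)
    est_pose = step(est_pose, 0.00099, 0.00101, wheel_base=0.41)  # 2.5% bias
err = math.hypot(est_pose[0] - true_pose[0], est_pose[1] - true_pose[1])
print(f"position error after 1 m of travel: {err * 1000:.2f} mm")
```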

  4. Simultaneous Estimation of Photometric Redshifts and SED Parameters: Improved Techniques and a Realistic Error Budget

    NASA Astrophysics Data System (ADS)

    Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric

    2015-05-01

    We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is "flagged" by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar population synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.

  5. Method for the detection of variable systematic errors without a priori information on the parameters of the correlation function

    Microsoft Academic Search

    P. S. Mandel'shtam

    1988-01-01

    Many problems in the metrological certification and testing of the parameters of various objects in industry and scientific research call for the detection of systematic variations against a background of random fluctuations. Such problems arise, e.g., in the investigation of slowly varying systematic errors, in the application of repeated measurements to improve the accuracy of measurement results, in the generation

  6. A quantitative analysis of grid-related systematic errors in oxidising capacity and ozone production rates in chemistry transport models

    Microsoft Academic Search

    J. G. Esler; G. J. Roelofs; M. O. Köhler; F. M. O'Connor

    2004-01-01

    Limited resolution in chemistry transport models (CTMs) is necessarily associated with systematic errors in the calculated chemistry, due to the artificial mixing of species on the scale of the model grid (grid-averaging). Here, the errors in calculated hydroxyl radical (OH) concentrations and ozone production rates P(O3) are investigated quantitatively using both direct observations and model results. Photochemical steady-state models of

  7. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
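
    A minimal sketch of the core idea: fit each pixel's time series, rather than the co-added aperture flux, against a set of co-trending basis vectors, giving one coefficient vector per pixel. The CBVs and pixel data here are synthetic, not Kepler products:

```python
import numpy as np

# Sketch of pixel-level co-trending (our illustration, not the Kepler
# pipeline): regress every pixel time series on the same co-trending basis
# vectors (CBVs). Spatial structure in the per-pixel coefficients is what
# carries the extra diagnostic information discussed in the abstract.

rng = np.random.default_rng(3)
n_pix, n_cad, n_cbv = 25, 1000, 3
cbv = rng.standard_normal((n_cbv, n_cad))          # stand-in basis vectors
coeff_true = rng.standard_normal((n_pix, n_cbv))
pixels = coeff_true @ cbv + 0.1 * rng.standard_normal((n_pix, n_cad))

# least-squares fit of all pixel time series to the CBVs in one call
coeff_fit, *_ = np.linalg.lstsq(cbv.T, pixels.T, rcond=None)  # (n_cbv, n_pix)
residuals = pixels - coeff_fit.T @ cbv             # systematics-removed series
print("coefficient RMS error:", float(np.sqrt(np.mean((coeff_fit.T - coeff_true) ** 2))))
print("residual scatter:", float(residuals.std()))
```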

  8. Systematic errors in the determination of Hubble constant due to the asphericity and non-isothermality of clusters of galaxies

    E-print Network

    Y. -G. Wang; Z. -H. Fan

    2006-02-06

    Joint analyses on X-ray and Sunyaev-Zel'dovich (SZ) effect of a cluster of galaxies can give rise to an estimate on the angular diameter distance to the cluster. With the redshift information of the cluster, the Hubble constant $H_0$ can then be derived. Furthermore, such measurements on a sample of clusters with a range of redshift can potentially be used to discriminate different cosmological models. In this paper, we present statistical studies on the systematic errors in the determination of $H_0$ due to the triaxiality and non-isothermality of clusters of galaxies. Different from many other studies that assume artificially a specific distribution for the intracluster gas, we start from the triaxial model of dark matter halos obtained from numerical simulations. The distribution of the intracluster gas is then derived under the assumption of the hydrodynamic equilibrium. For the equation of state of the intracluster gas, both the isothermal and the polytropic cases are investigated. We run Monte Carlo simulations to generate samples of clusters according to the distributions of their masses, axial ratios, concentration parameters, as well as line-of-sight directions. To mimic observations, the estimation of the Hubble constant is done by fitting X-ray and SZ profiles of a triaxial cluster with the isothermal and spherical $\\beta$-model. We find that for a sample of clusters with $M=10^{14}h^{-1}\\hbox{M}_{\\odot}$ and $z=0.1$, the value of the estimated $H_0$ is positively biased with $H_0^{peak}(estimated)\\approx 1.05H_0(true)$ and $H_0^{ave}(estimated)\\approx 1.05H_0(true)$ for the isothermal case. For the polytropic case with $\\gamma=1.15$, the bias is rather large with $H_0^{peak}(estimated)\\approx 1.35H_0(true)$ and $H_0^{ave}(estimated)\\approx 3H_0(true)$. (abridged)

  9. Error estimation for CFD aeroheating prediction under rarefied flow condition

    NASA Astrophysics Data System (ADS)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the proposed parameter, compared with two other parameters, Kn∞ and Ma∞Kn∞.

  10. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance

    Microsoft Academic Search

    John M Hickey; Roel F Veerkamp; Mario PL Calus; Han A Mulder; Robin Thompson

    2009-01-01

    Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values, and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if

  11. Error estimates for the Skyrme-Hartree-Fock model

    E-print Network

    J. Erler; P. -G. Reinhard

    2014-08-01

    There are many complementary strategies for estimating the extrapolation errors of a model calibrated in least-squares fits. We consider the Skyrme-Hartree-Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, and dedicated variation of model parameters. This gives useful insight into the impact of the key fit data, namely binding energies, charge r.m.s. radii, and the charge form factor. Amongst others, we check in particular the predictive value for observables in the stable nucleus $^{208}$Pb, the super-heavy element $^{266}$Hs, $r$-process nuclei, and neutron stars.

  12. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  13. Effects of measurement error on horizontal hydraulic gradient estimates.

    PubMed

    Devlin, J F; McElwee, C D

    2007-01-01

    During the design of a natural gradient tracer experiment, it was noticed that the hydraulic gradient was too small to measure reliably on an approximately 500-m^2 site. Additional wells were installed to increase the monitored area to 26,500 m^2, and wells were instrumented with pressure transducers. The resulting monitoring system was capable of measuring heads with a precision of ±1.3 × 10^-2 m. This measurement error was incorporated into Monte Carlo calculations, in which only hydraulic head values were varied between realizations. The standard deviation in the estimated gradient and the flow direction angle from the x-axis (east direction) were calculated. The data yielded an average hydraulic gradient of 4.5 × 10^-4 ± 25% with a flow direction of 56° southeast ± 18°, with the variations representing 1 standard deviation. Further Monte Carlo calculations investigated the effects of number of wells, aspect ratio of the monitored area, and the size of the monitored area on the previously mentioned uncertainties. The exercise showed that monitored areas must exceed a size determined by the magnitude of the measurement error if meaningful gradient estimates and flow directions are to be obtained. The aspect ratio of the monitored zone should be as close to 1 as possible, although departures as great as 0.5 to 2 did not degrade the quality of the data unduly. Numbers of wells beyond three to five provided little advantage. These conclusions were supported for the general case with a preliminary theoretical analysis. PMID:17257340
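
    The Monte Carlo procedure described is simple to reproduce in outline. The sketch below uses the paper's stated head precision but otherwise invented well positions and gradient; the plane h = a + b·x + c·y is fitted by least squares and the down-gradient flow vector is (-b, -c):

```python
import numpy as np

# Monte Carlo sketch in the spirit of the paper (synthetic numbers): perturb
# the measured heads by the transducer precision and look at the scatter of
# the fitted gradient magnitude and flow direction.

rng = np.random.default_rng(4)
xy = np.array([[0, 0], [160, 10], [40, 150], [150, 140], [80, 70]], float)
grad_true = np.array([-3.0e-4, 3.4e-4])            # true (dh/dx, dh/dy)
heads = 10.0 + xy @ grad_true
A = np.column_stack([np.ones(len(xy)), xy])        # design matrix for the plane

mags, angles = [], []
for _ in range(10_000):
    h = heads + rng.normal(0.0, 1.3e-2, heads.size)   # +/-1.3e-2 m precision
    a, b, c = np.linalg.lstsq(A, h, rcond=None)[0]
    g = -np.array([b, c])                          # flow is down-gradient
    mags.append(np.hypot(*g))
    angles.append(np.degrees(np.arctan2(g[1], g[0])))  # from east; negative = south
print(f"gradient  = {np.mean(mags):.2e} +/- {np.std(mags):.2e}")
print(f"direction = {np.mean(angles):.0f} +/- {np.std(angles):.0f} degrees")
```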

  14. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    NASA Technical Reports Server (NTRS)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated [1]. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be scaled for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements [1]. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but through correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  15. Parameter Estimation In Ensemble Data Assimilation To Characterize Model Errors In Surface-Layer Schemes Over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Lee, Jared; Lei, Lili

    2014-05-01

    Numerical weather prediction (NWP) models have deficiencies in surface and boundary layer parameterizations, which may be particularly acute over complex terrain. Structural and physical model deficiencies are often poorly understood, and can be difficult to identify. Uncertain model parameters can lead to one class of model deficiencies when they are mis-specified. By augmenting the model state variables with parameters, data assimilation can be used to estimate the parameter distributions as long as the forecasts for observed variables are linearly dependent on the parameters. Reduced forecast (background) error shows that the parameter is accounting for some component of model error. Ensemble data assimilation has the favorable characteristic of providing ensemble-mean parameter estimates, eliminating some noise in the estimates when additional constraints on the error dynamics are unknown. This study focuses on coupling the Weather Research and Forecasting (WRF) NWP model with the Data Assimilation Research Testbed (DART) to estimate the Zilitinkevich parameter (CZIL). CZIL controls the thermal 'roughness length' for a given momentum roughness, thereby controlling heat and moisture fluxes through the surface layer by specifying the (unobservable) aerodynamic surface temperature. Month-long data assimilation experiments with 96 ensemble members and grid spacing down to 3.3 km provide a data set for interpreting parametric model errors in complex terrain. Experiments are during fall 2012 over the western U.S., and radiosonde, aircraft, satellite wind, surface, and mesonet observations are assimilated every 3 hours. One ensemble has a globally constant value of CZIL=0.1 (the WRF default value), while a second ensemble allows CZIL to vary over the range [0.01, 0.99], with distributions updated via the assimilation. Results show that the CZIL estimates do vary in time and space. Most often, forecasts are more skillful with the updated parameter values, compared to the fixed default values, suggesting that the parameters account for some systematic errors. Because the parameters can account for multiple sources of error, the importance of terrain in determining surface-layer errors can be deduced from parameter estimates in complex terrain; parameter estimates with spatial scales similar to the terrain indicate that terrain is responsible for surface-layer model errors. We will also comment on whether residual errors in the state estimates and predictions appear to suggest further parametric model error, or some other source of error that may arise from incorrect similarity functions in the surface-layer schemes.
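
    State augmentation is easiest to see in a toy problem. The sketch below is a deliberately simple scalar analogue of the WRF/DART set-up described above (our construction, not theirs), with a single unknown parameter a carried in each ensemble member's augmented state:

```python
import numpy as np

# Toy illustration of parameter estimation by state augmentation: a scalar
# model x_{k+1} = a*x_k + 1 with unknown coefficient a. Each ensemble member
# carries its own (x, a); the EnKF update of the augmented state pulls the
# a-distribution toward values that reduce forecast (background) error.

rng = np.random.default_rng(5)
a_true, q, r, n_ens, n_steps = 0.9, 0.05, 0.1, 96, 200

x_true = 0.0
ens_x = rng.normal(0.0, 1.0, n_ens)
ens_a = rng.uniform(0.01, 0.99, n_ens)        # broad prior on the parameter

for _ in range(n_steps):
    x_true = a_true * x_true + 1.0 + rng.normal(0.0, q)
    ens_x = ens_a * ens_x + 1.0 + rng.normal(0.0, q, n_ens)  # member-wise forecast
    ens_a += rng.normal(0.0, 0.002, n_ens)    # small jitter to avoid collapse
    y = x_true + rng.normal(0.0, r)           # observe x only
    var_x = np.var(ens_x, ddof=1)
    k_x = var_x / (var_x + r**2)                                # gain for x
    k_a = np.cov(ens_a, ens_x, ddof=1)[0, 1] / (var_x + r**2)   # gain for a
    innov = y + rng.normal(0.0, r, n_ens) - ens_x  # perturbed observations
    ens_x, ens_a = ens_x + k_x * innov, ens_a + k_a * innov

print(f"estimated a = {ens_a.mean():.3f} +/- {ens_a.std():.3f} (true {a_true})")
```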

  16. A posteriori error estimation and mesh adaptation for finite element models in elasto-plasticity

    Microsoft Academic Search

    Rolf Rannacher; Franz-Theo Suttmeier

    1999-01-01

    A new approach to a posteriori error estimation and adaptive mesh design based on techniques from optimal control is presented for primal-mixed finite element models in elasto-plasticity. This method uses global duality arguments for deriving weighted a posteriori error bounds for arbitrary functionals of the error representing physical quantities of interest. In these estimates local residuals of the computed solution

  17. A robust SUPG norm a posteriori error estimator for stationary convectiondiffusion equations

    E-print Network

    John, Volker

    A robust SUPG norm a posteriori error estimator for stationary convection-diffusion equations. Available online 14 December 2012. Keywords: stationary convection-diffusion equations; SUPG finite element method; error in SUPG norm; a posteriori error estimator; adaptive grid refinement. Abstract: A robust

  18. Adaptive error covariances estimation methods for ensemble Kalman filters

    NASA Astrophysics Data System (ADS)

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes at different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates compared to the Berry-Sauer method on the L-96 example.

  19. Quantifying and minimising systematic and random errors in X-ray micro-tomography based volume measurements

    NASA Astrophysics Data System (ADS)

    Lin, Q.; Neethling, S. J.; Dobson, K. J.; Courtois, L.; Lee, P. D.

    2015-04-01

    X-ray micro-tomography (XMT) is increasingly used for the quantitative analysis of the volumes of features within the 3D images. As with any measurement, there will be error and uncertainty associated with these measurements. In this paper a method for quantifying both the systematic and random components of this error in the measured volume is presented. The systematic error is the offset between the actual and measured volume which is consistent between different measurements and can therefore be eliminated by appropriate calibration. In XMT measurements this is often caused by an inappropriate threshold value. The random error is not associated with any systematic offset in the measured volume and could be caused, for instance, by variations in the location of the specific object relative to the voxel grid. It can be eliminated by repeated measurements. It was found that both the systematic and random components of the error are a strong function of the size of the object measured relative to the voxel size. The relative error in the volume was found to follow approximately a power law relationship with the volume of the object, but with an exponent that implied, unexpectedly, that the relative error was proportional to the radius of the object for small objects, though the exponent did imply that the relative error was approximately proportional to the surface area of the object for larger objects. In an example application involving the size of mineral grains in an ore sample, the uncertainty associated with the random error in the volume is larger than the object itself for objects smaller than about 8 voxels and is greater than 10% for any object smaller than about 260 voxels. A methodology is presented for reducing the random error by combining the results from either multiple scans of the same object or scans of multiple similar objects, with an uncertainty of less than 5% requiring 12 objects of 100 voxels or 600 objects of 4 voxels. As the systematic error in a measurement cannot be eliminated by combining the results from multiple measurements, this paper introduces a procedure for using volume standards to reduce the systematic error, especially for smaller objects where the relative error is larger.
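
    The 1/sqrt(n) averaging argument behind the quoted object counts can be checked in a few lines. The single-object uncertainties below (17% and 1.22) are inferred from the abstract's examples, not measured values:

```python
import math

# Back-of-envelope sketch of the random-error reduction discussed above:
# averaging n independent volume measurements shrinks the random relative
# uncertainty by 1/sqrt(n).

def n_required(sigma_single: float, target: float) -> int:
    """Objects (or repeat scans) needed so sigma_single/sqrt(n) <= target."""
    return math.ceil((sigma_single / target) ** 2)

print(n_required(0.17, 0.05))   # -> 12, matching ~12 objects of ~100 voxels
print(n_required(1.22, 0.05))   # -> 596, close to the quoted ~600 objects of ~4 voxels
```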

  20. CarbonSat: Quantification of random and systematic errors of column-averaged CO2 and methane retrievals

    NASA Astrophysics Data System (ADS)

    Buchwitz, M.; Reuter, M.; Schneising, O.; Heymann, J.; Bovensmann, H.; Burrows, J. P.

    2012-04-01

    The Carbon Monitoring Satellite (CarbonSat, http://www.iup.uni-bremen.de/carbonsat) was selected by ESA at the end of 2010 to be one of two Earth Explorer Opportunity candidate missions (Earth Explorer 8, EE-8) to be launched around 2019. The main goal of CarbonSat is to deliver improved information on carbon dioxide (CO2) and methane (CH4) surface sources (emissions) and sinks as needed for better climate prediction and other applications such as greenhouse gas emission monitoring. For mission optimization and data quality estimation, a retrieval algorithm called BESD is under development at the University of Bremen, Germany. BESD is being optimized to accurately retrieve column-averaged mole fractions of CO2 and CH4, XCO2 and XCH4, from the CarbonSat spectral observations. This algorithm is being used to quantify random and systematic XCO2 and XCH4 retrieval errors, e.g., due to thin cirrus, aerosols, and terrestrial vegetation chlorophyll fluorescence. The current status of this ongoing activity will be presented, focusing on XCO2 and XCH4 retrieval errors but also on the expected quality of interesting by-products such as vegetation chlorophyll fluorescence.

  1. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States)]; Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States)]; Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada)]; Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States)]; Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States)]; Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)]

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning. Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also of product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.
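
    For readers unfamiliar with the metric under discussion, a simplified 1-D global-gamma computation in the style of Low et al.; clinical tools work in 2-D/3-D with interpolation, and the criteria (3%/3 mm, or the stricter 2%/2 mm) enter through the two tolerances. The dose profiles below are invented:

```python
import numpy as np

# Simplified 1-D global-gamma sketch: for each reference point, gamma is the
# minimum over evaluated points of the combined distance-to-agreement and
# dose-difference metric; a point "passes" if gamma <= 1.

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dta_tol=3.0):
    """Gamma value at each reference point (global normalization)."""
    d_norm = dose_tol * d_ref.max()
    gammas = np.empty(x_ref.size)
    for i, (x, d) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - x) / dta_tol) ** 2
        dose2 = ((d_eval - d) / d_norm) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas

x = np.linspace(0.0, 100.0, 501)                 # position in mm
planned = np.exp(-((x - 50) / 18) ** 2)          # toy dose profile
measured = 1.02 * np.exp(-((x - 51) / 18) ** 2)  # 2% high, shifted 1 mm
g = gamma_1d(x, planned, x, measured)
print(f"3%/3 mm passing rate: {100 * np.mean(g <= 1):.1f}%")
```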

  2. RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION

    SciTech Connect

    GARDNER,C.J.; LEE,Y.Y.; WENG,W.T.

    1998-06-22

    The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

  3. On The So-Called "Huber Sandwich Estimator" and "Robust Standard Errors" David A. Freedman

    E-print Network

    Sekhon, Jasjeet S.

    On The So-Called "Huber Sandwich Estimator" and "Robust Standard Errors", by David A. Freedman. Abstract: The "Huber Sandwich Estimator" can be used to estimate the variance of the MLE when the underlying model is incorrect. If the model is nearly correct, so are the usual standard errors, and robustification is unlikely to help much. On the other hand, if the model is seriously in error, the sandwich may
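
    The construction is easiest to state for OLS with heteroskedastic errors (the HC0 form), which is what the sketch below shows; Freedman's discussion concerns the analogous bread-meat-bread form for general MLEs:

```python
import numpy as np

# Sandwich ("robust") variance sketch for OLS, HC0 flavor:
# V = (X'X)^-1 [sum_i e_i^2 x_i x_i'] (X'X)^-1, compared with the naive
# homoskedastic formula. The data-generating process is invented, with error
# variance that grows with x so the two estimates differ.

rng = np.random.default_rng(6)
n = 500
x = rng.uniform(0.0, 2.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5 * (1 + x))   # heteroskedastic noise

beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta
bread = np.linalg.inv(X.T @ X)
meat = X.T @ (X * e[:, None] ** 2)                   # sum of e_i^2 x_i x_i'
sandwich = bread @ meat @ bread                      # HC0 covariance
naive = bread * (e @ e) / (n - 2)                    # homoskedastic formula
print("robust SEs:", np.sqrt(np.diag(sandwich)))
print("naive  SEs:", np.sqrt(np.diag(naive)))
```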

  4. A case study to investigate sensitivity of reliability estimates to errors in operational profile

    Microsoft Academic Search

    Mei-Hwa Chen; Aditya P. Mathur; Vernon Rego

    1994-01-01

    We report a case study to investigate the effect of errors in an operational profile on reliability estimates. A previously reported tool named TERSE was used in this study to generate random flow graphs representing programs, model errors in operational profile, and compute reliability estimates. Four models for reliability estimation were considered: the Musa-Okumoto model, the Goel-Okumoto model, coverage enhanced

  5. Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation

    Microsoft Academic Search

    Bradley Efron

    1983-01-01

    We construct a prediction rule on the basis of some data, and then wish to estimate the error rate of this rule in classifying future observations. Cross-validation provides a nearly unbiased estimate, using only the original data. Cross-validation turns out to be related closely to the bootstrap estimate of the error rate. This article has two purposes: to understand better

  6. Error Analysis and Sampling Strategy Design for Using Fixed or Mobile Platforms to Estimate Ocean Flux

    Microsoft Academic Search

    Yanwu Zhang; James G. Bellingham; Yi Chao

    2010-01-01

    For estimating lateral flux in the ocean using fixed or mobile platforms, the authors present a method of analyzing the estimation error and designing the sampling strategy. When an array of moorings is used, spatial aliasing leads to an error in flux estimation. When an autonomous underwater vehicle (AUV) is run, measurements along its course are made at different

  7. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.

  8. Strategies for Assessing Diffusion Anisotropy on the Basis of Magnetic Resonance Images: Comparison of Systematic Errors

    PubMed Central

    Boujraf, Saïd

    2014-01-01

    Diffusion weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: on the one hand, the presence of signal noise can lead to artificial divergence of the diffusivities. On the other hand, the use of a simplified model for the interaction of the protons with the diffusion weighting and imaging field gradients (b-matrix calculation), common in the clinical setting, also leads to deviations in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state-of-the-art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging requires combining a correct calculation of the b-matrix and a sufficiently large signal-to-noise ratio. PMID:24761372
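
    The first error source, noise-induced divergence of the diffusivities, is easy to demonstrate numerically. The sketch below uses an invented isotropic tensor and noise level; sorting the eigenvalues makes a perfectly isotropic phantom appear anisotropic:

```python
import numpy as np

# Numerical illustration of eigenvalue repulsion: add symmetric noise to an
# isotropic diffusion tensor and compute the fractional anisotropy (FA),
# FA = sqrt(1.5 * sum((ev - mean)^2) / sum(ev^2)). True FA is zero here, so
# any positive mean FA is a pure noise artifact.

rng = np.random.default_rng(7)
D_true = np.eye(3) * 1.0e-3                    # isotropic tensor, mm^2/s
fa_vals = []
for _ in range(5000):
    noise = rng.normal(0.0, 5e-5, (3, 3))
    D = D_true + (noise + noise.T) / 2         # symmetric noisy tensor
    ev = np.linalg.eigvalsh(D)
    fa = np.sqrt(1.5 * np.sum((ev - ev.mean()) ** 2) / np.sum(ev**2))
    fa_vals.append(fa)
print(f"mean FA of a perfectly isotropic phantom: {np.mean(fa_vals):.3f}")
```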

  9. Calibration of geometrical systematic error in high-precision spherical surface measurement

    NASA Astrophysics Data System (ADS)

    Wang, Daodang; Yang, Yongying; Chen, Chen; Zhuo, Yongmo

    2011-08-01

    Geometric aberrations in an interferometric testing system can significantly influence the measurement results when testing high-numerical-aperture spherical surfaces, in which case obvious high-order aberrations introduced by wavefront defocus can be observed that cannot be removed with the traditional calibration method. A technique based on a rigorous model for the analysis of geometric aberrations introduced by wavefront tilt and defocus is presented for the calibration of the corresponding geometrical systematic error. The calibration method can be carried out either with or without prior knowledge of the spherical surface under test. The feasibility of the proposed method has been demonstrated by computer simulation, and a residual error of less than 0.001λ is obtained. Experimental validation is carried out by testing a high-numerical-aperture spherical surface with the ZYGO interferometer, and an RMS accuracy of about 0.003λ is achieved with the proposed calibration technique. The effect of geometric aberrations on the measurement is discussed in detail. The proposed calibration method provides a feasible way to lower the requirement on the adjusting precision of the mechanical device, and is of great practicality for the high-precision measurement of high-numerical-aperture spherical surfaces.

  10. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. of error = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. of error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.

  11. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    SciTech Connect

    Ju, Lili [University of South Carolina]; Tian, Li [University of South Carolina]; Wang, Desheng [Nanyang Technological University]

    2009-01-01

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection-diffusion-reaction equations defined on surfaces in R^3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  12. Speech enhancement using a minimum mean-square error log-spectral amplitude estimator

    Microsoft Academic Search

    Y. Ephraim; D. Malah

    1985-01-01

    In this correspondence we derive a short-time spectral amplitude (STSA) estimator for speech signals which minimizes the mean-square error of the log-spectra (i.e., the original STSA and its estimator) and examine it in enhancing noisy speech. This estimator is also compared with the corresponding minimum mean-square error STSA estimator derived previously. It was found that the new estimator is very
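
    The resulting log-spectral amplitude (LSA) gain has the well-known closed form G(xi, gamma) = xi/(1+xi) * exp(E1(v)/2) with v = xi*gamma/(1+xi), where xi and gamma are the a priori and a posteriori SNRs and E1 is the exponential integral. A stand-alone sketch of the gain curve; a real enhancer would embed this in an STFT loop with decision-directed xi estimation:

```python
import numpy as np
from scipy.special import exp1

# MMSE log-spectral amplitude (LSA) gain, Ephraim & Malah (1985):
#   G = xi/(1+xi) * exp(0.5 * E1(v)),  v = xi*gamma/(1+xi),
# where xi is the a priori SNR and gamma the a posteriori SNR.

def lsa_gain(xi, gamma):
    v = xi * gamma / (1.0 + xi)
    return xi / (1.0 + xi) * np.exp(0.5 * exp1(v))

# gain values at a few a priori SNRs, taking gamma = xi + 1 (its mean value)
for snr_db in (-5, 0, 5, 10):
    xi = 10 ** (snr_db / 10)
    print(f"a priori SNR {snr_db:+3d} dB -> gain {lsa_gain(xi, xi + 1):.3f}")
```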

  13. Estimating spatial and parameter error in parameterized nonlinear reaction-diffusion equations.

    SciTech Connect

    Carey, Graham F. (University of Texas at Austin, Austin TX); Carnes, Brian R. (University of Texas at Austin, Austin TX)

    2005-05-01

    A new approach is proposed for the a posteriori error estimation of both global spatial and parameter error in parameterized nonlinear reaction-diffusion problems. The technique is based on linear equations relating the linearized spatial and parameter error to the weak residual. Computable local element error indicators are derived for local contributions to the global spatial and parameter error, along with corresponding global error indicators. The effectiveness of the error indicators is demonstrated using model problems for the case of regular points and simple turning points. In addition, a new turning point predictor and adaptive algorithm for accurately computing turning points are introduced.

  14. Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors

    NASA Astrophysics Data System (ADS)

    Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.

    2013-05-01

    GNSS Radio Occultation (RO) data are very well suited for climate applications, since they do not require external calibration and only short-term measurement stability over the occultation event duration (1-2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without the need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05 % and 0.05 K in almost any case (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already allows us to see statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, like the influence of the quality control and the high altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity with atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are not negligible anymore. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the night time bending angle bias stays constant over the whole period of 11 years, while the day time bias increases from low to high solar activity. As a result, the difference between night and day time bias increases from -0.05 μrad to -0.4 μrad. This behavior paves the way to correct the (small) solar cycle dependent bias of large ensembles of day time RO profiles.

  15. An estimate of asthma prevalence in Africa: a systematic analysis

    PubMed Central

    Adeloye, Davies; Chan, Kit Yee; Rudan, Igor; Campbell, Harry

    2013-01-01

    Aim: To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to provide information that will help inform the planning of the public health response to the disease. Methods: We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year. Results: Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers. Conclusion: Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be under-estimated. There is a need for national governments in Africa to consider the implications of this increasing disease burden and to investigate the relative importance of underlying risk factors such as rising urbanization and population aging in their policy and health planning responses to this challenge. PMID:24382846
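
    The case counts above follow from applying the weighted mean prevalence to the UN population figures. A one-line arithmetic sketch, with an assumed approximate 1990 population:

        # Cases = weighted mean prevalence x population (illustrative figures).
        prevalence_total_1990 = 0.117      # 11.7% prevalence, total population
        population_africa_1990 = 636e6     # assumed approximate UN figure

        cases_1990 = prevalence_total_1990 * population_africa_1990
        print(f"{cases_1990 / 1e6:.1f} million asthma cases")  # ~74.4 million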

  16. A quantitative analysis of grid-related systematic errors in oxidising capacity and ozone production rates in chemistry transport models

    Microsoft Academic Search

    J. G. Esler; G. J. Roelofs; M. O. Kohler; F. M. O’Connor

    2004-01-01

    Limited resolution in chemistry transport models (CTMs) is necessarily associated with systematic errors in the calculated chemistry, due to the artificial mixing of species on the scale of the model grid (grid-averaging). Here, the errors in calculated hydroxyl radical (OH) concentrations and ozone production rates P(O3) are investigated quantitatively using both direct observations and model results. Photochemical

  17. Evaluating concentration estimation errors in ELISA microarray experiments

    SciTech Connect

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.

    2005-01-26

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
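
    Propagation of error maps the uncertainty in the fitted standard curve and in the observed response into uncertainty in the predicted concentration. A minimal delta-method sketch for an inverted linear standard curve y = a + b*x (the paper's curves may well be nonlinear; all numeric values are hypothetical placeholders):

        import numpy as np

        a, b = 0.05, 0.80                        # fitted intercept and slope
        var_a, var_b, cov_ab = 1e-4, 4e-4, -5e-5 # from the fit covariance matrix
        y, var_y = 0.65, 2.5e-4                  # observed response and variance

        x_hat = (y - a) / b                      # predicted concentration
        # Partial derivatives of x_hat with respect to y, a and b:
        dx_dy, dx_da, dx_db = 1/b, -1/b, -(y - a) / b**2
        var_x = (dx_dy**2 * var_y + dx_da**2 * var_a
                 + dx_db**2 * var_b + 2 * dx_da * dx_db * cov_ab)
        print(f"concentration = {x_hat:.3f} +/- {np.sqrt(var_x):.3f}")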

  18. A measurement of the systematic astrometric error in GeMS and the short-term astrometric precision in ShaneAO

    NASA Astrophysics Data System (ADS)

    Ammons, S. M.; Neichel, Benoit; Lu, Jessica; Gavel, Donald T.; Srinath, Srikar; McGurk, Rosalie; Rudy, Alex; Rockosi, Connie; Marois, Christian; Macintosh, Bruce; Savransky, Dmitry; Galicher, Raphael; Bendek, Eduardo; Guyon, Olivier; Marin, Eduardo; Garrel, Vincent; Sivo, Gaetano

    2014-08-01

    We measure the long-term systematic component of the astrometric error in the GeMS MCAO system as a function of field radius and Ks magnitude. The experiment uses two epochs of observations of NGC 1851 separated by one month. The systematic component is estimated for each of three field of view cases (15'' radius, 30'' radius, and full field) and each of three distortion correction schemes: 8 DOF/chip + local distortion correction (LDC), 8 DOF/chip with no LDC, and 4 DOF/chip with no LDC. For bright, unsaturated stars with 13 < Ks < 16, the systematic component is < 0.2, 0.3, and 0.4 mas, respectively, for the 15'' radius, 30'' radius, and full field cases, provided that an 8 DOF/chip distortion correction with LDC (for the full-field case) is used to correct distortions. An 8 DOF/chip distortion-correction model always outperforms a 4 DOF/chip model, at all field positions and magnitudes and for all field-of-view cases, indicating the presence of high-order distortion changes. Given the order of the models needed to correct these distortions (~8 DOF/chip or 32 degrees of freedom total), it is expected that at least 25 stars per square arcminute would be needed to keep systematic errors at less than 0.3 milliarcseconds for multi-year programs. We also estimate the short-term astrometric precision of the newly upgraded Shane AO system with undithered M92 observations. Using a 6-parameter linear transformation to register images, the system delivers ~0.3 mas astrometric error over short-term observations of 2-3 minutes.
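
    The registration step quoted for the ShaneAO data uses a 6-parameter linear (affine) transformation, which can be fitted by least squares. A sketch with synthetic star positions (all values are stand-ins, not GeMS or ShaneAO measurements):

        import numpy as np

        rng = np.random.default_rng(0)
        xr = rng.uniform(0, 2048, 50)               # reference-frame positions
        yr = rng.uniform(0, 2048, 50)
        theta, s, dx, dy = 1e-4, 1.0002, 3.0, -2.0  # small rotation/scale/shift
        xm = s*(np.cos(theta)*xr - np.sin(theta)*yr) + dx + rng.normal(0, 0.05, 50)
        ym = s*(np.sin(theta)*xr + np.cos(theta)*yr) + dy + rng.normal(0, 0.05, 50)

        # Solve x_ref = a*xm + b*ym + c and y_ref = d*xm + e*ym + f.
        A = np.column_stack([xm, ym, np.ones_like(xm)])
        px, *_ = np.linalg.lstsq(A, xr, rcond=None)
        py, *_ = np.linalg.lstsq(A, yr, rcond=None)
        resid = np.hypot(A @ px - xr, A @ py - yr)
        print(f"rms residual = {np.sqrt(np.mean(resid**2)):.4f} pixels")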

  19. Estimation of the Phase-Error Function for Autofocussing of SAR Raw Data

    E-print Network

    The estimation of the phase-error function of SAR raw data is described, which exploits reflectors consisting of several neighbouring point targets ... the signal of the point target. The entire phase-error function of a SAR image strip is then constructed from ...

  20. Position and speed sensorless control for PMSM drive using direct position error estimation

    Microsoft Academic Search

    Kiyoshi Sakamoto; Yoshitaka Iwaji; Tsunehiro Endo; T. Takakura

    2001-01-01

    A new position and speed sensorless control approach is proposed for permanent magnet synchronous motor (PMSM) drives. The controller directly computes an error for the estimated rotor position and adjusts the speed according to this error. The derivation of the position error equation and an idea for eliminating the differential terms, are presented. The proposed approach is applied to a

  1. Estimating Equating Error in Observed-Score Equating. Research Report.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and in the population of examinees. This definition underlies, for example, the well-known approximation to the standard error of equating by Lord (1982).…

  2. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function, and in the MvOI, salinity, zonal and meridional velocities, as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme, two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when multivariate correction is used, as evident from the analyses of the rms differences of these fields and independent observations. The MvOI assimilation is found to improve upon the control run in generating the water masses with properties close to the observed, while the UOI failed to maintain the temperature and salinity structure.
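
    The cross-covariances used by the MvOI can be estimated directly from ensemble anomalies. A sketch with a synthetic two-field state vector standing in for temperature (T) and salinity (S); all shapes and values are assumptions:

        import numpy as np

        n_ens, n = 40, 500                    # ensemble members, grid points
        rng = np.random.default_rng(1)
        T = rng.normal(size=(n_ens, n))       # stand-in for perturbed model runs
        S = 0.6 * T + rng.normal(scale=0.8, size=(n_ens, n))  # T-S correlated

        state = np.hstack([T, S])             # state vector for each member
        anom = state - state.mean(axis=0)     # ensemble anomalies
        P = anom.T @ anom / (n_ens - 1)       # full (2n x 2n) covariance matrix
        P_TS = P[:n, n:]                      # T-S cross-covariance block
        print("mean diagonal T-S cross-covariance:", P_TS.diagonal().mean())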

  3. Monte Carlo analysis of inaccuracies in estimated aircraft parameters caused by unmodeled flight instrumentation errors

    NASA Technical Reports Server (NTRS)

    Hodge, W. F.; Bryant, W. H.

    1975-01-01

    An output error estimation algorithm was used to evaluate the effects of both static and dynamic instrumentation errors on the estimation of aircraft stability and control parameters. A Monte Carlo error analysis, using simulated cruise flight data, was performed for a high-performance military aircraft, a large commercial transport, and a small general aviation aircraft. The results indicate that unmodeled instrumentation errors can cause inaccuracies in the estimated parameters which are comparable to their nominal values. However, the corresponding perturbations to the estimated output response trajectories and characteristic equation pole locations appear to be relatively small. Control input errors and dynamic lags were found to be the most significant of the error sources evaluated.

  4. Reliable random error estimation in the measurement of line-strength indices

    E-print Network

    N. Cardiel; J. Gorgas; J. Cenarro; J. J. Gonzalez

    1997-06-12

    We present a new set of accurate formulae for the computation of random errors in the measurement of atomic and molecular indices. The new expressions are in excellent agreement with numerical simulations. We have found that, in some cases, the use of approximated equations can give misleading line-strength index errors. It is important to note that accurate errors can only be achieved after a full control of the error propagation throughout the data reduction with a parallel processing of data and error frames. Finally, simple recipes for the estimation of the required signal-to-noise ratio to achieve a fixed index error are presented.

  6. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    Microsoft Academic Search

    Raymond J. Carroll; Xiaohong Chen; Yingyao Hu

    2010-01-01

    This paper considers identification and estimation of a general nonlinear errors-in-variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values, and neither sample contains an accurate measurement of the corresponding true variable.

  7. Estimation of parameter errors in inverse problems. Determining limb-darkening coefficients in classical eclipsing systems

    NASA Astrophysics Data System (ADS)

    Abubekerov, M. K.; Gostev, N. Yu.; Cherepashchuk, A. M.

    2009-08-01

    The estimation of parameters and their errors is considered using the observed light curve of the eclipsing binary system YZ Cas as an example. The error intervals are calculated using the differential-correction and confidence-region methods. The error intervals and reliability of the methods are investigated, and the reliability of limb-darkening coefficients derived from the observed light curve analyzed. A new method for calculating parameter errors is proposed.

  8. Goal-oriented error estimation based on equilibrated-flux reconstruction for finite element

    E-print Network

    Paris-Sud XI, Université de

    We propose an approach for goal-oriented error estimation in finite element methods based on equilibrated-flux reconstruction, covering the cases where a conforming finite element method, a dG method, or a mixed Raviart-Thomas method is used ...

  9. Capacity of MIMO Systems: Impact of Spatial Correlation with Channel Estimation Errors

    E-print Network

    Zhou, Xiangyun "Sean"

    On the ergodic capacity of multiple-input multiple-output (MIMO) systems with channel estimation errors in block fading ... maximizes an ergodic capacity lower bound. For correlated MIMO systems, the authors in [4] studied ...

  10. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  11. Cross-Validation, the Jackknife, and the Bootstrap: Excess Error Estimation in Forward Logistic Regression

    Microsoft Academic Search

    Gail Gong

    1986-01-01

    Given a prediction rule based on a set of patients, what is the probability of incorrectly predicting the outcome of a new patient? Call this probability the true error. An optimistic estimate is the apparent error, or the proportion of incorrect predictions on the original set of patients, and it is the goal of this article to study estimates of

  12. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    E-print Network

    Conway, Courtney J.

    We document the error associated with estimating distance to calling birds in a wetland ecosystem. We used two approaches to estimate the error associated with surveyors' distance estimates ... point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period ...

  13. Error estimation in quantities of interest by locally equilibrated superconvergent patch recovery

    E-print Network

    Paris-Sud XI, Université de

    Goal-oriented error estimates (GOEE) in quantities of interest (e.g. the stress intensity factor for fracture problems, ...) are one of the key unsolved problems of advanced engineering applications in the aerospace industry. Residual-based error estimators can be used to estimate errors ...

  14. Error Analysis for Silhouette--Based 3D Shape Estimation from Multiple Views

    E-print Network

    This paper presents an error analysis for 3D shape estimation techniques using silhouettes from multiple views ("shape-from-silhouette"). The results of this analysis are useful for the integration ...

  15. Estimation of errors in the fault analysis of six phase transmission lines using transposed models

    SciTech Connect

    Mishra, A.K.; Chandrasekaran, A. [Tennessee Technological Univ., Cookeville, TN (United States). Dept. of Electrical Engineering]; Venkata, S.S. [Univ. of Washington, Seattle, WA (United States). Dept. of Electrical Engineering]

    1995-07-01

    Fault analysis of power systems represented by transposed line models result in errors that are less tolerable in high phase order systems than in 3-phase systems. Concepts of vector and matrix norm theory are applied to estimate such errors in 6-phase systems without actually making exhaustive fault studies. Error estimates are compared to actual values in typical 3-phase and 6-phase systems.

  16. Submitted to Math. Comp. ON THE ERROR ESTIMATES FOR THE ...

    E-print Network

    2002-02-11

    We also analyze a singular perturbation of the Navier–Stokes equations ... Since for projection methods the treatment of the nonlinear term does not contribute ... Let us first write the equations that control the time increments of the errors. ...

  17. On the error estimates for the rotational pressure-correction ...

    E-print Network

    2004-06-11

    We also analyze a singular perturbation of the Navier-Stokes equations ... Since for projection methods the treatment of the nonlinear term does ... Let us first write the equations that control the time increments of the errors. ...

  18. Comparison of weak lensing by NFW and Einasto halos and systematic errors

    E-print Network

    Sereno, Mauro; Moscardini, Lauro

    2015-01-01

    Recent $N$-body simulations have shown that Einasto radial profiles provide the most accurate description of dark matter halos. Predictions based on the traditional NFW functional form may fail to describe the structural properties of cosmic objects at the percent level required by precision cosmology. We computed the systematic errors expected for weak lensing analyses of clusters of galaxies if one wrongly models the lens properties. Even though the NFW fits of observed tangential shear profiles can be excellent, virial masses and concentrations of very massive halos ($>\sim10^{15}M_\odot/h$) can be over- and underestimated by $\sim 10$ per cent, respectively. Misfitting effects also steepen the observed mass-concentration relation, in a way similar to that seen in multiwavelength observations of galaxy groups and clusters. Einasto lenses can be distinguished from NFW halos either with deep observations of very massive structures ($>\sim10^{15}M_\odot/h$) or by stacking the shear profiles of thousands of groups...

  19. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  20. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth orbiting satellites, such as microwave imagers and dual-wavelength radars like those of the Global Precipitation Measurement (GPM) mission.

  1. Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric

    E-print Network

    Heaton, Thomas H.

    In airborne laser swath mapping (ALSM), also known as airborne LIDAR, vectors measured by a scanning laser ranging instrument are added to the position of the aircraft. The primary sources of error are scanning pointing errors, range errors, aircraft orientation errors ...

  2. DETECTABILITY AND ERROR ESTIMATION IN ORBITAL FITS OF RESONANT EXTRASOLAR PLANETS

    SciTech Connect

    Giuppone, C. A.; Beauge, C. [Observatorio Astronomico, Universidad Nacional de Cordoba, Cordoba (Argentina); Tadeu dos Santos, M.; Ferraz-Mello, S.; Michtchenko, T. A. [Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Sao Paulo (Brazil)

    2009-07-10

    We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than ~4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist, but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and number of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.

  3. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    PubMed

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems that suffer from disturbances, an original output feedback sliding mode control method is presented based on a novel tracking error observer with disturbance estimator. The mathematical models of the systems are not required to be of high accuracy, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a creative tracking error observer is produced. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable and the closed-loop system is robustly stable. The simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effectiveness of the proposed method. PMID:24795033

  4. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
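
    The F-error idea above can be approximated by differencing two spline fits with different mesh sizes. A sketch using SciPy's least-squares splines; the signal, noise level and knot counts are illustrative, not the paper's:

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        t = np.linspace(0, 1, 400)
        rng = np.random.default_rng(2)
        data = np.sin(2 * np.pi * 3 * t) + rng.normal(scale=0.05, size=t.size)

        def spline_fit(n_knots):
            knots = np.linspace(0, 1, n_knots + 2)[1:-1]   # interior knots
            return LSQUnivariateSpline(t, data, knots, k=3)(t)

        coarse, fine = spline_fit(6), spline_fit(12)
        f_error = coarse - fine    # proxy for the F-error of the coarse fit
        print("max |F-error| estimate:", np.abs(f_error).max())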

  5. Accounting for uncertainty in systematic bias in exposure estimates used in relative risk regression

    SciTech Connect

    Gilbert, E.S.

    1995-12-01

    In many epidemiologic studies addressing exposure-response relationships, sources of error that lead to systematic bias in exposure measurements are known to be present, but there is uncertainty in the magnitude and nature of the bias. Two approaches that allow this uncertainty to be reflected in confidence limits and other statistical inferences were developed, and are applicable to both cohort and case-control studies. The first approach is based on a numerical approximation to the likelihood ratio statistic, and the second uses computer simulations based on the score statistic. These approaches were applied to data from a cohort study of workers at the Hanford site (1944-86) exposed occupationally to external radiation; to combined data on workers exposed at Hanford, Oak Ridge National Laboratory, and Rocky Flats Weapons plant; and to artificial data sets created to examine the effects of varying sample size and the magnitude of the risk estimate. For the worker data, sampling uncertainty dominated and accounting for uncertainty in systematic bias did not greatly modify confidence limits. However, with increased sample size, accounting for these uncertainties became more important, and is recommended when there is interest in comparing or combining results from different studies.

  6. Trace residue analysis: Chemometric estimations of sampling, amount, and error

    SciTech Connect

    Kurtz, D.A.

    1985-01-01

    This book explains the use of chemometrics to analyze trace residue of pesticides and environmental contaminants. Discusses the use of statistical methods to determine calibration graphs and their error bounds. Also looks at the limits of detection and at the selection and number of samples. Defines the computer methods used to sort, classify, display, and interpret data.

  7. Optimal error estimates of finite difference methods for the Gross ...

    E-print Network

    Weizhu Bao; Yongyong Cai

    2012-10-11

    ... the key technique in the analysis for the SIFD method is to use the ... For the analysis of the splitting error of the time-splitting or split-step ... and F. D. Tappert, Applications of the split-step Fourier method to the nu...

  8. Improved estimates of the range of errors on photomasks using measured values of skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Hamaker, Henry Chris

    1995-12-01

    Statistical process control (SPC) techniques often use six times the standard deviation σ to estimate the range of errors within a process. Two assumptions are inherent in this choice of metric for the range: (1) the normal distribution adequately describes the errors, and (2) the fraction of errors falling within ±3σ, about 99.73%, is sufficiently large that we may consider the fraction occurring outside this range to be negligible. In state-of-the-art photomasks, however, the assumption of normality frequently breaks down, and consequently ±3σ is not a good estimate of the range of errors. In this study, we show that improved estimates for the effective maximum error Em, which is defined as the value for which 99.73% of all errors fall within ±Em of the mean μ, may be obtained by quantifying the deviation from normality of the error distributions using the skewness and kurtosis of the error sampling. Data are presented indicating that in laser reticle-writing tools, Em ≤ 3σ. We also extend this technique for estimating the range of errors to specifications that are usually described by μ ± 3σ. The implications for SPC are examined.
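
    The point can be illustrated by comparing 3σ with the empirical effective maximum error Em (the half-range containing the central 99.73% of errors) for a skewed sample. A sketch using an illustrative skew-normal distribution, not photomask data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        errors = stats.skewnorm.rvs(a=4, size=100_000, random_state=rng)
        errors -= errors.mean()

        print("skewness:", stats.skew(errors))
        print("excess kurtosis:", stats.kurtosis(errors))
        sigma = errors.std(ddof=1)
        lo, hi = np.percentile(errors, [0.135, 99.865])  # central 99.73%
        Em = max(abs(lo), abs(hi))
        print(f"3*sigma = {3 * sigma:.3f}, Em = {Em:.3f}")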

  9. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed and techniques for reducing or eliminating this distortion are described.

  10. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

  11. LiDAR error estimation with WAsP engineering

    NASA Astrophysics Data System (ADS)

    Bingöl, F.; Mann, J.; Foussekis, D.

    2008-05-01

    The LiDAR measurements of the vertical wind profile, at any height between 10 and 150 m, are based on the assumption that the measured wind is the product of a homogeneous flow. In reality there are many factors affecting the wind at each measurement point, among which the terrain plays the main role. To model LiDAR measurements and predict the possible error in different wind directions for a certain terrain, we have analyzed two experimental data sets from Greece. At both sites LiDAR and met mast data have been collected and the same conditions are simulated with the Risø/DTU software WAsP Engineering 2.0. Finally, measurement data are compared with the model results. The model results are acceptable and very close for one site, while the more complex one returns higher errors at higher positions and in some wind directions.

  12. Estimating extreme flood events - assumptions, uncertainty and error

    NASA Astrophysics Data System (ADS)

    Franks, S. W.; White, C. J.; Gensen, M.

    2015-06-01

    Hydrological extremes are amongst the most devastating forms of natural disasters both in terms of lives lost and socio-economic impacts. There is consequently an imperative to robustly estimate the frequency and magnitude of hydrological extremes. Traditionally, engineers have employed purely statistical approaches to the estimation of flood risk. For example, for an observed hydrological timeseries, each annual maximum flood is extracted and a frequency distribution is fit to these data. The fitted distribution is then extrapolated to provide an estimate of the required design risk (i.e. the 1% Annual Exceedance Probability - AEP). Such traditional approaches are overly simplistic in that risk is implicitly assumed to be static, in other words, that climatological processes are assumed to be randomly distributed in time. In this study, flood risk estimates are evaluated with regards to traditional statistical approaches as well as Pacific Decadal Oscillation (PDO)/El Niño-Southern Oscillation (ENSO) conditional estimates for a flood-prone catchment in eastern Australia. A paleo-reconstruction of pre-instrumental PDO/ENSO occurrence is then employed to estimate uncertainty associated with the estimation of the 1% AEP flood. The results indicate a significant underestimation of the uncertainty associated with extreme flood events when employing the traditional engineering estimates.
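
    The traditional approach criticized here fits a frequency distribution to annual maxima and extrapolates to the 1% AEP. A sketch of that procedure with a Gumbel fit on synthetic annual maximum flows (all numbers are placeholders):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(4)
        annual_max = stats.gumbel_r.rvs(loc=500, scale=150, size=60,
                                        random_state=rng)  # m^3/s, synthetic

        loc, scale = stats.gumbel_r.fit(annual_max)
        q100 = stats.gumbel_r.ppf(1 - 0.01, loc=loc, scale=scale)
        print(f"1% AEP design flood ~ {q100:.0f} m^3/s")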

  13. Rigorous bounding of position error estimates for aircraft surface movement

    Microsoft Academic Search

    K. O'Brien; J. Rife

    2010-01-01

    NextGen will require new navigation and surveillance capabilities to support safe and efficient surface operations based on tightly-coordinated 4D trajectories. In developing these new technologies, such as Automatic Dependent Surveillance-Broadcast (ADS-B) and the Ground Based Augmentation System (GBAS), it is essential to remember that all sensing technologies are prone to rare but potentially hazardous errors. Accordingly, the development of new

  14. Dust-Induced Systematic Errors in Ultraviolet-Derived Star Formation Rates

    E-print Network

    Eric F. Bell

    2002-05-24

    Rest-frame far-ultraviolet (FUV) luminosities form the `backbone' of our understanding of star formation at all cosmic epochs. These luminosities are typically corrected for dust by assuming that the tight relationship between the UV spectral slopes and the FUV attenuations of starburst galaxies applies for all star-forming galaxies. Data from seven independent UV experiments demonstrates that quiescent, `normal' star-forming galaxies deviate substantially from the starburst galaxy spectral slope-attenuation correlation, in the sense that normal galaxies are redder than starbursts. Spatially resolved data for the Large Magellanic Cloud suggests that dust geometry and properties, coupled with a small contribution from older stellar populations, cause deviations from the starburst galaxy spectral slope-attenuation correlation. Folding in data for starbursts and ultra-luminous infrared galaxies, it is clear that neither rest-frame UV-optical colors nor UV/H-alpha significantly help in constraining the UV attenuation. These results argue that the estimation of SF rates from rest-frame UV and optical data alone is subject to large (factors of at least a few) systematic uncertainties because of dust, which cannot be reliably corrected for using only UV/optical diagnostics.

  15. CALIBRATION OF AND ATTITUDE ERROR ESTIMATION FOR A SPACEBORNE SCATTEROMETER USING MEASUREMENTS OVER LAND

    E-print Network

    Long, David G.

    for calibra- tion validation of an on-line spaceborne radar system. The technique is extended to estimateCALIBRATION OF AND ATTITUDE ERROR ESTIMATION FOR A SPACEBORNE SCATTEROMETER USING MEASUREMENTS OVER was to measure radar backscatter over the world's oceans. These measurements are used to generate estimates

  16. Systematic and large temperature errors in a dynamic downscaling of atmospheric flow

    NASA Astrophysics Data System (ADS)

    Massad, Andréa; Ólafsson, Haraldur; Nína Petersen, Guðrún; Ágústsson, Hálfdán; Rögnvaldsson, Ólafur

    2015-04-01

    Years of atmospheric flow over Iceland have been simulated with the WRF model, using boundaries from the ECMWF. In general, the flow is well reproduced, but there are still errors. By comparison with a multitude of observations, the largest errors have been analysed in terms of the physical or numerical processes that appear to go wrong. Many of the largest temperature errors are associated with an incorrect representation of the surface of the earth during the snow melting season. Another characteristic of large errors is the presence of misplaced large horizontal temperature gradients in coastal areas. Wrong vertical mixing gave surprisingly few large errors. There are some errors due to incorrect timing of incoming weather systems at the boundaries, but no large errors can be traced to wrongly reproduced temperature of airmasses advected into the area.

  17. Approximate estimates of limiting errors of passive wireless SAW sensing with DPM.

    PubMed

    Shmaliy, Yuriy S; Ibarra-Manzano, Oscar; Andrade-Lucio, Jose; Rojas-Laguna, Roberto

    2005-10-01

    This paper discusses approximate statistical estimates of limiting errors associated with single differential phase measurement of a time delay (phase difference) between two reflectors of the passive surface acoustic wave (SAW) sensor. The remote wireless measurement is provided at the ideal coherent receiver using the maximum likelihood function approach. Approximate estimates of the mean error, mean square error, estimate variance, and Cramér-Rao bound are derived along with the error probability to exceed a threshold in a wide range of signal-to-noise ratio (SNR) values. The von Mises/Tikhonov distribution is used as an approximation for the phase difference and differential phase diversity. Simulation of the random phase difference and limiting errors also is applied. PMID:16382631
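
    The role of the von Mises (Tikhonov) distribution can be explored by simulating phase-difference errors whose concentration grows with SNR. A sketch; the kappa-to-SNR mapping below is an assumption, not the paper's model:

        import numpy as np

        rng = np.random.default_rng(5)
        for snr_db in (0, 10, 20):
            kappa = 10 ** (snr_db / 10)      # assumed concentration ~ SNR
            phase_err = rng.vonmises(mu=0.0, kappa=kappa, size=100_000)
            print(f"SNR {snr_db:2d} dB: "
                  f"std of simulated phase error = {phase_err.std():.4f} rad")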

  18. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  19. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  20. Chemometric investigation of systematic error in the analysis of biological materials by flame and electrothermal atomic absorption spectrometry

    Microsoft Academic Search

    Antonio Moreda-Piñeiro; Pilar Bermejo-Barrera; Adela Bermejo-Barrera

    2006-01-01

    Systematic errors observed when using flame atomic absorption spectrometry (FAAS) and electrothermal atomic absorption spectrometry (ETAAS) for the analysis of biological solid materials (seafood products) were evaluated. The sample pre-treatment method (microwave-assisted acid digestion, ultrasound-assisted acid leaching and slurry sampling) and the number of times that a certain pre-treatment process is repeated were the two factors evaluated.

  1. Design of a steam generator water level controller via the estimation of the flow errors

    Microsoft Academic Search

    Man Gyun Na

    1995-01-01

    The problem of the water level control of the steam generator makes the plant more vulnerable to high and low level trips. In particular, the swell and shrink phenomena and the flow errors at low power levels make the level control of steam generators difficult. In this work, the flow errors are regarded as time-varying parameters and estimated by the

  2. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    ERIC Educational Resources Information Center

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  3. Towards an ocean salinity error budget estimation within the SMOS mission

    Microsoft Academic Search

    Roberto Sabia; Adriano Camps; Mercè Vall-llossera; Marco Talone

    2007-01-01

    The SMOS (Soil Moisture and Ocean Salinity) mission will provide from 2008 onwards global sea surface salinity estimations over the oceans. This work summarizes several insights gathered in the framework of salinity retrieval studies, aimed to address an overall salinity error budget. The paper covers issues ranging from the impact of auxiliary data on SSS error to the potential exploitation

  4. Estimating the approximation error when fixing unessential factors in global sensitivity analysis

    Microsoft Academic Search

    I. M. Sobol’; S. Tarantola; D. Gatelli; S. S. Kucherenko; W. Mauntz

    2007-01-01

    One of the major settings of global sensitivity analysis is that of fixing non-influential factors, in order to reduce the dimensionality of a model. However, this is often done without knowing the magnitude of the approximation error being produced. This paper presents a new theorem for the estimation of the average approximation error generated when fixing a group of non-influential

  5. Identification and estimation of nonlinear models with misclassification error using instrumental variables: A general solution

    Microsoft Academic Search

    Yingyao Hu

    2008-01-01

    This paper provides a general solution to the problem of identification and estimation of nonlinear models with misclassification error in a general discrete explanatory variable using instrumental variables. The misclassification error is allowed to be correlated with all the explanatory variables in the model. It is not enough to identify the model by simply generalizing the identification in the binary

  6. RELATING ERROR BOUNDS FOR MAXIMUM CONCENTRATION ESTIMATES TO DIFFUSION METEOROLOGY UNCERTAINTY (JOURNAL VERSION)

    EPA Science Inventory

    The paper relates the magnitude of the error bounds of data, used as inputs to a Gaussian dispersion model, to the magnitude of the error bounds of the model output. The research addresses the uncertainty in estimating the maximum concentrations from elevated buoyant sources duri...

  7. Use of atmospheric emission to estimate refractive errors in a non-horizontally stratified troposphere

    Microsoft Academic Search

    M. A. Gallop; L. E. Telford

    1975-01-01

    Tropospheric refraction introduces errors into radar and radio communication systems by causing radio waves to travel along a curved path and at a speed which changes with position. Common error correction techniques, such as making estimates of refractive effects from surface refractivity, rely implicitly on the assumption that the troposphere is horizontally stratified. This study demonstrates that such an assumption

  8. MODIS Cloud Optical Property Retrieval Uncertainties Derived from Pixel-Level Radiometric Error Estimates

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong

    2011-01-01

    MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters due to pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.

  9. Refined error estimates for matrix-valued radial basis functions 

    E-print Network

    Fuselier, Edward J., Jr.

    2007-09-17

    ... characterization of the native space, derive improved stability estimates for the interpolation matrix, and give divergence-free interpolation and approximation results for band-limited functions. Furthermore, we introduce a new class of matrix-valued RBFs that can...

  10. Estimating True Score in the Compound Binomial Error Model

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1978-01-01

    Several Bayesian approaches to the simultaneous estimation of the means of k binomial populations are discussed. This has particular applicability to criterion-referenced or mastery testing. (Author/JKS)

  11. Nonlinear bounded-error target state estimation using redundant states

    Microsoft Academic Search

    James Anthony Covello

    2006-01-01

    When the primary measurement sensor is passive in nature---by which we mean that it does not directly measure range or range rate---there are well-documented challenges for target state estimation. Most estimation schemes rely on variations of the Extended Kalman Filter (EKF), which, in certain situations, suffer from divergence and/or covariance collapse. For this and other reasons, we believe that the

  12. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    E-print Network

    Locatelli, R.

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...

  13. An improved error estimate for reduced-order models of discrete-time systems

    Microsoft Academic Search

    D. Hinrichsen; A. J. Pritchard

    1990-01-01

    The authors derive, under weaker conditions, a discrete-time counterpart of K. Glover's (1984) error estimate for reduced-order models. Following the same lines, a simple proof of Glover's result for continuous-time systems can be given

  14. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  15. Lower bounds on the estimation error in problems of distributed computation

    E-print Network

    Como, Giacomo

    Information-theoretic lower bounds on the estimation error are derived for problems of distributed computation. These bounds hold for a network attempting to compute a real-vector-valued function of the global information, ...

  16. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030
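
    The BPN idea, learning the SST error from air temperature, relative humidity and wind speed, can be sketched with a small multilayer perceptron in scikit-learn; the data below are synthetic placeholders, not GOES matchups:

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(6)
        n = 2000
        X = np.column_stack([rng.normal(300, 3, n),    # air temperature (K)
                             rng.uniform(40, 95, n),   # relative humidity (%)
                             rng.uniform(0, 12, n)])   # wind speed (m/s)
        # Synthetic SST error with an assumed dependence on the predictors:
        sst_error = (0.1 * (X[:, 0] - 300) + 0.01 * (X[:, 1] - 70)
                     - 0.03 * X[:, 2] + rng.normal(0, 0.2, n))

        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                             random_state=0).fit(X[:1500], sst_error[:1500])
        pred = model.predict(X[1500:])
        rmse_before = np.sqrt(np.mean(sst_error[1500:] ** 2))
        rmse_after = np.sqrt(np.mean((sst_error[1500:] - pred) ** 2))
        print(f"RMSE {rmse_before:.2f} K -> {rmse_after:.2f} K")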

  17. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported and 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.

  18. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-04-01

    A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have larger impacts that depend on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute significant uncertainties to the methane estimates by inverse modelling, especially when small spatial scales are considered. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher resolution models. The analysis of estimated methane fluxes in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of the atmospheric transport would make the estimations more accurate. Likewise, errors of the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane fluxes.
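
    As a rough illustration of why transport differences alone spread the flux estimates, the toy sketch below runs a standard linear Gaussian inversion with ten perturbed observation operators standing in for ten transport models. Every matrix and magnitude here is invented for illustration; this is not the PYVAR-LMDZ-SACS system.

    ```python
    # Toy linear Gaussian inversion: x_hat = x_b + BH^T (HBH^T + R)^-1 (y - Hx_b).
    # Observations are generated with one "transport" operator and inverted
    # with perturbed operators, so all spread comes from transport mismatch.
    import numpy as np

    rng = np.random.default_rng(1)
    n_flux, n_obs = 5, 40
    x_true = np.array([120., 80., 60., 40., 20.])   # Tg CH4/yr per region
    x_b = x_true * 0.9                              # prior fluxes
    B = np.diag((0.2 * x_b) ** 2)                   # prior error covariance
    R = np.eye(n_obs) * 4.0                         # observation error covariance

    H_ref = rng.uniform(0.0, 1.0, (n_obs, n_flux))  # reference transport
    y = H_ref @ x_true + rng.normal(0, 2.0, n_obs)  # synthetic observations

    estimates = []
    for _ in range(10):                             # ten "transport models"
        H = H_ref * rng.normal(1.0, 0.05, (n_obs, n_flux))  # perturbed transport
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        estimates.append(x_b + K @ (y - H @ x_b))

    spread = np.ptp([e.sum() for e in estimates])
    print(f"Spread of global flux estimates across models: {spread:.1f} Tg/yr")
    ```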

  19. Estimation of Error in Western Pacific Geoid Heights Derived from Gravity Data Only

    NASA Astrophysics Data System (ADS)

    Peters, M. F.; Brozena, J. M.

    2012-12-01

    The goal of the Western Pacific Geoid estimation project was to generate geoid height models for regions in the Western Pacific Ocean, and formal error estimates for those geoid heights, using all available gravity data and statistical parameters of the quality of the gravity data. Geoid heights were to be determined solely from gravity measurements, as a gravimetric geoid model and error estimates for that model would have applications in oceanography and satellite altimetry. The general method was to remove the gravity field associated with a "lower" order spherical harmonic global gravity model from the regional gravity set; to fit a covariance model to the residual gravity, and then calculate the (residual) geoid heights and error estimates by least-squares collocation fit with residual gravity, available statistical estimates of the gravity and the covariance model. The geoid heights corresponding to the lower order spherical harmonic model can be added back to the heights from the residual gravity to produce a complete geoid height model. As input we requested from NGA all unclassified available gravity data in the western Pacific between 15° and 45° N and 105° to 141°W. The total data set that was used to model and estimate errors in the gravimetric geoid comprised an unclassified, open file data set (540,012 stations), a proprietary airborne survey of Taiwan (19,234 stations), and unclassified NAVO SSP survey data (95,111 stations), for official use only. Various programs were adapted to the problem, including N.K. Pavlis' HSYNTH program and the covariance fit program GPFIT and least-squares collocation program GPCOL from the GRAVSOFT package (Forsberg and Tscherning, 2008 version), which were modified to handle larger data sets, but in some regions data were still too numerous. Formulas were derived that could be used to block-mean the data in a statistically optimal sense and still retain the error estimates required for the collocation algorithm. Running the covariance fit and collocation on discrete blocks revealed an edge effect on the covariance parameter calculation that produced stepwise discontinuities in the error estimates. To eliminate this, the covariance estimation program was modified to slide along a lattice or grid (defined at runtime) of points, selecting all stations closer than a user-defined distance with an error estimate of 5 mGals standard deviation or better from the larger regional data set, and calculating covariance parameters for that location. The collocation program was modified to use these locations and GPFIT parameters, and to select all stations within a close radius, and block-mean data with associated error estimates beyond that, to calculate a residual height and error estimates on a grid centered at the covariance fit location. These grids were combined to produce the overall geoid height and error estimate sets. The error estimates, in meters, are plotted as a color-filled contour map masked by land regions. Lack of gravity data causes the area of high estimated error east of the Korean peninsula. The high estimates of error north-west of Taiwan are due not to a lack of data, but rather to data with high internal estimates of measurement error or disagreement between different data sets. The tracking visible is the effect of high-quality data in reducing errors in gravimetric geoid height models.
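
    The core operation described here is least-squares collocation. The minimal sketch below shows the formula on synthetic residual gravity with an assumed isotropic Gaussian covariance model; it is illustrative only and is not the GRAVSOFT GPFIT/GPCOL code.

    ```python
    # Least-squares collocation sketch (illustrative, not GPCOL):
    #   s_hat = C_sx (C_xx + D)^-1 x
    #   var(s_hat) = C_ss - diag(C_sx (C_xx + D)^-1 C_xs)
    # x: residual gravity observations with error covariance D.
    import numpy as np

    def gauss_cov(d, c0=1.0, L=50.0):
        """Isotropic Gaussian covariance model; distance d in km (assumed)."""
        return c0 * np.exp(-(d / L) ** 2)

    def pairwise_dist(a, b):
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)

    rng = np.random.default_rng(2)
    obs_xy = rng.uniform(0, 200, (100, 2))                      # stations (km)
    grid_xy = np.column_stack([np.linspace(0, 200, 25),
                               np.full(25, 100.0)])             # prediction grid

    C_xx = gauss_cov(pairwise_dist(obs_xy, obs_xy))
    D = np.eye(len(obs_xy)) * 0.25                              # station error var
    C_sx = gauss_cov(pairwise_dist(grid_xy, obs_xy))

    x = rng.multivariate_normal(np.zeros(len(obs_xy)), C_xx + D)  # synthetic data
    s_hat = C_sx @ np.linalg.solve(C_xx + D, x)
    err_var = gauss_cov(0.0) - np.einsum('ij,ji->i', C_sx,
                                         np.linalg.solve(C_xx + D, C_sx.T))
    print(s_hat[:3], np.sqrt(err_var[:3]))                      # heights + errors
    ```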

  20. Effective stress-based finite element error estimation for composite bodies

    Microsoft Academic Search

    S. K. Choudhary; I. R. Grosse

    1995-01-01

    This paper presents a discretization error estimator for displacement-based finite element analysis applicable to multi-material bodies such as composites. The proposed method applies a specific stress continuity requirement across the intermaterial boundary consistent with physical principles. This approach estimates the discretization error by comparing the discontinuous finite element effective stress function with a smoothed (C0 continuous) effective stress function for

  1. Local error estimates for moderately smooth problems: Part II—SDEs and SDAEs with small noise

    Microsoft Academic Search

    Thorsten Sickenberger; Ewa Weinmüller; Renate Winkler

    2009-01-01

    The paper consists of two parts. In the first part of the paper, we proposed a procedure to estimate local errors of low order methods applied to solve initial value problems in ordinary differential equations (ODEs) and index-1 differential-algebraic equations (DAEs). Based on the idea of Defect Correction we developed local error estimates for the case when the problem data

  2. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    SciTech Connect

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  3. A posteriori error estimation with finite element methods of lines for one-dimensional parabolic systems

    Microsoft Academic Search

    Slimane Adjerid; Joseph E. Flaherty; Yun J. Wang

    1993-01-01

    Consider the solution of one-dimensional linear initial-boundary value problems by a finite element method of lines using a piecewise Pth-degree polynomial basis. A posteriori estimates of the discretization error are obtained as the solutions of either local parabolic or local elliptic finite element problems using piecewise polynomial corrections of degree p+1 that vanish at element ends. Error estimates computed in this

  4. Robust a posteriori error estimators for a singularly perturbed reaction-diffusion equation

    Microsoft Academic Search

    R. Verfürth

    1998-01-01

    We derive robust a posteriori error estimators for a singularly perturbed reaction-diffusion equation. Here, robust means that the estimators yield global upper and local lower bounds on the error measured in the energy norm such that the ratio of the upper and lower bounds is bounded from below and from above by constants which do neither depend on any

  5. Soft error rate estimation and mitigation for SRAM-based FPGAs

    Microsoft Academic Search

    Ghazanfar Asadi; Mehdi Baradaran Tahoori

    2005-01-01

    FPGA-based designs are more susceptible to single-event upsets (SEUs) compared to ASIC designs. Soft error rate (SER) estimation is a crucial step in the design of soft error tolerant schemes to balance reliability, performance, and cost of the system. Previous techniques on FPGA SER estimation are based on time-consuming fault injection and simulation methods. In this paper, we present an analytical

  6. Improved estimates of coordinate error for molecular replacement

    SciTech Connect

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J., E-mail: rjr27@cam.ac.uk [University of Cambridge, Hills Road, Cambridge CB2 0XY (United Kingdom)

    2013-11-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.
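
    The sketch below illustrates the two ideas in this abstract, an eRMSD curve that depends on sequence identity and protein size, and perturbing the initial estimate in steps of the distribution's standard deviation. The coefficients and functional form are made up for illustration; they are not the calibrated function implemented in Phaser.

    ```python
    # Hypothetical sketch only: a toy eRMSD model plus perturbation trials.
    # The numbers below are assumptions, NOT the published Phaser calibration.
    import math

    def ermsd_estimate(seq_identity, n_residues, a=0.4, b=1.8, c=0.05):
        """Toy model: lower identity and larger size -> larger expected r.m.s.d."""
        return a + b * (1.0 - seq_identity) + c * math.log(n_residues)

    sd = 0.3  # assumed spread of refined r.m.s.d. about the mean (illustrative)
    base = ermsd_estimate(0.35, 250)
    for k in (-2, -1, 0, 1, 2):   # try perturbed starting values if MR fails
        print(f"eRMSD trial (mean {k:+d} sd): {base + k * sd:.2f} A")
    ```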

  7. Systematic errors in respiratory gating due to intrafraction deformations of the liver

    SciTech Connect

    Siebenthal, Martin von; Szekely, Gabor; Lomax, Antony J.; Cattin, Philippe C. [Computer Vision Laboratory, ETH Zurich, 8092 Zurich (Switzerland); Division of Radiation Medicine, Paul Scherrer Institut, 5232 Villigen PSI (Switzerland); Computer Vision Laboratory, ETH Zurich, 8092 Zurich (Switzerland)

    2007-09-15

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift.

  8. Systematic errors in respiratory gating due to intrafraction deformations of the liver.

    PubMed

    von Siebenthal, Martin; Székely, Gábor; Lomax, Antony J; Cattin, Philippe C

    2007-09-01

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift. PMID:17926966

  9. Anisotropic Mesh Adaptation for Solution of Finite Element Problems Using Hierarchical Edge-Based Error Estimates

    Microsoft Academic Search

    Abdellatif Agouzal; Konstantin Lipnikov; Yuri Vassilevski

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of error is proportional to N_h^{-1/2}, which are the

  10. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    SciTech Connect

    Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N{sub h} triangles, the error is proportional to N{sub h}{sup -1} and the gradient of error is proportional to N{sub h}{sup -1/2} which are optimal asymptotics. The methodology is verified with numerical experiments.

  11. Error Estimates in Horocycle Averages Asymptotics: Challenges from String Theory

    E-print Network

    Matteo A. Cardella

    2011-05-12

    There is an intriguing connection between the dynamics of the horocycle flow in the modular surface $SL_{2}(\pmb{Z}) \backslash SL_{2}(\pmb{R})$ and the Riemann hypothesis. It appears in the error term for the asymptotic of the horocycle average of a modular function of rapid decay. We study whether similar results occur for a broader class of modular functions, including functions of polynomial growth, and of exponential growth at the cusp. Hints on their long horocycle average are derived by translating the horocycle flow dynamical problem into string theory language. Results are then proved by designing an unfolding trick involving a Theta series, related to the spectral Eisenstein series by a Mellin integral transform. We discuss how the string theory point of view leads to an interesting open question, regarding the behavior of long horocycle averages of a certain class of automorphic forms of exponential growth at the cusp.

  12. Gap filling strategies and error in estimating annual soil respiration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...

  13. Estimation and implications of random errors in whole-body dosimetry for targeted radionuclide therapy

    NASA Astrophysics Data System (ADS)

    Flux, Glenn D.; Guy, Matthew J.; Beddows, Ruth; Pryor, Matthew; Flower, Maggie A.

    2002-09-01

    For targeted radionuclide therapy, the level of activity to be administered is often determined from whole-body dosimetry performed on a pre-therapy tracer study. The largest potential source of error in this method is due to inconsistent or inaccurate activity retention measurements. The main aim of this study was to develop a simple method to quantify the uncertainty in the absorbed dose due to these inaccuracies. A secondary aim was to assess the effect of error propagation from the results of the tracer study to predictive absorbed dose estimates for the therapy as a result of using different radionuclides for each. Standard error analysis was applied to the MIRD schema for absorbed dose calculations. An equation was derived to describe the uncertainty in the absorbed dose estimate due solely to random errors in activity-time data, requiring only these data as input. Two illustrative examples are given. It is also shown that any errors present in the dosimetry calculations following the tracer study will propagate to errors in predictions made for the therapy study according to the ratio of the respective effective half-lives. If the therapy isotope has a much longer physical half-life than the tracer isotope (as is the case, for example, when using 123I as a tracer for 131I therapy) the propagation of errors can be significant. The equations derived provide a simple means to estimate two potentially large sources of error in whole-body absorbed dose calculations.
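
    A minimal sketch of the kind of error propagation described here, assuming synthetic activity-time data: fit a monoexponential retention curve, propagate the fit covariance to the cumulated activity (to which the MIRD absorbed dose is proportional), and note the half-life-ratio amplification when extrapolating from tracer to therapy. The specific numbers and the covariance-based propagation are standard error analysis, not the paper's exact equations.

    ```python
    # Hedged sketch: uncertainty in cumulated activity from noisy
    # activity-time data, via a monoexponential fit A(t) = A0 * exp(-lam*t).
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.array([1.0, 4.0, 24.0, 48.0, 72.0])      # h after administration
    A = np.array([0.95, 0.88, 0.55, 0.33, 0.20])    # fraction retained
    sigma_A = 0.05 * A                              # assumed 5% measurement error

    model = lambda t, A0, lam: A0 * np.exp(-lam * t)
    popt, pcov = curve_fit(model, t, A, p0=(1.0, 0.02), sigma=sigma_A,
                           absolute_sigma=True)
    A0, lam = popt

    # Cumulated activity per unit administered activity: A_tilde = A0 / lam
    J = np.array([1.0 / lam, -A0 / lam ** 2])       # gradient of A0/lam
    var_cum = J @ pcov @ J
    print(f"A_tilde = {A0 / lam:.1f} h +/- {np.sqrt(var_cum):.1f} h")

    # Tracer-to-therapy propagation scales roughly with the ratio of the
    # effective half-lives (values below are illustrative assumptions).
    t_half_eff_tracer, t_half_eff_therapy = 10.0, 60.0   # h
    print("error amplification ~", t_half_eff_therapy / t_half_eff_tracer)
    ```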

  14. A mission to test the Pioneer anomaly: estimating the main systematic effects

    E-print Network

    O. Bertolami; J. Paramos

    2007-06-20

    We estimate the main systematic effects relevant in a mission to test and characterize the Pioneer anomaly through the flight formation concept, by launching probing spheres from a mother spacecraft and tracking their motion via laser ranging.

  15. Mesoscale predictability and background error covariance estimation through ensemble forecasting

    E-print Network

    Ham, Joy L

    2002-01-01

    Master of Science thesis, Texas A&M University (Chair of Advisory Committee: Dr. Fuqing Zhang). Abstract: Over the past decade, ensemble forecasting has emerged as a powerful tool for numerical weather prediction. Not only does it produce the best estimate of the state of the atmosphere, it also could...

  16. Evaluation of the systematic error in using 3D dose calculation in scanning beam proton therapy for lung cancer.

    PubMed

    Li, Heng; Liu, Wei; Park, Peter; Matney, Jason; Liao, Zhongxing; Chang, Joe; Zhang, Xiaodong; Li, Yupeng; Zhu, Ronald X

    2014-01-01

    The objective of this study was to evaluate and understand the systematic error between the planned three-dimensional (3D) dose and the dose delivered to the patient in scanning beam proton therapy for lung tumors. Single-field and multifield optimized scanning beam proton therapy plans were generated for ten patients with stage II-III lung cancer with a mix of tumor motion and size. 3D doses in CT datasets for different respiratory phases and the time-weighted average CT, as well as the four-dimensional (4D) doses, were computed for both plans. The 3D and 4D dose differences for the targets and different organs at risk were compared using dose-volume histogram (DVH) and voxel-based techniques, and correlated with the extent of tumor motion. The gross tumor volume (GTV) dose was maintained in all 3D and 4D doses, using the internal GTV override technique. The DVH and voxel-based techniques are highly correlated. The mean dose error and the standard deviation of dose error for all target volumes were both less than 1.5% for all but one patient. However, the point dose difference between the 3D and 4D doses was up to 6% for the GTV and greater than 10% for the clinical and planning target volumes. Changes in the 4D and 3D doses were not correlated with tumor motion. The planning technique (single-field or multifield optimized) did not affect the observed systematic error. In conclusion, the dose error in 3D dose calculation varies from patient to patient and does not correlate with lung tumor motion. Therefore, patient-specific evaluation of the 4D dose is important for scanning beam proton therapy for lung tumors. PMID:25207565
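
    The voxel-based comparison described above reduces to simple array arithmetic. The sketch below computes mean dose error, its standard deviation, and the worst point-dose difference inside a target mask, on synthetic dose grids that are illustrative stand-ins for clinical 3D and 4D dose calculations.

    ```python
    # Toy voxel-based 3D-vs-4D dose comparison on synthetic arrays.
    import numpy as np

    rng = np.random.default_rng(3)
    dose_3d = rng.normal(70.0, 1.0, (40, 40, 40))            # Gy, planned
    dose_4d = dose_3d + rng.normal(0.0, 0.7, dose_3d.shape)  # delivered estimate
    target = np.zeros(dose_3d.shape, bool)
    target[15:25, 15:25, 15:25] = True                       # hypothetical GTV mask

    rel_err = 100.0 * (dose_4d[target] - dose_3d[target]) / dose_3d[target]
    print(f"mean dose error {rel_err.mean():.2f}%, sd {rel_err.std():.2f}%, "
          f"max point difference {np.abs(rel_err).max():.1f}%")
    ```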

  17. Evaluation of the systematic error in using 3D dose calculation in scanning beam proton therapy for lung cancer

    PubMed Central

    Li, Heng; Liu, Wei; Park, Peter; Matney, Jason; Liao, Zhongxing; Chang, Joe; Zhang, Xiaodong; Li, Yupeng; Zhu, Ronald X

    2014-01-01

    The objective of this study was to evaluate and understand the systematic error between the planned three-dimensional (3D) dose and the delivered dose to patient in scanning beam proton therapy for lung tumors. Single-field and multi-field optimized scanning beam proton therapy plans were generated for 10 patients with stage II–III lung cancer with a mix of tumor motion and size. 3D doses in CT data sets for different respiratory phases and the time weighted average CT, as well as the four-dimensional (4D) doses were computed for both plans. The 3D and 4D dose differences for the targets and different organs at risk were compared using dose volume histogram (DVH) and voxel-based techniques and correlated with the extent of tumor motion. The gross tumor volume (GTV) dose was maintained in all 3D and 4D doses using the internal GTV override technique. The DVH and voxel-based techniques are highly correlated. The mean dose error and the standard deviation of dose error for all target volumes were both less than 1.5% for all but one patient. However, the point dose difference between the 3D and 4D doses was up to 6% for the GTV and greater than 10% for the clinical and planning target volumes. Changes in the 4D and 3D doses were not correlated with tumor motion. The planning technique (single-field or multi-field optimized) did not affect the observed systematic error. In conclusion, the dose error in 3D dose calculation varies from patient to patient and does not correlate with lung tumor motion. Therefore, patient-specific evaluation of the 4D dose is important for scanning beam proton therapy for lung tumors. PMID:25207565

  18. Sampling Errors of SSM\\/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Simple Model

    Microsoft Academic Search

    THOMAS L. BELL; P RASUN K. KUNDU; CHRISTIAN D. KUMMEROW

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote sensing error and, especially in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A

  19. Sampling Errors of SSM\\/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Simple Model

    Microsoft Academic Search

    Thomas L. Bell; Prasun K. Kundu; Christian D. Kummerow

    2001-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote sensing error and, especially in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A

  20. Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip Arthroplasty

    E-print Network

    St Andrews, University of

    Stuart Kerrigan, Stephen J. McKenna, Ian W. Ricketts and Carlos Wigderowitz. Abstract: Acetabular wear of total hip replacements can be estimated from radiographs based

  1. Error estimation of a neuro-fuzzy predictor for prognostic purpose

    E-print Network

    Paris-Sud XI, Université de

    Mohamed El-Koujok, Rafael ... of the evolving eXtended Takagi-Sugeno system as a neuro-fuzzy predictor. A method to estimate the probability ... to online applications. Keywords: prognostic; prediction of degradation; confidence interval; neuro ...

  2. Analytic Study of Performance of Error Estimators for Linear Discriminant Analysis with Applications in Genomics 

    E-print Network

    Zollanvari, Amin

    2012-02-14

    ... distribution, and also sample-based methods to estimate upper conditional bounds. In the second part of this dissertation, exact analytical expressions for the bias, variance, and Root Mean Square (RMS) for the resubstitution and leave-one-out error estimators...

  3. Techniques for Estimating the Bit Error Rate in the Simulation of Digital Communication Systems

    Microsoft Academic Search

    M. Jeruchim

    1984-01-01

    Computer simulation is often used to estimate the bit error rate (BER) performance of digital communication systems. There are a number of distinct techniques in the simulation context that can be used to construct this estimate. A tutorial exposition of such techniques is provided, with particular reference to five specific methods which can be implemented in a simulation. These methods

  4. A posteriori error estimations of a SUPG method for anisotropic diffusion–convection–reaction problems

    Microsoft Academic Search

    Thomas Apel; Serge Nicaise

    2007-01-01

    This Note presents an a posteriori residual error estimator for diffusion–convection–reaction problems approximated by a SUPG scheme on isotropic or anisotropic meshes in Rd, d=2 or 3. This estimator is based on the jump of the flux and the interior residual of the approximated solution. It is constructed to work on anisotropic meshes which account for the possible anisotropic behavior

  5. Error Bounds for Joint Detection and Estimation of a Single Object with Random Finite Set Observation

    E-print Network

    Rezaeian, Mohammad-Jafar

    In this estimation framework, the collection of measurements is treated as a realization of a random finite set ... of a state observed as a realization of a random finite set. Our result is a generalization of the Cram...

  6. Error Bounds and Improved Probability Estimation using the Maximum Likelihood Set

    E-print Network

    ... the Maximum Likelihood Set (MLS), with ... 0 slowly in sample size, ensures that the MLS contains the data-generating ... In particular, the MLS provides a "high-probability" bound on the estimation error, and experimental results in statistical language modeling show improved operational performance from MLS-based estimates.

  7. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of the item response curve (IRC) is a concern theoretically and operationally. This estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…
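
    For concreteness, the sketch below builds a kernel (Nadaraya-Watson) item response curve on synthetic data, using the error-contaminated observed score as the regressor, which is exactly the setting the report flags as biased. The item parameters, error level, and bandwidth are assumptions for illustration.

    ```python
    # Minimal Nadaraya-Watson sketch of a nonparametric item response curve:
    # P(item correct | score) with a Gaussian kernel, on synthetic responses.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 5000
    true_score = rng.normal(0, 1, n)
    observed = true_score + rng.normal(0, 0.5, n)    # measurement error
    p_correct = 1 / (1 + np.exp(-1.7 * (true_score - 0.2)))  # 2PL-like item
    response = rng.random(n) < p_correct

    def irc(x0, x, y, h=0.3):
        """Kernel-weighted proportion correct at score x0 (bandwidth h)."""
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)
        return np.sum(w * y) / np.sum(w)

    for s in np.linspace(-2, 2, 9):
        # Regressing on 'observed' instead of 'true_score' flattens the curve
        print(f"score {s:+.1f}: P(correct) ~ {irc(s, observed, response):.2f}")
    ```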

  8. The Impact of Channel Estimation Errors on Space Time Block Codes

    E-print Network

    Valenti, Matthew C.

    Dirk Baker and Matthew C. Valenti (mvalenti@wvu.edu). Abstract: In this paper, we demonstrate the performance of space-time block codes when ... The system model that we use to analyze the performance of space-time block codes with channel estimation ...

  9. Error estimates for results of nonstationary noise analysis derived with linear least squares methods

    Microsoft Academic Search

    Rüdiger Steffan; Stefan H. Heinemann

    1997-01-01

    Nonstationary noise analysis of electrophysiological data is applied to the estimation of the single-channel current, i, and the number of active channels, NC, whenever they cannot be determined directly due to limited resolution. Using least squares methods, the accuracy of estimating i and NC chiefly depends on the statistical error of the ensemble variance. It is shown that if the

  10. Patients' willingness and ability to participate actively in the reduction of clinical errors: a systematic literature review.

    PubMed

    Doherty, Carole; Stavropoulou, Charitini

    2012-07-01

    This systematic review identifies the factors that both support and deter patients from being willing and able to participate actively in reducing clinical errors. Specifically, we add to our understanding of the safety culture in healthcare by engaging with the call for more focus on the relational and subjective factors which enable patients' participation (Iedema, Jorm, & Lum, 2009; Ovretveit, 2009). A systematic search of six databases, ten journals and seven healthcare organisations' web sites resulted in the identification of 2714 studies of which 68 were included in the review. These studies investigated initiatives involving patients in safety or studies of patients' perspectives of being actively involved in the safety of their care. The factors explored varied considerably depending on the scope, setting and context of the study. Using thematic analysis we synthesized the data to build an explanation of why, when and how patients are likely to engage actively in helping to reduce clinical errors. The findings show that the main factors for engaging patients in their own safety can be summarised in four categories: illness; individual cognitive characteristics; the clinician-patient relationship; and organisational factors. We conclude that illness and patients' perceptions of their role and status as subordinate to that of clinicians are the most important barriers to their involvement in error reduction. In sum, patients' fear of being labelled "difficult" and a consequent desire for clinicians' approbation may cause them to assume a passive role as a means of actively protecting their personal safety. PMID:22541799

  11. Systematic evaluation of errors occurring during the preparation of intravenous medication

    PubMed Central

    Parshuram, Christopher S.; To, Teresa; Seto, Winnie; Trope, Angela; Koren, Gideon; Laupacis, Andreas

    2008-01-01

    Introduction: Errors in the concentration of intravenous medications are not uncommon. We evaluated steps in the infusion-preparation process to identify factors associated with preventable medication errors. Methods: We included 118 health care professionals who would be involved in the preparation of intravenous medication infusions as part of their regular clinical activities. Participants performed 5 infusion-preparation tasks (drug-volume calculation, rounding, volume measurement, dose-volume calculation, mixing) and prepared 4 morphine infusions to specified concentrations. The primary outcome was the occurrence of error (deviation of > 5% for volume measurement and > 10% for other measures). The secondary outcome was the magnitude of error. Results: Participants performed 1180 drug-volume calculations, 1180 rounding calculations and made 1767 syringe-volume measurements, and they prepared 464 morphine infusions. We detected errors in 58 (4.9%, 95% confidence interval [CI] 3.7% to 6.2%) drug-volume calculations, 30 (2.5%, 95% CI 1.6% to 3.4%) rounding calculations and 29 (1.6%, 95% CI 1.1% to 2.2%) volume measurements. We found 7 errors (1.6%, 95% CI 0.4% to 2.7%) in drug mixing. Of the 464 infusion preparations, 161 (34.7%, 95% CI 30.4% to 39%) contained concentration errors. Calculator use was associated with fewer errors in dose-volume calculations (4% v. 10%, p = 0.001). Four factors were positively associated with the occurrence of a concentration error: fewer infusions prepared in the previous week (p = 0.007), increased number of years of professional experience (p = 0.01), the use of the more concentrated stock solution (p < 0.001) and the preparation of smaller dose volumes (p < 0.001). Larger magnitude errors were associated with fewer hours of sleep in the previous 24 hours (p = 0.02), the use of more concentrated solutions (p < 0.001) and preparation of smaller infusion doses (p < 0.001). Interpretation: Our data suggest that the reduction of provider fatigue and production of pediatric-strength solutions or industry-prepared infusions may reduce medication errors. PMID:18166730

  12. Improved robust Bayes estimators of the error variance in linear models

    E-print Network

    Maruyama, Yuzo

    2010-01-01

    We consider the problem of estimating the error variance in a general linear model when the error distribution is assumed to be spherically symmetric, but not necessarily Gaussian. In particular we study the case of a scale mixture of Gaussians including the particularly important case of the multivariate-t distribution. Under Stein's loss, we construct a class of estimators that improve on the usual best unbiased (and best equivariant) estimator. Our class has the interesting double robustness property of being simultaneously generalized Bayes (for the same generalized prior) and minimax over the entire class of scale mixture of Gaussian distributions.

  13. On the Robustness to Gene Tree Estimation Error (or lack thereof) of Coalescent-Based Species Tree Methods.

    PubMed

    Roch, Sebastien; Warnow, Tandy

    2015-07-01

    The estimation of species trees using multiple loci has become increasingly common. Because different loci can have different phylogenetic histories (reflected in different gene tree topologies) for multiple biological causes, new approaches to species tree estimation have been developed that take gene tree heterogeneity into account. Among these multiple causes, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is potentially the most common cause of gene tree heterogeneity, and much of the focus of the recent literature has been on how to estimate species trees in the presence of ILS. Despite progress in developing statistically consistent techniques for estimating species trees when gene trees can differ due to ILS, there is substantial controversy in the systematics community as to whether to use the new coalescent-based methods or the traditional concatenation methods. One of the key issues that has been raised is understanding the impact of gene tree estimation error on coalescent-based methods that operate by combining gene trees. Here we explore the mathematical guarantees of coalescent-based methods when analyzing estimated rather than true gene trees. Our results provide some insight into the differences between the promise of coalescent-based methods in theory and their performance in practice. PMID:25813358

  14. Preventing errors when estimating single channel properties from the analysis of current fluctuations.

    PubMed Central

    Silberberg, S D; Magleby, K L

    1993-01-01

    The conductance, number, and mean open time of ion channels can be estimated from fluctuations in membrane current. To examine potential errors associated with fluctuation analysis, we simulated ensemble currents and estimated single channel properties. The number (N) and amplitude (i) of the underlying single channels were estimated using nonstationary fluctuation analysis, while mean open time was estimated using covariance and spectral analysis. Both excessive filtering and the analysis of segments of current that were too brief led to underestimates of i and overestimates of N. Setting the low-pass cut-off frequency of the filter to greater than five times the inverse of the effective mean channel open time (burst duration) and analyzing segments of current that were at least 80 times the effective mean channel open time reduced the errors to < 2%. With excessive filtering, Butterworth filtering gave up to 10% less error in estimating i and N than Bessel filtering. Estimates of mean open time obtained from the time constant of decay of the covariance, tau obs, at low open probabilities (Po) were much less sensitive to filtering than estimates of i and N. Extrapolating plots of tau obs versus mean current to the ordinate provided a method to estimate mean open time from data obtained at higher Po, where tau obs no longer represents mean open time. Bessel filtering gave the least error when estimating tau obs from the decay of the covariance function, and Butterworth filtering gave the least error when estimating tau obs from spectral density functions. PMID:7506065
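
    The estimator behind this analysis is the classic variance-mean parabola: for N identical channels with unitary current i, the ensemble variance obeys var = i*I - I^2/N, so fitting a parabola to (mean, variance) pairs recovers both i and N. The sketch below demonstrates this on synthetic binomial gating data; the filtering and segment-length effects studied in the paper are not simulated.

    ```python
    # Nonstationary fluctuation analysis on synthetic data:
    # fit var = a*I + b*I^2, then i = a and N = -1/b.
    import numpy as np

    rng = np.random.default_rng(5)
    N_true, i_true = 100, 1.5            # channels, unitary current (pA)
    p_open = np.linspace(0.05, 0.6, 30)  # range of open probabilities

    mean_I, var_I = [], []
    for p in p_open:
        n_open = rng.binomial(N_true, p, size=2000)   # repeated sweeps
        I = i_true * n_open
        mean_I.append(I.mean())
        var_I.append(I.var())
    mean_I, var_I = np.array(mean_I), np.array(var_I)

    A = np.column_stack([mean_I, mean_I ** 2])
    a, b = np.linalg.lstsq(A, var_I, rcond=None)[0]
    print(f"i ~ {a:.2f} pA (true {i_true}), N ~ {-1 / b:.0f} (true {N_true})")
    ```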

  15. A Study on Estimating the Aiming Angle Error of Millimeter Wave Radar for Automobile

    NASA Astrophysics Data System (ADS)

    Kuroda, Hiroshi; Okai, Fumihiko; Takano, Kazuaki

    The 76 GHz millimeter wave radar has been developed for automotive applications such as ACC (Adaptive Cruise Control) and CWS (Collision Warning System). The radar is an FSK (Frequency Shift Keying) monopulse type. It transmits two frequencies in a time-duplex manner and measures the distance and relative speed of targets. The monopulse feature detects the azimuth angle of targets without a scanning mechanism. Conventionally, a radar unit is aimed mechanically, but a self-aiming capability that detects and corrects the aiming angle error automatically has been required. A new algorithm, which estimates the aiming angle error and the vehicle speed sensor error simultaneously, has been proposed and tested. The algorithm is based on the relationship between the relative speed and azimuth angle of stationary objects, with the least squares method used for the calculation. Applied to measured data from the millimeter wave radar, the algorithm yields an aiming angle estimation error of less than 0.6 degree.
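
    A minimal sketch of the underlying idea, under assumed geometry: for a stationary object at measured azimuth theta, the radial speed satisfies v_r = -k*V*cos(theta + d_theta), where d_theta is the aiming error and k the speed-sensor scale factor. Expanding the cosine makes the model linear in (k*cos d_theta, k*sin d_theta), so ordinary least squares recovers both parameters. Data below are synthetic; this is not the paper's exact formulation.

    ```python
    # Joint least-squares estimation of aiming error and speed-scale error
    # from stationary targets (synthetic data, assumed geometry).
    import numpy as np

    rng = np.random.default_rng(6)
    V = 25.0                          # indicated vehicle speed, m/s
    k_true = 0.97                     # true speed-sensor scale factor
    d_theta_true = np.deg2rad(1.2)    # true aiming error

    theta = np.deg2rad(rng.uniform(-8, 8, 200))   # measured azimuths
    v_r = -k_true * V * np.cos(theta + d_theta_true) + rng.normal(0, 0.1, 200)

    # u = k*cos(dt)*cos(theta) - k*sin(dt)*sin(theta) is linear in (p1, p2)
    u = v_r / (-V)
    A = np.column_stack([np.cos(theta), -np.sin(theta)])
    p1, p2 = np.linalg.lstsq(A, u, rcond=None)[0]
    print(f"k ~ {np.hypot(p1, p2):.3f}, aiming error ~ "
          f"{np.rad2deg(np.arctan2(p2, p1)):.2f} deg")
    ```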

  16. Use of inferential statistics to estimate error probability of video watermarks

    NASA Astrophysics Data System (ADS)

    Echizen, Isao; Yoshiura, Hiroshi; Fujii, Yasuhiro; Yamada, Takaaki; Tezuka, Satoru

    2005-03-01

    Errors in video watermark detection can cause serious problems, such as erroneous indication of illegal copying and erroneous copy control. These errors cannot, however, be eliminated, because watermarked pictures are subjected to a wide variety of image processing operations such as compression, resizing, filtering, and D/A or A/D conversion. Estimating the errors of video watermarks is therefore an essential requirement for electronic equipment that is to use copyright and copy-control information properly. This paper proposes a video watermarking method that estimates the error probability from each watermarked frame at hand after image processing, using the expectation-maximization algorithm from inferential statistics. The paper also proposes a reliable video watermark detection system based on the proposed method. Experimental evaluations have shown that the new method can be used reliably with the margin factor and can be widely used in electronic equipment as well as content-distribution systems. 
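
    To illustrate the expectation-maximization ingredient, the sketch below fits a two-component Gaussian mixture to per-bit detection statistics and reads off a bit-error probability from the fitted overlap. This follows the generic EM recipe on synthetic data; it is not the paper's specific estimator.

    ```python
    # EM for a two-Gaussian mixture of watermark detection statistics,
    # then bit-error probability from the overlap at a midpoint threshold.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(7)
    x = np.concatenate([rng.normal(-1.0, 0.8, 700), rng.normal(1.0, 0.8, 300)])

    w, mu, sd = np.array([0.5, 0.5]), np.array([-0.5, 0.5]), np.array([1.0, 1.0])
    for _ in range(100):
        # E-step: responsibilities of each component for each sample
        pdf = np.array([wi * norm.pdf(x, mi, si)
                        for wi, mi, si in zip(w, mu, sd)])
        r = pdf / pdf.sum(axis=0)
        # M-step: update weights, means, standard deviations
        w = r.mean(axis=1)
        mu = (r * x).sum(axis=1) / r.sum(axis=1)
        sd = np.sqrt((r * (x - mu[:, None]) ** 2).sum(axis=1) / r.sum(axis=1))

    # Component 0 converges to the lower mean with this initialization
    thr = mu.mean()
    p_err = w[0] * norm.sf(thr, mu[0], sd[0]) + w[1] * norm.cdf(thr, mu[1], sd[1])
    print(f"estimated bit-error probability ~ {p_err:.3f}")
    ```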

  17. Potential Systematic Errors in Radio Occultation Climatologies due to Irregular Distributions of Apparent Outliers in the Retrieval Process

    NASA Astrophysics Data System (ADS)

    Schwarz, Jakob; Scherllin-Pirscher, Barbara; Foelsche, Ulrich; Kirchengast, Gottfried

    2013-04-01

    Monitoring global climate change requires measuring atmospheric parameters with sufficient coverage at the surface, but also in the free atmosphere. GPS Radio Occultation (RO) provides accurate and precise measurements in the Upper Troposphere-Lower Stratosphere region with global coverage and long-term stability thanks to a calibration inherent to the technique. These properties allow for the calculation of climatological variables of high quality to track small changes of these variables. High accuracy requires keeping systematic errors low. The purpose of this study is to examine the impact of the Quality Control (QC) mechanism applied in the retrieval system of the Wegener Center for Climate and Global Change, Karl-Franzens-University Graz (WEGC), on systematic errors of climatologies calculated from RO data. The current RO retrieval OPSv5.4 at the WEGC uses phase delay profiles and precise orbit information provided by other data centers, mostly by UCAR/CDAAC, Boulder, CO, USA, for various receiver satellites. The satellites analyzed in this study are CHAMP, GRACE-A and FORMOSAT-3/COSMIC. Profiles of bending angles, refractivity and atmospheric parameters are retrieved and these are used to calculate climatologies. The OPSv5.4 QC rejects measurements if they do not fulfill certain quality criteria. If these criteria cause a biased rejection with regard to the spatial or temporal distribution of measurements, they can increase the systematic component of the so-called Sampling Error (SE) in climatologies. The SE is a consequence of the discrete and finite number of RO measurements that do not completely resemble the total variability of atmospheric parameters. The results of the calculations conducted show that the QC of the retrieval system indeed has a strong influence on geographical sampling patterns, causing a large number of rejections at high latitudes in the respective winter hemisphere. During winter, a monthly average of up to 60% of all measurements is discarded at high latitudes. The QC also influences temporal sampling patterns systematically: more measurements are rejected during nighttime. The systematic rejections by the QC also have a strong effect on the SE, causing it to increase fourfold in some cases and regions. Measurements of cold temperatures are particularly affected; in these cases, derived climatologies are biased towards higher temperatures. The results and new insight gained are used to improve the QC of subsequent processing system versions.
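
    A toy demonstration of the mechanism, under invented numbers: if quality control preferentially rejects measurements taken in cold events, the surviving sample yields a warm-biased climatology, which is precisely a systematic component of the sampling error.

    ```python
    # Toy: state-dependent QC rejection biases the resulting climatology.
    import numpy as np

    rng = np.random.default_rng(8)
    temps = rng.normal(210.0, 8.0, 3000)   # K, wintertime high-latitude "truth"

    # Hypothetical QC: profiles in cold events fail more often
    p_keep = 1.0 / (1.0 + np.exp(-(temps - 200.0) / 3.0))
    kept = temps[rng.random(temps.size) < p_keep]

    print(f"true mean {temps.mean():.2f} K, QC'd mean {kept.mean():.2f} K, "
          f"rejected {100 * (1 - kept.size / temps.size):.0f}% of profiles")
    ```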

  18. Estimating the Impact of Classification Error on the “Statistical Accuracy” of Uniform Crime Reports

    Microsoft Academic Search

    James J. Nolan; Stephen M. Haas; Jessica S. Napier

    This paper offers a methodological approach for estimating classification error in police records and then determining the statistical accuracy of official crime statistics reported to the Uniform Crime Reporting (UCR) program. Classification error refers to the mistakes in UCR statistics caused by the misclassification of criminal offenses, for example recording a crime as aggravated assault when it should have been simple

  19. An Analytical Approach for Soft Error Rate Estimation of SRAM-Based FPGAs

    Microsoft Academic Search

    Ghazanfar Asadi; Mehdi B. Tahoori

    2004-01-01

    FPGA-based designs are more susceptible to single-event upsets (SEUs) compared to ASIC designs. Soft error rate (SER) estimation is a crucial step in the design of soft error tolerant schemes to balance reliability, performance, and cost of the system. Previous techniques on FPGA SER estimation are based on time-consuming fault injection and simulation methods. In this paper,

  20. Potential errors in the volume of distribution estimation of therapeutic proteins composed of differently cleared components

    Microsoft Academic Search

    Wolfgang F. Richter; Hans Peter Grimm; Frank-Peter Theil

    The volume of distribution at steady state (Vss) of therapeutic proteins is usually assessed by non-compartmental or compartmental pharmacokinetic (PK) analysis wherein errors may arise due to the elimination of therapeutic proteins from peripheral tissues that are not in rapid equilibrium with the sampling compartment (usually blood). Here we explored another potential source of error in the estimation of Vss

  1. A Systematic and Efficient Method to Estimate the Vibrational Frequencies of Linear Peptide

    E-print Network

    Kim, Myung Soo

    ... estimate the vibrational frequency sets of linear peptide and protein ions with any amino acid sequence ... of biological molecules such as proteins, nucleic acids, and carbohydrates [9-13]. To test whether the paradigms ...

  2. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.

    2013-10-01

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations strongly call into question the consistency of transport model errors in current inverse systems. Future inversions should include more accurately prescribed observation covariance matrices in order to limit the impact of transport model errors on estimated methane fluxes.

  3. Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.; Prive, Nikki C.; Gu, Wei

    2014-01-01

    The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
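
    For readers unfamiliar with the NMC method, the sketch below shows its core computation: approximate the background error covariance from a sample of differences between 48 h and 24 h forecasts valid at the same time, scaled by an empirical factor. The forecast arrays and the single tuning factor are illustrative assumptions; as the abstract notes, the required rescaling actually varies with pressure and latitude.

    ```python
    # NMC-method sketch: B ~ alpha * cov(x_48h - x_24h) over many valid times.
    import numpy as np

    rng = np.random.default_rng(9)
    n_state, n_pairs = 50, 400
    # Synthetic forecast pairs valid at the same time (stand-ins for model output)
    truthlike = rng.normal(0, 1, (n_pairs, n_state))
    f24 = truthlike + rng.normal(0, 0.6, (n_pairs, n_state))
    f48 = truthlike + rng.normal(0, 0.9, (n_pairs, n_state))

    diff = f48 - f24
    alpha = 0.5                      # empirical rescaling (the "tuning" at issue)
    B_nmc = alpha * np.cov(diff, rowvar=False)
    print("estimated background error std dev:", np.sqrt(np.diag(B_nmc))[:5])
    ```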

  4. Systematic Errors in Stereo PIV When Imaging through a Glass Window

    NASA Technical Reports Server (NTRS)

    Green, Richard; McAlister, Kenneth W.

    2004-01-01

    This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window and where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent performing a lengthier volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.

  5. Errors in ultrasonic scatterer size estimates due to phase and amplitude aberration.

    PubMed

    Gerig, Anthony; Zagzebski, James

    2004-06-01

    Current ultrasonic scatterer size estimation methods assume that acoustic propagation is free of distortion due to large-scale variations in medium attenuation and sound speed. However, it has been demonstrated that under certain conditions in medical applications, medium inhomogeneities can cause significant field aberrations that lead to B-mode image artifacts. These same aberrations may be responsible for errors in size estimates and parametric images of scatterer size. This work derives theoretical expressions for the error in backscatter coefficient and size estimates as a function of statistical parameters that quantify phase and amplitude aberration, assuming a Gaussian spatial autocorrelation function. Results exhibit agreement with simulations for the limited region of parameter space considered. For large values of aberration decorrelation lengths relative to aberration standard deviations, phase aberration errors appear to be minimal, while amplitude aberration errors remain significant. Implications of the results for accurate backscatter and size estimation are discussed. In particular, backscatter filters are suggested as a method for error correction. Limitations of the theory are also addressed. The approach, approximations, and assumptions used in the derivation are most appropriate when the aberrating structures are relatively large, and the region containing the inhomogeneities is offset from the insonifying transducer. PMID:15237849

  6. Estimation and Propagation of Errors in Ice Sheet Bed Elevation Measurements

    NASA Astrophysics Data System (ADS)

    Johnson, J. V.; Brinkerhoff, D.; Nowicki, S.; Plummer, J.; Sack, K.

    2012-12-01

    This work is presented in two parts. In the first, we use a numerical inversion technique to determine a "mass conserving bed" (MCB) and estimate errors in interpolation of the bed elevation. The MCB inversion technique adjusts the bed elevation to assure that the mass flux determined from surface velocity measurements does not violate conservation. Cross validation of the MCB technique is done using a subset of available flight lines. The unused flight lines provide data to compare to, quantifying the errors produced by MCB and other interpolation methods. MCB errors are found to be similar to those produced with more conventional interpolation schemes, such as kriging. However, MCB interpolation is consistent with the physics that govern ice sheet models. In the second part, a numerical model of glacial ice is used to propagate errors in bed elevation to the kinematic surface boundary condition. Initially, a control run is completed to establish the surface velocity produced by the model. The control surface velocity is subsequently used as a target for data inversions performed on perturbed versions of the control bed. The perturbation of the bed represents the magnitude of error in bed measurement. Through the inversion for traction, errors in bed measurement are propagated forward to investigate errors in the evolution of the free surface. Our primary conclusion relates the magnitude of errors in the surface evolution to errors in the bed. By linking free surface errors back to the errors in bed interpolation found in the first part, we can suggest an optimal spacing of the radar flight lines used in bed acquisition.
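
    A one-dimensional sketch of the mass-conserving-bed idea, under assumed steady-state fields: with the flux q(x) implied by a balance accumulation rate and a depth-averaged velocity u(x) from surface observations, the thickness H = q/u and bed b = s - H cannot violate mass conservation. All fields below are synthetic stand-ins, not the paper's data or inversion.

    ```python
    # 1-D mass-conserving bed: thickness from flux, bed from surface minus H.
    import numpy as np

    x = np.linspace(0, 100e3, 201)           # m, flowline coordinate
    adot = 0.3 / 3.15e7                      # m/s accumulation (0.3 m/yr)
    q = adot * x                             # steady flux: dq/dx = adot
    u = 50.0 / 3.15e7 * (1 + x / 100e3)      # m/s, observed-like velocity
    s = 2000.0 - 0.005 * x                   # m, surface elevation

    H = q / np.maximum(u, 1e-12)             # thickness consistent with the flux
    bed = s - H
    print(f"thickness at x=50 km: {H[100]:.0f} m, bed elevation: {bed[100]:.0f} m")
    ```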

  7. Error Analysis for Estimation of Trace Vapor Concentration Pathlength in Stack Plumes

    SciTech Connect

    Gallagher, Neal B.; Wise, Barry M.; Sheen, David M.

    2003-06-01

    Near infrared hyperspectral imaging is finding utility in remote sensing applications such as detection and quantification of chemical vapor effluents in stack plumes. Optimizing the sensing system or quantification algorithms is difficult since reference images are rarely well characterized. The present work uses a radiance model for a down-looking scene and a detailed noise model for dispersive and Fourier transform spectrometers to generate well-characterized synthetic data. These data were used in conjunction with a classical least squares-based estimation procedure in an error analysis to provide estimates of different sources of concentration-pathlength quantification error in the remote sensing problem.
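
    The classical least squares step can be sketched as follows, assuming the usual linear mixing model y = S·c + noise with known spectral signatures S; the matrices, noise level, and vapor names are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical absorbance signatures (columns) of 3 vapors over 200 channels.
    S = rng.random((200, 3))
    true_cl = np.array([2.0, 0.5, 1.0])          # concentration-pathlengths
    y = S @ true_cl + rng.normal(0, 0.05, 200)   # measured spectrum + noise

    # Classical least squares estimate and its error covariance (noise sd known).
    cl_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
    cov = 0.05**2 * np.linalg.inv(S.T @ S)
    for name, est, sd in zip("ABC", cl_hat, np.sqrt(np.diag(cov))):
        print(f"vapor {name}: {est:.3f} +/- {sd:.3f}")
    ```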

  8. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  9. Assumption-free estimation of the genetic contribution to refractive error across childhood

    PubMed Central

    St Pourcain, Beate; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Williams, Cathy

    2015-01-01

    Purpose Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75–90%, families 15–70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Methods Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) with high-density genome-wide SNP genotype information (minimum N at each age=3,404). Results The variance in refractive error explained by the SNPs (“SNP heritability”) was stable over childhood: Across age 7–15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001), demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8–9 years old. Conclusions Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects are expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population. PMID:26019481

  10. Dust-Induced Systematic Errors in UV-Derived Star Formation Rates

    E-print Network

    Eric F. Bell

    2002-07-18

    Rest-frame far-ultraviolet (FUV) luminosities form the `backbone' of our understanding of star formation at all cosmic epochs. FUV luminosities are typically corrected for dust by assuming that extinction indicators which have been calibrated for local starbursting galaxies apply to all star-forming galaxies. I present evidence that `normal' star-forming galaxies have systematically redder UV/optical colors than starbursting galaxies at a given FUV extinction. This is attributed to differences in star/dust geometry, coupled with a small contribution from older stellar populations. Folding in data for starbursts and ultra-luminous infrared galaxies, I conclude that SF rates from rest-frame UV and optical data alone are subject to large (factors of at least a few) systematic uncertainties because of dust, which cannot be reliably corrected for using only UV/optical diagnostics.

  11. Systematic reduction of sign errors in many-body calculations of atoms and molecules

    SciTech Connect

    Bajdich, Michal [ORNL; Tiago, Murilo L [ORNL; Hood, Randolph Q. [Lawrence Livermore National Laboratory (LLNL); Kent, Paul R [ORNL; Reboredo, Fernando A [ORNL

    2010-01-01

    The self-healing diffusion Monte Carlo algorithm (SHDMC) [Phys. Rev. B 79, 195117 (2009); ibid. 80, 125110 (2009)] is applied to the calculation of ground states of atoms and molecules. By direct comparison with accurate configuration interaction results we show that applying the SHDMC method to the oxygen atom leads to systematic convergence towards the exact ground state wave function. We present results for the small but challenging N2 molecule, where results obtained via the energy minimization method and SHDMC agree to within the experimental accuracy of 0.08 eV. Moreover, we demonstrate that the algorithm is robust enough to be used for calculations of systems at least as large as C20 starting from a set of random coefficients. SHDMC thus constitutes a practical method for systematically reducing the fermion sign problem in electronic structure calculations.

  12. Stacked Weak Lensing Mass Calibration: Estimators, Systematics, and Impact on Cosmological Parameter Constraints

    SciTech Connect

    Rozo, Eduardo; /U. Chicago /Chicago U., KICP; Wu, Hao-Yi; /KIPAC, Menlo Park; Schmidt, Fabian; /Caltech

    2011-11-04

    When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.

  13. Systematic Entomology (2012), 37, 287–304 Divergence estimates and early evolutionary history

    E-print Network

    Hammerton, James

    2012-01-01

    Systematic Entomology (2012), 37, 287–304. Divergence estimates and early evolutionary history. [Author listing garbled in extraction; co-authors include Simon van Noort. Affiliations: Systematic Entomology Lab, USDA, Washington, DC, U.S.A.; Department of Entomology, National Museum of Natural History, Smithsonian.]

  14. Certainty in Heisenberg's uncertainty principle: Revisiting definitions for estimation errors and disturbance

    NASA Astrophysics Data System (ADS)

    Dressel, Justin; Nori, Franco

    2014-02-01

    We revisit the definitions of error and disturbance recently used in error-disturbance inequalities derived by Ozawa and others by expressing them in the reduced system space. The interpretation of the definitions as mean-squared deviations relies on an implicit assumption that is generally incompatible with the Bell-Kochen-Specker-Spekkens contextuality theorems, and which results in averaging the deviations over a non-positive-semidefinite joint quasiprobability distribution. For unbiased measurements, the error admits a concrete interpretation as the dispersion in the estimation of the mean induced by the measurement ambiguity. We demonstrate how to directly measure not only this dispersion but also every observable moment with the same experimental data, and thus demonstrate that perfect distributional estimations can have nonzero error according to this measure. We conclude that the inequalities using these definitions do not capture the spirit of Heisenberg's eponymous inequality, but do indicate a qualitatively different relationship between dispersion and disturbance that is appropriate for ensembles being probed by all outcomes of an apparatus. To reconnect with the discussion of Heisenberg, we suggest alternative definitions of error and disturbance that are intrinsic to a single apparatus outcome. These definitions naturally involve the retrodictive and interdictive states for that outcome, and produce complementarity and error-disturbance inequalities that have the same form as the traditional Heisenberg relation.

  15. A posteriori error estimates, stopping criteria, and adaptivity for multiphase compositional Darcy flows in porous media

    NASA Astrophysics Data System (ADS)

    Di Pietro, Daniele A.; Flauraud, Eric; Vohralík, Martin; Yousef, Soleiman

    2014-11-01

    In this paper we derive a posteriori error estimates for the compositional model of multiphase Darcy flow in porous media, consisting of a system of strongly coupled nonlinear unsteady partial differential and algebraic equations. We show how to control the dual norm of the residual augmented by a nonconformity evaluation term by fully computable estimators. We then decompose the estimators into space, time, linearization, and algebraic error components. This allows us to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver when the corresponding error components no longer affect the overall error significantly. Moreover, the spatial and temporal error components can be balanced by time step and space mesh adaptation. Our analysis applies to a broad class of standard numerical methods and is independent of the linearization and of the iterative algebraic solvers employed. We exemplify it for the two-point finite volume method with fully implicit Euler time stepping, Newton linearization, and the GMRes algebraic solver. Numerical results on two real-life reservoir engineering examples confirm that significant computational gains can be achieved thanks to our adaptive stopping criteria, already on fixed meshes, without any noticeable loss of precision.
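
    The stopping-criterion idea — stop iterating the algebraic solver once its error contribution is negligible against the discretization error — can be sketched as below. A Richardson iteration and a plain residual norm stand in for the GMRes solver and the fully computable estimators of the paper.

    ```python
    import numpy as np

    def solve_with_adaptive_stopping(A, b, eta_disc, gamma=0.1, max_it=500):
        """Richardson iteration stopped once the algebraic error estimate
        falls below gamma times the discretization estimate eta_disc."""
        x = np.zeros_like(b)
        omega = 1.0 / np.linalg.norm(A, 2)
        for k in range(max_it):
            r = b - A @ x
            eta_alg = np.linalg.norm(r)      # crude algebraic error estimator
            if eta_alg <= gamma * eta_disc:  # no point over-solving
                return x, k
            x = x + omega * r
        return x, max_it

    A = np.diag(np.linspace(1.0, 10.0, 50))
    b = np.ones(50)
    x, its = solve_with_adaptive_stopping(A, b, eta_disc=1e-2)
    print(f"stopped after {its} iterations")
    ```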

  16. Effects of sampling error and temporal correlations in population growth on process variance estimators

    Microsoft Academic Search

    David F. Staples; Mark L. Taper; Brian Dennis; Robert J. Boik

    2009-01-01

    Estimates of a population’s growth rate and process variance from time-series data are often used to calculate risk metrics such as the probability of quasi-extinction, but temporal correlations in the data from sampling error, intrinsic population factors, or environmental conditions can bias process variance estimators and detrimentally affect risk predictions. It has been claimed (McNamara and Harding, Ecol Lett 7:16–20,

  17. Multiscale Systematic Error Correction via Wavelet-Based Bandsplitting in Kepler Data

    NASA Astrophysics Data System (ADS)

    Stumpe, Martin C.; Smith, Jeffrey C.; Catanzarite, Joseph H.; Van Cleve, Jeffrey E.; Jenkins, Jon M.; Twicken, Joseph D.; Girouard, Forrest R.

    2014-01-01

    The previous presearch data conditioning algorithm, PDC-MAP, for the Kepler data processing pipeline performs very well for the majority of targets in the Kepler field of view. However, for an appreciable minority, PDC-MAP has its limitations. To further minimize the number of targets for which PDC-MAP fails to perform admirably, we have developed a new method, called multiscale MAP, or msMAP. Utilizing an overcomplete discrete wavelet transform, the new method divides each light curve into multiple channels, or bands. The light curves in each band are then corrected separately, thereby allowing for a better separation of characteristic signals and improved removal of the systematics.
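
    The bandsplitting idea can be sketched with PyWavelets, using an ordinary discrete wavelet transform in place of the pipeline's overcomplete transform (an approximation; none of the msMAP systematics correction itself is reproduced): decompose the light curve, treat each band as a separate channel, and recombine.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    t = np.arange(4096)
    flux = 1.0 + 1e-3 * np.sin(2 * np.pi * t / 300) + 1e-4 * rng.standard_normal(t.size)

    # Split the light curve into wavelet bands, one per decomposition level.
    coeffs = pywt.wavedec(flux, "db4", level=5)
    bands = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, "db4")[: flux.size])

    # Each band could now be corrected against its own systematics model;
    # by linearity, summing the (corrected) bands reconstructs the light curve.
    print(np.allclose(sum(bands), flux))
    ```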

  18. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach.

    PubMed

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-01-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
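
    A minimal TAMSD-based exponent estimate on a toy trajectory illustrates the problem the FIMA approach targets: additive measurement noise biases the estimated exponent. A plain Brownian walk stands in for fractional Brownian motion, and all parameters are illustrative.

    ```python
    import numpy as np

    def tamsd(x, lags):
        """Time-averaged MSD of a single trajectory x at the given lags."""
        return np.array([np.mean((x[l:] - x[:-l]) ** 2) for l in lags])

    rng = np.random.default_rng(3)
    x = np.cumsum(rng.standard_normal(10_000))       # Brownian-like walk...
    x_noisy = x + 0.5 * rng.standard_normal(x.size)  # ...plus measurement noise

    lags = np.unique(np.logspace(0, 3, 20).astype(int))
    for label, traj in (("clean", x), ("noisy", x_noisy)):
        alpha, _ = np.polyfit(np.log(lags), np.log(tamsd(traj, lags)), 1)
        print(f"{label}: estimated exponent alpha ~ {alpha:.2f}")  # ~1 expected
    ```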

  19. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    PubMed Central

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-01-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707

  20. Pollution error in the h-version of the finite-element method and the local quality of a-posteriori error estimators

    E-print Network

    Mathur, Anuj

    1994-01-01

    In this work we study the pollution-error in the h-version of the finite element method and its effect on the local quality of a-posteriori error estimators. We show that the pollution-effect in an interior subdomain depends on the relationship...

  1. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    NASA Astrophysics Data System (ADS)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
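
    The core Green-Kubo step — integrating the autocorrelation of a flux-like signal — can be sketched on an AR(1) surrogate whose exact integral is known; the replica averaging and the Zwanzig-Ailawadi/Frenkel error bounds of the paper are not reproduced here, and the estimate below should only land within roughly 10–20% of the exact value.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Surrogate "flux": an AR(1) process with a long correlation time.
    n, phi = 100_000, 0.98
    J = np.zeros(n)
    for i in range(1, n):
        J[i] = phi * J[i - 1] + rng.standard_normal()
    J -= J.mean()

    # Green-Kubo style estimate: integrate the autocorrelation (dt = 1).
    m = 1500  # truncation of the correlation sum
    acf = np.array([np.mean(J[: n - k] * J[k:]) for k in range(m)])
    kappa = acf[0] / 2 + acf[1:].sum()  # trapezoid-like time integral

    exact = (1 / (1 - phi**2)) * (0.5 + phi / (1 - phi))
    print(f"estimated integral: {kappa:.0f}  (exact for this AR(1): {exact:.0f})")
    ```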

  2. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis.

    PubMed

    Jones, Reese E; Mandadapu, Kranthi K

    2012-04-21

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently. PMID:22519310

  3. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, E.

    2015-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often the 2-year one, which may be associated with the bankfull discharge). At locations where historical streamflow records are absent or very limited, the threshold can be estimated with regionally-derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the functional form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction and underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missed alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function that penalises overpredictions more heavily. The estimates by models (feedforward neural networks) with increasing degrees of asymmetry are compared with those of a traditional, symmetrically-trained network in a rigorous cross-validation experiment on a database of catchments covering Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors compared to the use of traditional square errors. Such a reduction is, of course, at the expense of increased underestimation errors, but the overall accuracy is still acceptable, and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
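
    A minimal sketch of such an asymmetric error function (with an illustrative overprediction weight of 4, not the paper's networks or catchment descriptors): minimizing it over hypothetical 2-year flood quantiles pulls the fitted value below the symmetric mean-square optimum, which is exactly the cautious behavior sought.

    ```python
    import numpy as np

    def asymmetric_sq_error(y_true, y_pred, w_over=4.0):
        """Squared error that penalizes overprediction w_over times more than
        underprediction (an overestimated threshold risks missed alarms)."""
        err = y_pred - y_true
        w = np.where(err > 0, w_over, 1.0)
        return np.mean(w * err**2)

    rng = np.random.default_rng(5)
    q2 = rng.lognormal(4.0, 0.5, 500)  # hypothetical 2-year quantiles (m3/s)

    grid = np.linspace(q2.min(), q2.max(), 400)
    best = grid[np.argmin([asymmetric_sq_error(q2, g) for g in grid])]
    print(f"asymmetric optimum: {best:.1f}  vs symmetric optimum (mean): {q2.mean():.1f}")
    ```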

  4. Estimation of Error Detection Probability and Latency of Checking Methods for a Given Circuit under Check

    Microsoft Academic Search

    Arsen Kuchukyan; H. Hagopian

    1998-01-01

    A technique of sampling input vectors (SIV) with statistical measurements is used for estimating the error detection probability and fault latency of different checking methods. Application of the technique to Berger code, mod-3, and parity checking for combinational circuits is considered. The experimental results obtained with a Pilot Software System are presented. The technique may be implemented as

  5. A Posteriori Error Estimates and Mesh Adaptation Strategy for Discontinuous Galerkin Methods Applied to Diffusion Problems

    Microsoft Academic Search

    Béatrice Rivière; Mary F. Wheeler

    A posteriori error estimates for locally mass conservative methods for subsurface flow are presented. These methods are based on discontinuous approximation spaces and are referred to as Discontinuous Galerkin methods. In the case where penalty terms are added to the bilinear form, one obtains the Non-symmetric Interior Penalty Galerkin method. In a previous work, we proved a priori exponential rates of convergence of the methods applied

  6. Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator

    Microsoft Academic Search

    Y. Ephraim; D. Malah

    1984-01-01

    This paper focuses on the class of speech enhancement systems which capitalize on the major importance of the short-time spectral amplitude (STSA) of the speech signal in its perception. A system which utilizes a minimum mean-square error (MMSE) STSA estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the "spectral subtraction" algorithm.

  7. Analytic Study of Performance of Error Estimators for Linear Discriminant Analysis with Applications in Genomics

    E-print Network

    Zollanvari, Amin

    2012-02-14

    [Thesis front matter: acknowledgments and table of contents. Chapter I — Estimation of the Misclassification Error Rate: the classification problem under complete knowledge of the underlying distributions, parametric models, and non-parametric models; Linear Discriminant Analysis.]

  8. Seasonal prediction with error estimation of the Columbia River streamflow in British Columbia

    E-print Network

    Hsieh, William

    Seasonal prediction with error estimation of the Columbia River streamflow in British Columbia. University of British Columbia, Vancouver, B.C. V6T 1Z4, Canada; Amir Shabbar, Meteorological Services of Canada, Downsview, Ont. Forecasts the …–August Columbia River streamflow at Donald, British Columbia, Canada, using predictors up to the end of November

  9. Seasonal prediction with error estimation of the Columbia River streamflow in British Columbia

    E-print Network

    Hsieh, William

    Seasonal prediction with error estimation of the Columbia River streamflow in British Columbia. University of British Columbia, Vancouver, B.C. V6T 1Z4, Canada; Amir Shabbar, Meteorological Services of Canada, Downsview, Ont. Forecasts the …–August Columbia River streamflow at Donald, British Columbia, Canada, using predictors up to the end of November

  10. A Derivation of the Unbiased Standard Error of Estimate: The General Case.

    ERIC Educational Resources Information Center

    O'Brien, Francis J., Jr.

    This paper is part of a series of applied statistics monographs intended to provide supplementary reading for applied statistics students. In the present paper, derivations of the unbiased standard error of estimate for both the raw score and standard score linear models are presented. The derivations for raw score linear models are presented in…

  11. Error estimation of recurrent neural network models trained on a finite set of initial values

    Microsoft Academic Search

    Binfan Liu; Jennie Si

    1997-01-01

    Addresses the problem of estimating training error bounds of state and output trajectories for a class of recurrent neural networks as models of nonlinear dynamic systems. The bounds are obtained provided that the models have been trained on N trajectories with N independent random initial values which are uniformly distributed over [a,b]^m ⊂ ℝ^m

  12. Error estimation of recurrent neural network models trained on a finite set of initial values

    Microsoft Academic Search

    Binfan Liu; Jennie Si

    1997-01-01

    This letter addresses the problem of estimating training error bounds of state and output trajectories for a class of recurrent neural networks as models of nonlinear dynamic systems. The bounds are obtained provided that the models have been trained on N trajectories with N independent random initial values which are uniformly distributed over [a,b]^m ⊂ ℝ^m

  13. Error estimation and anisotropic mesh refinement for 3d laminar aerodynamic flow simulations

    E-print Network

    Hartmann, Ralf

    Tobias Leicht, Ralf Hartmann (Institute of Aerodynamics and Flow Technology, DLR, German Aerospace Center). Error estimation and anisotropic mesh refinement for three-dimensional laminar aerodynamic flow simulations. The optimal order symmetric interior penalty discontinuous Galerkin

  14. A recovery-based error estimator for anisotropic mesh adaptation in CFD

    Microsoft Academic Search

    P. E. Farrell; S. Micheletti; S. Perotto

    We provide a unifying framework that generalizes the 2D and 3D settings proposed in (32) and (17), respectively. In these two works we propose a gradient recovery type a posteriori error estimator for finite element approximations on anisotropic meshes. The novelty is the inclusion of the geometrical features of the computational mesh (size, shape and orientation) in

  15. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  16. Error Estimation Techniques to Refine Overlapping Aerial Image Mosaic Processes via Detected Parameters

    ERIC Educational Resources Information Center

    Bond, William Glenn

    2012-01-01

    In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…

  17. A novel approach to model determination using the minimum model error estimation

    Microsoft Academic Search

    Jason R. Kolodziej; D. Joseph Mook

    2005-01-01

    The purpose of this paper is to present an algorithm for the combination of a proven nonlinear system identification technique, the minimum model error estimation algorithm (MME) with an analysis of variance (ANOVA) correlation routine where a forward stepwise procedure is implemented. The analysis of variance approach to model identification is well documented primarily in social science literature but has

  18. Efficient Small Area Estimation in the Presence of Measurement Error in Covariates

    E-print Network

    Singh, Trijya

    2012-10-19

    Small area estimation is an arena that has seen rapid development in the past 50 years, due to its widespread applicability in government projects, marketing research and many other areas. However, it is often difficult to obtain error-free data...

  19. Estimation of the error for small-sample optimal binary filter design using prior knowledge 

    E-print Network

    Sabbagh, David L

    1999-01-01

    knowledge about this distribution. This thesis gives an analytic expression for this bound under some assumptions and shows that, using such a method, we derive a very good estimate of the optimal error, even for a very limited amount of data....

  20. Quantifying the impact of model errors on top-down estimates of carbon monoxide emissions using satellite observations

    E-print Network

    Heald, Colette L.

    Quantifying the impact of model errors on top-down estimates of carbon monoxide emissions, using observations from the Measurement of Pollution in the Troposphere satellite instrument to quantify their potential contribution. The study concerns the use of inverse modeling to better quantify regional surface emissions of carbon monoxide (CO), which

  1. Interval Estimation for True Raw and Scale Scores under the Binomial Error Model

    ERIC Educational Resources Information Center

    Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.

    2006-01-01

    Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…
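
    A sketch of the normal-approximation procedure, assuming Lord's binomial-error conditional SEM, sqrt(x(n−x)/(n−1)) — one common choice in this setting; the article itself reviews several variants, none of which are reproduced here.

    ```python
    import math

    def true_score_interval(x, n, z=1.96):
        """Normal-approximation interval for an examinee's true number-correct
        score, using the binomial-error conditional SEM sqrt(x(n-x)/(n-1))."""
        sem = math.sqrt(x * (n - x) / (n - 1))
        lo, hi = x - z * sem, x + z * sem
        return max(0.0, lo), min(float(n), hi), sem

    lo, hi, sem = true_score_interval(x=32, n=40)
    print(f"observed 32/40: SEM = {sem:.2f}, 95% interval = ({lo:.1f}, {hi:.1f})")
    ```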

  2. Error estimation of bathymetric grid models derived from historic and contemporary datasets

    E-print Network

    New Hampshire, University of

    Error estimation of bathymetric grid models derived from historic and contemporary datasets. Survey technology has progressed to rapidly collecting dense bathymetric datasets; sextants were replaced by radio navigation, then transit. Source data range down to digitized contours, and the test dataset shows examples of all of these types. From this database, we assign

  3. Error estimation for reconstruction of neuronal spike firing from fast calcium imaging

    PubMed Central

    Liu, Xiuli; Lv, Xiaohua; Quan, Tingwei; Zeng, Shaoqun

    2015-01-01

    Calcium imaging is becoming an increasingly popular technology to indirectly measure activity patterns in local neuronal networks. Calcium transients reflect neuronal spike patterns, allowing spike trains to be reconstructed from calcium traces. The key to judging spike train authenticity is error estimation. However, due to the lack of an appropriate mathematical model to adequately describe the spike-calcium relationship, little attention has been paid to quantifying error ranges of the reconstructed spike results. By turning attention to the data characteristics close to the reconstruction rather than to a complex mathematical model, we have provided an error estimation method for reconstructed neuronal spiking from calcium imaging. Real false-negative and false-positive rates of 10 experimental Ca2+ traces were within the estimated error ranges, confirming that this evaluation method is effective. Estimation performance for the reconstruction of spikes from calcium transients within a neuronal population demonstrated a reasonable evaluation of the reconstructed spikes without having real electrical signals. These results suggest that our method might be valuable for the quantification of research based on reconstructed neuronal activity, such as to affirm communication between different neurons. PMID:25780733

  4. A Posteriori Error Estimate for Front-Tracking for Nonlinear Systems of Conservation Laws

    E-print Network

    A Posteriori Error Estimate for Front-Tracking for Nonlinear Systems of Conservation Laws. … front-tracking approximate solutions to hyperbolic systems of nonlinear conservation laws. Extending the L¹-stability … front-tracking approximations for nonlinear conservation laws, u_t + f(u)_x = 0 (1.1).

  5. On a-posteriori pointwise error estimation using adjoint temperature and Lagrange remainder

    E-print Network

    Aluffi, Paolo

    A. K. Alekseev (Department of Aerodynamics and Heat Transfer, RSC ENERGIA, Korolev, Moscow) and I. M. Navon. … are specific for this method. Nevertheless, an option based on the adjoint equations is applicable for any type of equation and is not limited to the framework of finite-element analysis. "A posteriori" error estimation

  6. A posteriori pointwise error estimation for compressible fluid flows using adjoint parameters and Lagrange remainder

    E-print Network

    Aluffi, Paolo

    A. K. Alekseev (Department of Aerodynamics and Heat Transfer, RSC ENERGIA, Korolev, Moscow) and I. M. Navon. … equations. In Ref. [28] this approach is used for wave equations; in Ref. [31] it is used for the transport equation. In Refs. [13–15] a posteriori error estimation is obtained for the Navier-Stokes and Euler equations

  7. Error estimates for Raviart-Thomas interpolation of any order on anisotropic tetrahedra

    E-print Network

    Duran, Ricardo

    G. Acosta. … under the maximum angle condition and the regular vertex property for tetrahedra. Our techniques are different from those used previously … for triangles or tetrahedra, where 0 ≤ j ≤ k and 1 ≤ p ≤ ∞. These results are new even in the two-dimensional case

  8. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
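
    Both bootstrap flavors can be sketched generically, with a score percentile standing in for the equipercentile equating function; the sample size, binomial score model, and statistic are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    scores = rng.binomial(40, 0.6, size=300)  # one form's observed scores

    def stat(sample):
        return np.percentile(sample, 75)      # stand-in for an equating point

    # Nonparametric bootstrap: resample examinees with replacement.
    boot_np = [stat(rng.choice(scores, scores.size, replace=True))
               for _ in range(2000)]

    # Parametric bootstrap: resample from a fitted binomial score model.
    p_hat = scores.mean() / 40
    boot_p = [stat(rng.binomial(40, p_hat, scores.size)) for _ in range(2000)]

    print(f"nonparametric SE: {np.std(boot_np):.3f}, "
          f"parametric SE: {np.std(boot_p):.3f}")
    ```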

  9. Some Improved Error Estimates for the Modified Method of Characteristics

    E-print Network

    Russell, Thomas F.

    C. N. Dawson, T. F. Russell. The modified method of characteristics (MMOC) was first formulated … approximations to advection-dominated problems. Basically, in the modified method of characteristics, one

  10. A family of approximate solutions and explicit error estimates for the nonlinear stationary Navier-Stokes problem

    NASA Technical Reports Server (NTRS)

    Gabrielsen, R. E.; Karel, S.

    1975-01-01

    An algorithm for solving the nonlinear stationary Navier-Stokes problem is developed. Explicit error estimates are given. This mathematical technique is potentially adaptable to the separation problem.

  11. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    PubMed

    Nash, Ulrik W

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

  12. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations. PMID:25222822

  13. SU-E-T-405: Robustness of Volumetric-Modulated Arc Therapy (VMAT) Plans to Systematic MLC Positional Errors

    SciTech Connect

    Qi, P; Xia, P [Cleveland Clinic, Cleveland, OH (United States)

    2014-06-01

    Purpose: To evaluate the dosimetric impact of systematic MLC positional errors (PEs) on the quality of volumetric-modulated arc therapy (VMAT) plans. Methods: Five patients with head-and-neck cancer (HN) and five patients with prostate cancer were randomly chosen for this study. The clinically approved VMAT plans were designed with 2–4 coplanar arc beams with non-zero collimator angles in the Pinnacle planning system. The systematic MLC PEs of 0.5, 1.0, and 2.0 mm on both MLC banks were introduced into the original VMAT plans using an in-house program, and recalculated with the same planned Monitor Units in the Pinnacle system. For each patient, the original VMAT plans and plans with MLC PEs were evaluated according to the dose-volume histogram information and Gamma index analysis. Results: For one primary target, the ratio of V100 in the plans with 0.5, 1.0, and 2.0 mm MLC PEs to those in the clinical plans was 98.8 ± 2.2%, 97.9 ± 2.1%, 90.1 ± 9.0% for HN cases and 99.5 ± 3.2%, 98.9 ± 1.0%, 97.0 ± 2.5% for prostate cases. For all OARs, the relative difference of Dmean in all plans was less than 1.5%. With 2mm/2% criteria for Gamma analysis, the passing rates were 99.0 ± 1.5% for HN cases and 99.7 ± 0.3% for prostate cases between the planar doses from the original plans and the plans with 1.0 mm MLC errors. The corresponding Gamma passing rates dropped to 88.9 ± 5.3% for HN cases and 83.4 ± 3.2% for prostate cases when comparing planar doses from the original plans and the plans with 2.0 mm MLC errors. Conclusion: For VMAT plans, systematic MLC PEs up to 1.0 mm did not affect the plan quality in terms of target coverage, OAR sparing, and Gamma analysis with 2mm/2% criteria.

  14. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.

  15. Evaluation of the ability of a 2D ionisation chamber array and an EPID to detect systematic delivery errors in IMRT plans

    NASA Astrophysics Data System (ADS)

    Bawazeer, Omemh; Gray, Alison; Arumugam, Sankar; Vial, Philip; Thwaites, David; Descallar, Joseph; Holloway, Lois

    2014-03-01

    Two clinical intensity modulated radiotherapy plans were selected. Eleven plan variations were created with systematic errors introduced: Multi-Leaf Collimator (MLC) positional errors with all leaf pairs shifted in the same or the opposite direction, and collimator rotation offsets. Plans were measured using an Electronic Portal Imaging Device (EPID) and an ionisation chamber array. The plans were evaluated using gamma analysis with different criteria. The gamma pass rates remained around 95% or higher for most cases with MLC positional errors of 1 mm and 2 mm with 3%/3mm criteria. The ability of both devices to detect delivery errors was similar.
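
    A toy one-dimensional gamma computation in the spirit of the analysis above (clinical evaluations are 2D, and the profiles, 2%/2 mm criteria, and shifts here are synthetic) shows why a 1 mm shift still passes comfortably while larger shifts begin to fail.

    ```python
    import numpy as np

    def gamma_1d(dose_ref, dose_eval, x, dd=0.02, dta=2.0):
        """1D global gamma index: dd = fractional dose criterion, dta in mm."""
        g = np.empty(dose_ref.size)
        dmax = dose_ref.max()
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            dose_term = (dose_eval - di) / (dd * dmax)
            dist_term = (x - xi) / dta
            g[i] = np.sqrt(dose_term**2 + dist_term**2).min()
        return g

    x = np.linspace(0, 100, 501)             # positions in mm
    ref = np.exp(-((x - 50) / 15) ** 2)      # reference dose profile
    for shift in (1.0, 2.0, 4.0):            # systematic "MLC" shifts in mm
        ev = np.exp(-((x - 50 - shift) / 15) ** 2)
        rate = 100 * np.mean(gamma_1d(ref, ev, x) <= 1)
        print(f"{shift:.0f} mm shift: pass rate {rate:.1f}%")
    ```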

  16. Estimating the Standard Error of the Maximum Likelihood Ability Estimator in Adaptive Testing Using the Posterior-Weighted Test Information Function

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2007-01-01

    The standard error of the maximum likelihood ability estimator is commonly estimated by evaluating the test information function at an examinee's current maximum likelihood estimate (a point estimate) of ability. Because the test information function evaluated at the point estimate may differ from the test information function evaluated at an…
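
    The quantity at issue can be computed directly: the SE is the reciprocal square root of the test information evaluated at the current ability estimate — exactly the point-estimate evaluation that the posterior-weighted alternative is meant to improve on. The 2PL model and item parameters below are illustrative assumptions.

    ```python
    import numpy as np

    def item_info_2pl(theta, a, b):
        """Fisher information of a 2PL item at ability theta."""
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return a**2 * p * (1.0 - p)

    a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # hypothetical discriminations
    b = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # hypothetical difficulties

    theta_hat = 0.4                            # current ML point estimate
    info = item_info_2pl(theta_hat, a, b).sum()
    print(f"test information {info:.2f} -> SE(theta_hat) = {1 / np.sqrt(info):.3f}")
    ```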

  17. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
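
    The second step — a nonlinear least-squares fit of a pole-zero-gain model to a measured response — can be sketched as follows. The pole positions, gain, and noise level are invented, not an actual GSN instrument response, and the grid-search initialization and per-frequency error bounds of the paper are omitted.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def pz_response(f, zeros, poles, gain):
        """Laplace-domain pole-zero response evaluated at frequencies f (Hz)."""
        s = 2j * np.pi * f
        num = np.prod([s - z for z in zeros], axis=0) if zeros else 1.0
        den = np.prod([s - p for p in poles], axis=0)
        return gain * num / den

    # Hypothetical "true" instrument and a noisy synthetic calibration response.
    f = np.logspace(-3, 1, 200)
    true = pz_response(f, [0, 0], [-0.037 + 0.037j, -0.037 - 0.037j], 60.0)
    rng = np.random.default_rng(7)
    meas = true * (1 + 0.01 * rng.standard_normal(f.size))

    def resid(x):  # fit the conjugate pole pair and gain; zeros held fixed
        model = pz_response(f, [0, 0], [x[0] + 1j * x[1], x[0] - 1j * x[1]], x[2])
        r = meas / model - 1.0
        return np.concatenate([r.real, r.imag])

    fit = least_squares(resid, x0=[-0.05, 0.05, 50.0])
    print("recovered [Re(pole), Im(pole), gain]:", np.round(fit.x, 4))
    ```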

  18. Wrinkles in the rare biosphere: Pyrosequencing errors can lead to artificial inflation of diversity estimates

    SciTech Connect

    Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip

    2009-08-01

    Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.

  19. Estimation via corrected scores in general semiparametric regression models with error-prone covariates

    PubMed Central

    Maity, Arnab; Apanasovich, Tatiyana V.

    2011-01-01

    This paper considers the problem of estimation in a general semiparametric regression model when error-prone covariates are modeled parametrically while covariates measured without error are modeled nonparametrically. To account for the effects of measurement error, we apply a correction to a criterion function. The specific form of the correction proposed allows Monte Carlo simulations in problems for which the direct calculation of a corrected criterion is difficult. Therefore, in contrast to methods that require solving integral equations of possibly multiple dimensions, as in the case of multiple error-prone covariates, we propose methodology which offers a simple implementation. The resulting methods are functional, they make no assumptions about the distribution of the mismeasured covariates. We utilize profile kernel and backfitting estimation methods and derive the asymptotic distribution of the resulting estimators. Through numerical studies we demonstrate the applicability of proposed methods to Poisson, logistic and multivariate Gaussian partially linear models. We show that the performance of our methods is similar to a computationally demanding alternative. Finally, we demonstrate the practical value of our methods when applied to Nevada Test Site (NTS) Thyroid Disease Study data. PMID:22773940

  20. Estimation via corrected scores in general semiparametric regression models with error-prone covariates.

    PubMed

    Maity, Arnab; Apanasovich, Tatiyana V

    2011-01-01

    This paper considers the problem of estimation in a general semiparametric regression model when error-prone covariates are modeled parametrically while covariates measured without error are modeled nonparametrically. To account for the effects of measurement error, we apply a correction to a criterion function. The specific form of the correction proposed allows Monte Carlo simulations in problems for which the direct calculation of a corrected criterion is difficult. Therefore, in contrast to methods that require solving integral equations of possibly multiple dimensions, as in the case of multiple error-prone covariates, we propose methodology which offers a simple implementation. The resulting methods are functional, they make no assumptions about the distribution of the mismeasured covariates. We utilize profile kernel and backfitting estimation methods and derive the asymptotic distribution of the resulting estimators. Through numerical studies we demonstrate the applicability of proposed methods to Poisson, logistic and multivariate Gaussian partially linear models. We show that the performance of our methods is similar to a computationally demanding alternative. Finally, we demonstrate the practical value of our methods when applied to Nevada Test Site (NTS) Thyroid Disease Study data. PMID:22773940

  1. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and can mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP can be identified as the sample followed by a large entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by modeling the regression between the characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
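
    The entropy-decrease detection can be sketched as below: the amplitude-histogram entropy of a sliding window drops sharply when the strong first path enters the window. The window length, pulse model, and argmin decision rule are simplifications assumed for illustration, not the paper's exact procedure.

    ```python
    import numpy as np

    def window_entropy(x, bins=16):
        """Shannon entropy of the amplitude histogram of one signal window."""
        hist, _ = np.histogram(x, bins=bins)
        p = hist[hist > 0] / hist.sum()
        return -(p * np.log2(p)).sum()

    rng = np.random.default_rng(8)
    sig = rng.standard_normal(1000)                # receiver noise
    sig[300:] += 8 * np.exp(-np.arange(700) / 40)  # first path arrives at 300

    W = 50
    ent = np.array([window_entropy(sig[i:i + W]) for i in range(sig.size - W)])
    toa = np.argmin(np.diff(ent)) + W              # sharpest entropy decrease
    print(f"estimated first-path arrival near sample {toa} (true: 300)")
    ```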

  2. Allowance for random dose estimation errors in atomic bomb survivor studies: a revision.

    PubMed

    Pierce, Donald A; Vaeth, Michael; Cologne, John B

    2008-07-01

    Allowing for imprecision of radiation dose estimates for A-bomb survivors followed up by the Radiation Effects Research Foundation can be improved through recent statistical methodology. Since the entire RERF dosimetry system has recently been revised, it is timely to reconsider this. We have found that the dosimetry revision itself does not warrant changes in these methods but that the new methodology does. In addition to assumptions regarding the form and magnitude of dose estimation errors, previous and current methods involve the apparent distribution of true doses in the cohort. New formulas give results conveniently and explicitly in terms of these inputs. Further, it is now possible to use assumptions about two components of the dose errors, referred to in the statistical literature as "classical" and "Berkson-type". There are indirect statistical indications, involving non-cancer biological effects, that errors may be somewhat larger than assumed before, in line with recommendations made here. Inevitably, methods must rely on uncertain assumptions about the magnitude of dose errors, and it is comforting to find that, within the range of plausibility, eventual cancer risk estimates are not very sensitive to these. PMID:18582151

  3. Conditional standard error of measurement and personality scale scores: an investigation of classical test theory estimates with four MMPI scales

    Microsoft Academic Search

    Robert Saltstone; Colin Skinner; Paul Tremblay

    2001-01-01

    This study is a preliminary examination of the fit of three classical test theory models of the standard error of measurement to selected personality scale (MMPI) score retest data. The three models compared are the conventional standard error of measurement formula, Lord's (1955) conditional standard error of measurement (Lord, F. M. (1955). Estimating test reliability. Educational and Psychological Measurement, 15, 325–336)

  4. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    PubMed Central

    Pekünlü, Ekim; Özsu, ?lbilge

    2014-01-01

    There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first “n” trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (The Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93–0.98) and coefficients of variation (3.4–7.0%). The Wilcoxon signed-rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9–10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors. PMID:25414753

  5. A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asymptotically efficient estimates from ecologically sampled Anopheles arabiensis aquatic habitat covariates

    Microsoft Academic Search

    Benjamin G Jacob; Daniel A Griffith; Ephantus J Muturi; Erick X Caamano; John I Githure; Robert J Novak

    2009-01-01

    BACKGROUND: Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct

  6. Unified description of efficiency correction and error estimation for moments of conserved quantities in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Luo, Xiaofeng

    2015-03-01

    I provide a unified description of efficiency correction and error estimation for moments of conserved quantities in heavy-ion collisions. Moments and cumulants are expressed in terms of the factorial moments, which can be easily corrected for the efficiency effect. By deriving the covariance between factorial moments, one can obtain the general error formula for the efficiency-corrected moments based on the error propagation derived from the Delta theorem. A Skellam-distribution-based Monte Carlo simulation is used to test the Delta theorem and bootstrap error estimation methods. The statistical errors calculated from the two methods reflect well the statistical fluctuations of the efficiency-corrected moments.
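
    The factorial-moment correction and the bootstrap half of the comparison can be sketched for a single particle species detected with a constant efficiency. The toy multiplicity distribution and efficiency below are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    eff = 0.65                          # assumed constant detection efficiency
    true_n = rng.poisson(10.0, 20_000)  # toy "true" multiplicity per event
    meas = rng.binomial(true_n, eff)    # binomial efficiency loss

    def corrected_c2(n, eff):
        """Efficiency-corrected variance (second cumulant) via factorial
        moments: F_k(measured) = eff**k * F_k(true), so divide by eff**k."""
        f1 = n.mean() / eff
        f2 = (n * (n - 1.0)).mean() / eff**2
        return f2 + f1 - f1**2          # C2 = <N(N-1)> + <N> - <N>**2

    # Bootstrap error: resample events with replacement and repeat.
    boot = np.array([corrected_c2(rng.choice(meas, meas.size, replace=True), eff)
                     for _ in range(200)])
    print("corrected C2:", corrected_c2(meas, eff), "+/-", boot.std(ddof=1))
    ```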

  7. Macroscale water fluxes 1. Quantifying errors in the estimation of basin mean precipitation

    NASA Astrophysics Data System (ADS)

    Milly, P. C. D.; Dunne, K. A.

    2002-10-01

    Developments in analysis and modeling of continental water and energy balances are hindered by the limited availability and quality of observational data. The lack of information on error characteristics of basin water supply is an especially serious limitation. Here we describe the development and testing of methods for quantifying several errors in basin mean precipitation, both in the long-term mean and in the monthly and annual anomalies. To quantify errors in the long-term mean, two error indices are developed and tested with positive results. The first provides an estimate of the variance of the spatial sampling error of long-term basin mean precipitation obtained from a gauge network, in the absence of orographic effects; this estimate is obtained by use only of the gauge records. The second gives a simple estimate of the basin mean orographic bias as a function of the topographic structure of the basin and the locations of gauges therein. Neither index requires restrictive statistical assumptions (such as spatial homogeneity) about the precipitation process. Adjustments of precipitation for gauge bias and estimates of the adjustment errors are made by applying results of a previous study. Additionally, standard correlation-based methods are applied for the quantification of spatial sampling errors in the estimation of monthly and annual values of basin mean precipitation. These methods also perform well, as indicated by network subsampling tests in densely gauged basins. The methods are developed and applied with data for 175 large (median area of 51,000 km2) river basins of the world for which contemporaneous, continuous (missing fewer than 2% of data values), long-term (median record length of 54 years) river discharge records are also available. Spatial coverage of the resulting river basin data set is greatest in the middle latitudes, though many basins are located in the tropics and the high latitudes, and the data set spans the major climatic and vegetation zones of the world. This new data set can be applied in diagnostic and theoretical studies of water balance of large basins and in the evaluation of performance of global models of land water balance.

  8. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests.

    PubMed

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-11-01

    The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects of four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world. PMID:25151444
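
    A 2^4 full factorial design of this kind is simply the enumeration of all low/high combinations of the four factors. A minimal sketch follows; the factor names mirror the abstract, while the coded levels are placeholders, not the physical settings used by the authors.

    ```python
    from itertools import product

    # The four environmental factors varied in the study; the coded low/high
    # levels (-1/+1) are placeholders, not the authors' physical settings.
    factors = {
        "ambient_temperature": (-1, +1),
        "ambient_pressure":    (-1, +1),
        "water_vapour":        (-1, +1),
        "headspace_gas":       (-1, +1),
    }

    # 2**4 = 16 runs: every combination of low/high levels.
    runs = list(product(*factors.values()))
    for i, levels in enumerate(runs, start=1):
        print(f"run {i:2d}:", dict(zip(factors, levels)))
    ```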

  9. Expected Estimating Equation using Calibration Data for Generalized Linear Models with a Mixture of Berkson and Classical Errors in Covariates

    PubMed Central

    de Dieu Tapsoba, Jean; Lee, Shen-Ming; Wang, Ching-Yun

    2013-01-01

    Data collected in many epidemiological or clinical research studies are often contaminated with measurement errors that may be of classical or Berkson error type. The measurement error may also be a combination of both classical and Berkson errors and failure to account for both errors could lead to unreliable inference in many situations. We consider regression analysis in generalized linear models when some covariates are prone to a mixture of Berkson and classical errors and calibration data are available only for some subjects in a subsample. We propose an expected estimating equation approach to accommodate both errors in generalized linear regression analyses. The proposed method can consistently estimate the classical and Berkson error variances based on the available data, without knowing the mixture percentage. Its finite-sample performance is investigated numerically. Our method is illustrated by an application to real data from an HIV vaccine study. PMID:24009099

  11. Using heteroskedasticity-consistent standard error estimators in OLS regression: an introduction and software implementation.

    PubMed

    Hayes, Andrew F; Cai, Li

    2007-11-01

    Homoskedasticity is an important assumption in ordinary least squares (OLS) regression. Although the estimator of the regression parameters in OLS regression is unbiased when the homoskedasticity assumption is violated, the estimator of the covariance matrix of the parameter estimates can be biased and inconsistent under heteroskedasticity, which can produce significance tests and confidence intervals that are liberal or conservative. After a brief description of heteroskedasticity and its effects on inference in OLS regression, we discuss a family of heteroskedasticity-consistent standard error estimators for OLS regression and argue that investigators should routinely use one of these estimators when conducting hypothesis tests using OLS regression. To facilitate the adoption of this recommendation, we provide easy-to-use SPSS and SAS macros to implement the procedures discussed here. PMID:18183883
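
    A heteroskedasticity-consistent estimator of the kind discussed can be written directly in a few lines. The sketch below is a generic HC3 sandwich estimator, not the authors' SPSS/SAS macros; the toy data are an assumption for illustration.

    ```python
    import numpy as np

    def ols_hc3(X, y):
        """OLS point estimates with HC3 heteroskedasticity-consistent
        standard errors (a MacKinnon-White small-sample variant)."""
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)    # leverages h_ii
        omega = resid**2 / (1.0 - h)**2                # HC3 weights
        cov = XtX_inv @ (X.T * omega) @ X @ XtX_inv    # sandwich estimator
        return beta, np.sqrt(np.diag(cov))

    # Toy data whose noise variance grows with x (heteroskedastic).
    rng = np.random.default_rng(0)
    x = rng.uniform(0.0, 10.0, 200)
    y = 1.0 + 0.5 * x + rng.normal(0.0, 0.2 * x)
    X = np.column_stack([np.ones_like(x), x])
    print(ols_hc3(X, y))
    ```

    The same family of estimators is also available in statsmodels, e.g. `sm.OLS(y, X).fit(cov_type="HC3")`.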

  12. Mass load estimation errors utilizing grab sampling strategies in a karst watershed

    USGS Publications Warehouse

    Fogle, A.W.; Taraba, J.L.; Dinger, J.S.

    2003-01-01

    Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.

  13. Real-Time Baseline Error Estimation and Correction for GNSS/Strong Motion Seismometer Integration

    NASA Astrophysics Data System (ADS)

    Li, C. Y. N.; Groves, P. D.; Ziebart, M. K.

    2014-12-01

    Accurate and rapid estimation of permanent surface displacement is required immediately after a slip event for earthquake monitoring or tsunami early warning. It is difficult to achieve the necessary accuracy and precision at high and low frequencies using GNSS or seismometry alone. GNSS and seismic sensors can be integrated to overcome the limitations of each. Kalman filter algorithms with displacement and velocity states have been developed to combine GNSS and accelerometer observations to obtain optimal displacement solutions. However, the sawtooth-like phenomena caused by the bias or tilting of the sensor decrease the accuracy of the displacement estimates. A three-dimensional Kalman filter algorithm with an additional baseline error state has been developed. An experiment with both a GNSS receiver and a strong motion seismometer mounted on a movable platform and subjected to known displacements was carried out. The results clearly show that the additional baseline error state enables the Kalman filter to estimate the instrument's sensor bias and tilt effects and correct the state estimates in real time. Furthermore, the proposed Kalman filter algorithm has been validated with data sets from the 2010 Mw 7.2 El Mayor-Cucapah Earthquake. The results indicate that the additional baseline error state can not only eliminate the linear and quadratic drifts but also reduce the sawtooth-like effects in the displacement solutions. The conventional zero-mean baseline-corrected results cannot show the permanent displacements after an earthquake; the two-state Kalman filter can only provide stable and optimal solutions if the strong motion seismometer has not been moved or tilted by the earthquake. Yet the proposed Kalman filter can achieve precise and accurate displacements by estimating and correcting for the baseline error at each epoch. The integration filters out noise-like distortions and thus improves the real-time detection and measurement capability. The system will return precise and accurate displacements at a high rate for real-time earthquake monitoring.
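
    The idea of augmenting a displacement/velocity filter with a baseline error state can be illustrated in one dimension. The state model and noise levels below are illustrative assumptions, not the authors' three-dimensional implementation.

    ```python
    import numpy as np

    def kf_gnss_accel(z_gnss, accel, dt, q_acc=1e-2, q_bias=1e-6, r_gnss=1e-4):
        """Minimal 1-D Kalman filter with states [displacement, velocity,
        accelerometer baseline error]. The measured acceleration drives the
        prediction; GNSS displacement is the update. Noise levels are
        illustrative placeholders."""
        F = np.array([[1.0, dt, -0.5 * dt**2],
                      [0.0, 1.0, -dt],
                      [0.0, 0.0, 1.0]])       # baseline error subtracts from accel
        B = np.array([0.5 * dt**2, dt, 0.0])  # input coupling of measured accel
        H = np.array([[1.0, 0.0, 0.0]])       # GNSS observes displacement only
        Q = np.diag([0.0, q_acc * dt, q_bias * dt])
        x, P, out = np.zeros(3), np.eye(3), []
        for z, a in zip(z_gnss, accel):
            x = F @ x + B * a                 # predict with measured acceleration
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + r_gnss          # innovation variance
            K = (P @ H.T) / S                 # Kalman gain
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(3) - K @ H) @ P
            out.append(x.copy())
        return np.array(out)                  # columns: displacement, velocity, bias
    ```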

  14. Assessing the uncertainties on seismic source parameters: Towards realistic error estimates for centroid-moment-tensor determinations

    NASA Astrophysics Data System (ADS)

    Valentine, Andrew P.; Trampert, Jeannot

    2012-11-01

    The centroid-moment-tensor (CMT) algorithm provides a straightforward, rapid method for the determination of seismic source parameters from waveform data. As such, it has found widespread application, and catalogues of CMT solutions - particularly the catalogue maintained by the Global CMT Project - are routinely used by geoscientists. However, there have been few attempts to quantify the uncertainties associated with any given CMT determination: whilst catalogues typically quote a 'standard error' for each source parameter, these are generally accepted to significantly underestimate the true scale of uncertainty, as all systematic effects are ignored. This prevents users of source parameters from properly assessing possible impacts of this uncertainty upon their own analysis. The CMT algorithm determines the best-fitting source parameters within a particular modelling framework, but any deficiencies in this framework may lead to systematic errors. As a result, the minimum-misfit source may not be equivalent to the 'true' source. We suggest a pragmatic solution to uncertainty assessment, based on accepting that any 'low-misfit' source may be a plausible model for a given event. The definition of 'low-misfit' should be based upon an assessment of the scale of potential systematic effects. We set out how this can be used to estimate the range of values that each parameter might take, by considering the curvature of the misfit function as minimised by the CMT algorithm. This approach is computationally efficient, with cost similar to that of performing an additional iteration during CMT inversion for each source parameter to be considered. The source inversion process is sensitive to the various choices that must be made regarding dataset, earth model and inversion strategy, and for best results, uncertainty assessment should be performed using the same choices. Unfortunately, this information is rarely available when sources are obtained from catalogues. As already indicated by Valentine and Woodhouse (2010), researchers conducting comparisons between data and synthetic waveforms must ensure that their approach to forward-modelling is consistent with the source parameters used; in practice, this suggests that they should consider performing their own source inversions. However, it is possible to obtain rough estimates of uncertainty using only forward-modelling.
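
    The curvature-based range estimate can be sketched as follows: approximate the misfit as locally quadratic at its minimum and report the interval over which it stays within an acceptable increase. The misfit function and threshold below are illustrative assumptions, not the Global CMT machinery.

    ```python
    import numpy as np

    def parameter_range(misfit, m_best, dm, threshold):
        """Plausible range of one source parameter from the curvature of the
        misfit function at its minimum: any model whose misfit increase stays
        below `threshold` (set from the scale of systematic effects) is
        accepted. `misfit` is a callable, `dm` a finite-difference step."""
        f0 = misfit(m_best)
        # Central second difference approximates the curvature at the minimum.
        curv = (misfit(m_best + dm) - 2.0 * f0 + misfit(m_best - dm)) / dm**2
        # Local quadratic model: misfit(m) ~ f0 + 0.5 * curv * (m - m_best)**2.
        half_width = np.sqrt(2.0 * threshold / curv)
        return m_best - half_width, m_best + half_width

    # Toy quadratic misfit around a "true" parameter value of 1.0.
    print(parameter_range(lambda m: 5.0 * (m - 1.0)**2, 1.0, 1e-3, 0.1))
    ```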

  15. Forest canopy height estimation using ICESat/GLAS data and error factor analysis in Hokkaido, Japan

    NASA Astrophysics Data System (ADS)

    Hayashi, Masato; Saigusa, Nobuko; Oguma, Hiroyuki; Yamagata, Yoshiki

    2013-07-01

    Spaceborne light detection and ranging (LiDAR) enables us to obtain information about vertical forest structure directly, and it has often been used to measure forest canopy height or above-ground biomass. However, little attention has been given to comparisons of the accuracy of the different estimation methods of canopy height or to the evaluation of the error factors in canopy height estimation. In this study, we tested three methods of estimating canopy height using the Geoscience Laser Altimeter System (GLAS) onboard NASA's Ice, Cloud, and land Elevation Satellite (ICESat), and evaluated several factors that affected accuracy. Our study areas were Tomakomai and Kushiro, two forested areas on Hokkaido in Japan. The accuracy of the canopy height estimates was verified by ground-based measurements. We also conducted a multivariate analysis using quantification theory type I (multiple-regression analysis of qualitative data) and identified the observation conditions that had a large influence on estimation accuracy. The method using the digital elevation model was the most accurate, with a root-mean-square error (RMSE) of 3.2 m. However, GLAS data with a low signal-to-noise ratio (≤ 10.0) and that taken from September to October 2009 had to be excluded from the analysis because the estimation accuracy of canopy height was remarkably low. After these data were excluded, the multivariate analysis showed that surface slope had the greatest effect on estimation accuracy, and the accuracy dropped the most in steeply sloped areas. We developed a second model with two equations to estimate canopy height depending on the surface slope, which improved estimation accuracy (RMSE = 2.8 m). These results should prove useful and provide practical suggestions for estimating forest canopy height using spaceborne LiDAR.

  16. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  17. A posteriori error estimates with application of adaptive mesh refinement for thermal multiphase compositional flows in porous

    E-print Network

    Paris-Sud XI, Université de

    A posteriori error estimates with application of adaptive mesh refinement for thermal multiphase compositional flows in porous media are derived. A space-time adaptive mesh refinement algorithm based on the estimators is proposed, and a considerable gain in terms of mesh cells can be achieved. Key words: a posteriori error analysis, adaptive mesh refinement.

  18. Can a more realistic model error structure improve the parameter estimation in modelling the dynamics of fish populations?

    E-print Network

    Chen, Yong

    Using a more realistic model error structure, or applying an estimation method that is robust to the error structure assumption, may improve parameter estimation in modelling the dynamics of fish populations.

  19. Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric propagation delays

    USGS Publications Warehouse

    Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.

    2007-01-01

    When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.

  20. Finding systematic errors in tomographic data: Characterising ion-trap quantum computers

    NASA Astrophysics Data System (ADS)

    Monz, Thomas

    2013-03-01

    Quantum state tomography has become a standard tool in quantum information processing to extract information about an unknown state. Several recipes exist to post-process the data and obtain a density matrix; for instance using maximum-likelihood estimation. These evaluations, and all conclusions taken from the density matrices, however, rely on valid data - meaning data that agrees both with the measurement model and a quantum model within statistical uncertainties. Given the wide span of possible discrepancies between laboratory and theory model, data ought to be tested for its validity prior to any subsequent evaluation. The presented talk will provide an overview of such tests, which are easily implemented. These will then be applied to tomographic data from an ion-trap quantum computer.

  1. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    PubMed Central

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and 14-month-olds’ responses to the omission of a recurring target, on either a 3- or 5-s cycle. At all ages (a) both fixation and pupil dilation measures were time locked to the periodicity of the test interval, and (b) estimation errors grew linearly with the length of the interval, suggesting that trademark interval timing is in place from 4 months. PMID:24979472

  2. Summary: We analyzed assumptions and measurement errors in estimating canopy transpiration (EL) from sap flux (JS)

    E-print Network

    Ewers, Brent E.

    We analyzed assumptions and measurement errors in estimating canopy transpiration (EL) from sap flux (JS), driven by variation in vapour pressure deficit (D). JS was used to estimate transpiration after accounting for radial patterns, and accurate relative humidity and temperature measurements are needed to keep errors in GS estimates to less than 10%.

  3. Estimating market power in homogeneous product markets using a composed error model

    E-print Network

    Orea, Luis; Steinbuks, Jevgenijs

    2012-04-25

    The composed error term includes a market-power component. The main contribution of the proposed approach is the way the asymmetry of the composed error term is employed to obtain firm-specific market power estimates.

  4. Motion-Induced Phase Error Estimation and Correction in 3D Diffusion Tensor Imaging

    Microsoft Academic Search

    Anh T. Van; Diego Hernando; Bradley P. Sutton

    2011-01-01

    A multishot data acquisition strategy is one way to mitigate B0 distortion and blurring for high-resolution diffusion-weighted magnetic resonance imaging experiments. However, different object motions that take place during different shots cause phase inconsistencies in the data, leading to significant image artifacts. This work proposes a maximum likelihood estimation and k-space correction of motion-induced phase errors in 3D multishot

  5. On the extreme accuracy of maximum entropy spectrum estimation from an error-free autocorrelation function

    Microsoft Academic Search

    Paul F. Fougere; Hanscom AFB

    1987-01-01

    The Maximum Entropy Method (MEM) is compared to the periodogram method (DFT) for the estimation of line spectra given an error-free autocorrelation function (ACF). In one computer simulation run, a 250-lag ACF was generated as the sum of 63 cosinusoids with given amplitudes, A_i, and wave numbers, f_i. The wave numbers cover a band from 0 to 89.239 cm^-1 with

  6. Filtering Error Estimates and Order of Accuracy via the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2011-02-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise. The concept of the order of accuracy of a filter is introduced and used as an organizing principle to compare the accuracy of different filters.

  7. Statistical Analysis of CCD Data: Error Analysis/Noise Theorem

    E-print Network

    Peletier, Reynier

    Lecture notes on the statistical analysis of CCD data, covering: why a statistical approach is needed; systematic versus random (statistical) errors; accuracy and precision; and best estimators (mean, median).

  8. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
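
    Extrapolation from a base grid to an "infinite-size" grid is commonly done by Richardson extrapolation over a sequence of refined grids. A generic sketch follows; the coefficient values are made up for illustration and are not the Ares I data.

    ```python
    import numpy as np

    def richardson_extrapolate(f_coarse, f_medium, f_fine, r):
        """Infinite-grid value and discretization error estimate from three
        grid levels with a constant refinement ratio r (standard grid
        convergence procedure)."""
        # Observed order of accuracy from the three solutions.
        p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
        # Extrapolated "infinite-size grid" value.
        f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
        return f_exact, abs(f_fine - f_exact), p

    # Made-up normal-force coefficients from coarse/medium/fine grids.
    print(richardson_extrapolate(1.120, 1.060, 1.030, r=2.0))
    ```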

  9. Error analysis of leaf area estimates made from allometric regression models

    NASA Technical Reports Server (NTRS)

    Feiveson, A. H.; Chhikara, R. S.

    1986-01-01

    Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).

  10. Height Estimation and Error Assessment of Inland Water Level Time Series calculated by a Kalman Filter Approach using Multi-Mission Satellite Altimetry

    NASA Astrophysics Data System (ADS)

    Schwatke, Christian; Dettmering, Denise; Boergens, Eva

    2015-04-01

    Originally designed for open ocean applications, satellite radar altimetry can also contribute promising results over inland waters. Its measurements help to understand the water cycle of the Earth system, making altimetry a very useful instrument for hydrology. In this paper, we present our methodology for estimating water level time series over lakes, rivers, reservoirs, and wetlands. Furthermore, the error estimation of the resulting water level time series is demonstrated. Multi-mission satellite altimetry data are used for computing the water level time series. The estimation is based on altimeter data from Topex, Jason-1, Jason-2, Geosat, IceSAT, GFO, ERS-2, Envisat, Cryosat, HY-2A, and Saral/Altika, depending on the location of the water body. Depending on the extent of the investigated water body, 1 Hz, high-frequency, or retracked altimeter measurements can be used. Classification methods such as Support Vector Machine (SVM) and Support Vector Regression (SVR) are applied for the classification of altimeter waveforms and for rejecting outliers. For estimating the water levels we use a Kalman filter approach applied to the grid nodes of a hexagonal grid covering the water body of interest. After applying an error limit on the resulting water level heights of each grid node, a weighted average water level per point of time is derived, referring to one reference location. For the estimation of water level height accuracies, the formal errors are first computed by applying full error propagation within the Kalman filtering; the precision of the input measurements is introduced via the standard deviation of the water level height along the altimeter track. In addition to the resulting formal errors of water level heights, uncertainties of the applied geophysical corrections (e.g. wet troposphere, ionosphere) and systematic error effects are taken into account to achieve more realistic error estimates. For validation of the time series, we compare our results with gauges and external inland altimeter databases (e.g. Hydroweb). We obtain very high correlations between absolute water level height time series from altimetry and gauges. Moreover, the comparisons of water level heights are also used for the validation of the error assessment. More than 200 water level time series have already been computed and made publicly available via the "Database for Hydrological Time Series of Inland Waters" (DAHITI) at http://dahiti.dgfi.tum.de.

  11. Back-and-forth Operation of State Observers and Norm Estimation of Estimation Error

    E-print Network

    This paper proposes a state estimation algorithm that executes Luenberger observers in a back-and-forth manner, recalling that the recursive form of the Kalman filter is preferred over its closed form. By operating the observer in the proposed manner, the effect of measurement noise is much relieved, and a norm estimate of the estimation error is obtained.

  12. Effects of flight instrumentation errors on the estimation of aircraft stability and control derivatives. [including Monte Carlo analysis

    NASA Technical Reports Server (NTRS)

    Bryant, W. H.; Hodge, W. F.

    1974-01-01

    An error analysis program based on an output error estimation method was used to evaluate the effects of sensor and instrumentation errors on the estimation of aircraft stability and control derivatives. A Monte Carlo analysis was performed using simulated flight data for a high performance military aircraft, a large commercial transport, and a small general aviation aircraft for typical cruise flight conditions. The effects of varying the input sequence and combinations of the sensor and instrumentation errors were investigated. The results indicate that both the parameter accuracy and the corresponding measurement trajectory fit error can be significantly affected. Of the error sources considered, instrumentation lags and control measurement errors were found to be most significant.

  13. Evaluation of errors made in solar irradiance estimation due to averaging the Angstrom turbidity coefficient

    NASA Astrophysics Data System (ADS)

    Calinoiu, Delia-Gabriela; Stefu, Nicoleta; Paulescu, Marius; Trif-Tordai, Gavril?; Mares, Oana; Paulescu, Eugenia; Boata, Remus; Pop, Nicolina; Pacurar, Angel

    2014-12-01

    Even though the monitoring of solar radiation has seen vast progress in recent years, both in terms of expanding measurement networks and increasing data quality, the number of stations is still too small to achieve accurate global coverage. Alternatively, various models for estimating solar radiation are exploited in many applications. Choosing a model is often limited by the availability of the meteorological parameters required for its running. In many cases the current values of the parameters are replaced with daily, monthly or even yearly average values. This paper deals with the evaluation of the error made in estimating global solar irradiance by using an average value of the Angstrom turbidity coefficient instead of its current value. A simple equation relating the relative variation of the global solar irradiance and the relative variation of the Angstrom turbidity coefficient is established. The theoretical result is complemented by a quantitative assessment of the errors made when hourly, daily, monthly or yearly average values of the Angstrom turbidity coefficient are used as input to a parametric solar irradiance model. The study was conducted with data recorded in 2012 at two AERONET stations in Romania. It is shown that the relative errors in estimating global solar irradiance (GHI) due to inadequate consideration of the Angstrom turbidity coefficient may be very high, even exceeding 20%. However, when an hourly or a daily average value is used instead of the current value of the Angstrom turbidity coefficient, the relative errors are acceptably small, in general less than 5%. All results prove that, in order to correctly reproduce GHI for various particular aerosol loadings of the atmosphere, parametric models should rely on hourly or daily Angstrom turbidity coefficient values rather than on the more usual monthly or yearly average data, if currently measured data are not available.

  14. Mean square displacements with error estimates from non-equidistant time-step kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Leetmaa, Mikael; Skorodumova, Natalia V.

    2015-06-01

    We present a method to calculate mean square displacements (MSD) with error estimates from kinetic Monte Carlo (KMC) simulations of diffusion processes with non-equidistant time-steps. An analytical solution for estimating the errors is presented for the special case of one moving particle at a fixed rate constant. The method is generalized to an efficient computational algorithm that can handle any number of moving particles or different rates in the simulated system. We show with examples that the proposed method gives the correct statistical error when the MSD curve describes pure Brownian motion and can otherwise be used as an upper bound for the true error.
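
    For the simpler equidistant-time-step case, an MSD estimator with a per-trajectory standard error can be sketched as below; the non-equidistant KMC weighting that the paper develops is not reproduced here, and the random-walk input is a stand-in for real hop sequences.

    ```python
    import numpy as np

    def msd_with_errors(trajs, max_lag):
        """Mean square displacement and standard error over independent
        trajectories. `trajs` has shape (n_traj, n_steps) with equidistant
        steps, a simplification of the non-equidistant KMC case."""
        n_traj = trajs.shape[0]
        msd, err = [], []
        for lag in range(1, max_lag + 1):
            disp2 = (trajs[:, lag:] - trajs[:, :-lag]) ** 2
            per_traj = disp2.mean(axis=1)   # average within each trajectory
            msd.append(per_traj.mean())
            err.append(per_traj.std(ddof=1) / np.sqrt(n_traj))
        return np.array(msd), np.array(err)

    # 1-D random walks as a stand-in for KMC hop sequences.
    rng = np.random.default_rng(2)
    walks = rng.choice([-1.0, 1.0], size=(200, 1000)).cumsum(axis=1)
    msd, err = msd_with_errors(walks, 10)
    print(msd)  # grows ~linearly with lag for Brownian-like motion
    ```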

  15. A learning-based wrapper method to correct systematic errors in automatic image segmentation: Consistently improved performance in hippocampus, cortex and brain segmentation

    Microsoft Academic Search

    Hongzhi Wang; Sandhitsu R. Das; Jung Wook Suh; Murat Altinay; John Pluta; Caryne Craige; Brian Avants; Paul A. Yushkevich

    2011-01-01

    We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method.

  16. Compensation technique for the intrinsic error in ultrasound motion estimation using a speckle tracking method

    NASA Astrophysics Data System (ADS)

    Taki, Hirofumi; Yamakawa, Makoto; Shiina, Tsuyoshi; Sato, Toru

    2015-07-01

    High-accuracy ultrasound motion estimation has become an essential technique in blood flow imaging, elastography, and motion imaging of the heart wall. Speckle tracking has been one of the best motion estimators; however, conventional speckle-tracking methods neglect the effect of out-of-plane motion and deformation. Our proposed method assumes that the cross-correlation between a reference signal and a comparison signal depends on the spatio-temporal distance between the two signals. The proposed method uses the decrease in the cross-correlation value in a reference frame to compensate for the intrinsic error caused by out-of-plane motion and deformation without a priori information. The root-mean-square error of the estimated lateral tissue motion velocity calculated by the proposed method ranged from 6.4 to 34% of that using a conventional speckle-tracking method. This study demonstrates the high potential of the proposed method for improving the estimation of tissue motion using an ultrasound speckle-tracking method in medical diagnosis.

  17. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.

  18. Error Analysis for Estimation of Greenland Ice Sheet Accumulation Rates from InSAR Data

    NASA Astrophysics Data System (ADS)

    Chen, A. C.; Zebker, H. A.

    2013-12-01

    Forming a mass budget for the Greenland Ice Sheet requires accurate measurements of both accumulation and ablation. Currently, most mass budgets use accumulation rate data from sparse in-situ ice core data, sometimes in conjunction with results from relatively low-resolution climate models. Yet there have also been attempts to estimate accumulation rates from remote sensing data, including SAR, InSAR, and satellite radar scatterometry data. However, the sensitivities, error sources, and confidence intervals in these remote sensing methods have not been well-characterized. We develop an error analysis for estimates of Greenland Ice Sheet accumulation rates in the dry-snow zone using SAR brightness and InSAR coherence data. The estimates are generated by inverting a forward model based on firn structure and electromagnetic scattering. We can then examine the associated error bars and sensitivity. We also model how these change when spatial smoothness assumptions are introduced and a regularized inversion is used. In this study, we use SAR and InSAR data from the L-band ALOS-PALSAR instrument (23-centimeter carrier wavelength) as a test-bed and in-situ measurements published by Bales et al. for comparison [1]. Finally, we use simulations to examine the ways in which estimation accuracy varies between X-band, C-band and L-band experiments. [1] R. C. Bales et al., "Accumulation over the Greenland ice sheet from historical and recent records," Journal of Geophysical Research, vol. 106, pp. 33813-33825, 2001.

  19. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
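
    The inflation of standard errors under clustering is often summarized by the Kish design effect, deff = 1 + (b - 1) * roh. The sketch below uses this classic first-order approximation with illustrative numbers; it is a related rule of thumb, not the paper's exact formulae for regression coefficients.

    ```python
    import numpy as np

    def clustered_se(se_srs, cluster_size, roh):
        """Inflate a simple-random-sampling standard error by the Kish design
        effect, deff = 1 + (b - 1) * roh, where b is the average cluster take
        and roh the intraclass correlation of the variable involved."""
        deff = 1.0 + (cluster_size - 1.0) * roh
        return se_srs * np.sqrt(deff)

    # e.g. 20 interviews per area with roh = 0.05 inflates the SE by ~40%.
    print(clustered_se(se_srs=0.10, cluster_size=20, roh=0.05))
    ```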

  20. Convergence Characteristics of D-State-Observer Using Speed Estimate with Huge Error for Sensorless PMSM Drive

    NASA Astrophysics Data System (ADS)

    Shinnaka, Shinji

    This paper presents new analyses of convergence characteristics of the D-state-observer using a speed estimate with huge error for sensorless drive of permanent-magnet synchronous motors, and demonstrates high convergence performance of the D-state-observer through experiments. It is analytically shown that the maximum error of the steady-state phase estimate by the D-state-observer can remain within ±π/4 (rad) even if a speed estimate with huge constant error is used in the observer. The experiments demonstrate that the D-state-observer, accompanied by the generalized integral-type PLL method, can correctly estimate the actual phase and speed of the rotor over the entire sensorless operation range, even if the estimation starts with huge initial errors.

  1. Outcome of adverse events and medical errors in the intensive care unit: a systematic review and meta-analysis.

    PubMed

    Ahmed, Adil H; Giri, Jyothsna; Kashyap, Rahul; Singh, Balwinder; Dong, Yue; Kilickaya, Oguz; Erwin, Patricia J; Murad, M Hassan; Pickering, Brian W

    2015-01-01

    Adverse events and medical errors (AEs/MEs) are more likely to occur in the intensive care unit (ICU). Information about the incidence and outcomes of such events is conflicting. A systematic review and meta-analysis were conducted to examine the effects of MEs/AEs on mortality and hospital and ICU lengths of stay among ICU patients. Potentially eligible studies were identified from 4 major databases. Of 902 studies screened, 12 met the inclusion criteria, 10 of which are included in the quantitative analysis. Patients with 1 or more MEs/AEs (vs no MEs/AEs) had a nonsignificant increase in mortality (odds ratio = 1.5; 95% confidence interval [CI] = 0.98-2.14) but significantly longer hospital and ICU stays; the mean difference (95% CI) was 8.9 (3.3-14.7) days for hospital stay and 6.8 (0.2-13.4) days for ICU. The ICU environment is associated with a substantial incidence of MEs/AEs, and patients with MEs/AEs have worse outcomes than those with no MEs/AEs. PMID:24357344

  2. Anisotropic finite elements for the Stokes problem: a posteriori error estimator and adaptive mesh

    NASA Astrophysics Data System (ADS)

    Randrianarivony, Maharavo

    2004-08-01

    We propose an a posteriori error estimator for the Stokes problem using the Crouzeix-Raviart/P0 pair. Its efficiency and reliability on highly stretched meshes are investigated. The analysis is based on hierarchical space splitting whose main ingredients are the strengthened Cauchy-Schwarz inequality and the saturation assumption. We give a theoretical proof of a method to enrich the Crouzeix-Raviart element so that the strengthened Cauchy constant is always bounded away from unity independently of the aspect ratio. An anisotropic self-adaptive mesh refinement approach for which the saturation assumption is valid will be described. Our theory is confirmed by corroborative numerical tests which include an internal layer, a boundary layer, a re-entrant corner and a crack simulation. A comparison of the exact error and the a posteriori one with respect to the aspect ratio will be demonstrated.

  3. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (principal investigators)

    1978-01-01

    The author has identified the following significant results. The probability of correct classification of various populations in the data was defined as the primary performance index. The multispectral data, being of a multiclass nature as well, required a Bayes error estimation procedure that was dependent on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear, shift-invariant, multiple-port system where the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.

  4. Learning coefficient of generalization error in Bayesian estimation and vandermonde matrix-type singularity.

    PubMed

    Aoyagi, Miki; Nagata, Kenji

    2012-06-01

    The term algebraic statistics arises from the study of probabilistic models and techniques for statistical inference using methods from algebra and geometry (Sturmfels, 2009). The purpose of our study is to consider the generalization error and stochastic complexity in learning theory by using the log-canonical threshold in algebraic geometry. Such thresholds correspond to the main term of the generalization error in Bayesian estimation, which is called a learning coefficient (Watanabe, 2001a, 2001b). The learning coefficient serves to measure the learning efficiencies in hierarchical learning models. In this letter, we consider learning coefficients for Vandermonde matrix-type singularities, by using a new approach: focusing on the generators of the ideal, which defines singularities. We give tight new bound values of learning coefficients for the Vandermonde matrix-type singularities and the explicit values with certain conditions. By applying our results, we can show the learning coefficients of three-layered neural networks and normal mixture models. PMID:22295979

  5. Error analysis based on nuclear liquid drop model

    E-print Network

    Cenxi Yuan

    2015-07-12

    A new method is suggested for estimating the statistical and systematic errors of a theoretical model. As an example, the method is applied to analyze the total error in nuclear binding energies between the observed values and the theoretical results from the liquid drop model (LDM). Based on the large number of data, the distribution of the total error is modeled as the sum of two normal distributions, standing for the statistical and systematic errors, respectively. The standard deviation of the statistical part, $\sigma_{stat}$, can be estimated by recalculating with randomly generated parameters following a normal distribution. The standard deviation of the systematic part and the mean values of the statistical and systematic parts can be obtained by minimizing the moments of the distribution of the total error with the estimated $\sigma_{stat}$. The estimated distributions of the statistical and systematic errors describe the distribution of the total error well. The statistical and systematic errors are estimated from the LDM with and without the consideration of the shell effect. It can be seen that both statistical and systematic errors are reduced after the inclusion of the shell correction.
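
    The decomposition into statistical and systematic parts can be sketched on a toy model: propagate randomly drawn parameters to get the statistical spread, then attribute the remaining variance to the systematic part under an independence assumption. Everything below, including the linear stand-in for the LDM, is illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy stand-in for the theoretical model; the LDM itself is not reproduced.
    x = np.linspace(0.0, 10.0, 200)
    observed = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)       # toy "data"
    p_mean, p_cov = np.array([2.0, 1.0]), np.diag([0.02, 0.05])   # parameter pdf

    def model(p):
        return p[0] * x + p[1]

    # Statistical part: spread of predictions under randomly drawn parameters.
    draws = rng.multivariate_normal(p_mean, p_cov, size=500)
    preds = np.array([model(p) for p in draws])
    sigma_stat = preds.std(axis=0).mean()

    # Systematic part: the total spread the statistical part cannot explain,
    # assuming the two components are independent.
    total = observed - model(p_mean)
    sigma_sys = np.sqrt(max(total.var(ddof=1) - sigma_stat**2, 0.0))
    print(f"sigma_stat ~ {sigma_stat:.3f}, sigma_sys ~ {sigma_sys:.3f}")
    ```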

  6. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

  7. Estimation of cortical magnification from positional error in normally sighted and amblyopic subjects.

    PubMed

    Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S; Barrett, Brendan T; McGraw, Paul V

    2015-01-01

    We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm deg^-1 at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341

  9. On the Estimation of Errors in Sparse Bathymetric Geophysical Data Sets

    NASA Astrophysics Data System (ADS)

    Jakobsson, M.; Calder, B.; Mayer, L.; Armstrong, A.

    2001-05-01

    There is a growing demand in the geophysical community for better regional representations of the world ocean's bathymetry. However, given the vastness of the oceans and the relative limited coverage of even the most modern mapping systems, it is likely that many of the older data sets will remain part of our cumulative database for several more decades. Therefore, regional bathymetrical compilations that are based on a mixture of historic and contemporary data sets will have to remain the standard. This raises the problem of assembling bathymetric compilations and utilizing data sets not only with a heterogeneous cover but also with a wide range of accuracies. In combining these data to regularly spaced grids of bathymetric values, which the majority of numerical procedures in earth sciences require, we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in most bathymetric compilations. We approach the problem of assessing the confidence via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available meta-data, and when this is not available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model. These datasets are then re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard error estimates. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final products of the estimation are a collection of standard error grids, which we combine with the source data density in order to create a grid that contains information about the bathymetry model's reliability. Jakobsson, M., Cherkis, N., Woodward, J., Coakley, B., and Macnab, R., 2000, A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions, American Geophysical Union, v. 81, no. 9, p. 89, 93, 96.
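
    The direct-simulation Monte Carlo step is straightforward to sketch: perturb each sounding by its a priori error, re-grid, and take per-node statistics over the runs. scipy's griddata stands in for the more elaborate interpolation actually used for IBCAO, and the soundings and error model below are synthetic assumptions.

    ```python
    import numpy as np
    from scipy.interpolate import griddata

    def mc_grid_uncertainty(xy, depth, sigma, gx, gy, n_runs=100, seed=0):
        """Direct-simulation Monte Carlo uncertainty for an interpolated grid:
        perturb each sounding by its a priori error, re-grid, and take the
        per-node standard deviation over the runs."""
        rng = np.random.default_rng(seed)
        grids = []
        for _ in range(n_runs):
            perturbed = depth + rng.normal(0.0, sigma)  # per-sounding errors
            grids.append(griddata(xy, perturbed, (gx, gy), method="linear"))
        grids = np.array(grids)
        return grids.mean(axis=0), grids.std(axis=0)    # mean and std-error grids

    # Synthetic soundings with heterogeneous a priori errors.
    rng = np.random.default_rng(1)
    xy = rng.uniform(0.0, 100.0, size=(300, 2))
    depth = 500.0 + 50.0 * np.sin(xy[:, 0] / 20.0) + rng.normal(0.0, 5.0, 300)
    sigma = rng.uniform(2.0, 30.0, 300)                 # a priori error model
    gx, gy = np.meshgrid(np.linspace(5, 95, 50), np.linspace(5, 95, 50))
    mean_grid, err_grid = mc_grid_uncertainty(xy, depth, sigma, gx, gy)
    ```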

  10. Wavelet-Generalized Least Squares: A New BLU Estimator of Linear Regression Models with 1/f Errors

    Microsoft Academic Search

    M. J. Fadili; E. T. Bullmore

    2002-01-01

    Long-memory noise is common to many areas of signal processing and can seriously confound estimation of linear regression model parameters and their standard errors. Classical autoregressive moving average (ARMA) methods can adequately address the problem of linear time invariant, short-memory errors but may be inefficient and/or insufficient to secure type 1 error control in the context of fractal or scale

  11. Nuclear gene phylogeography using PHASE: dealing with unresolved genotypes, lost alleles, and systematic bias in parameter estimation

    PubMed Central

    2010-01-01

    Background A widely-used approach for screening nuclear DNA markers is to obtain sequence data and use bioinformatic algorithms to estimate which two alleles are present in heterozygous individuals. It is common practice to omit unresolved genotypes from downstream analyses, but the implications of this have not been investigated. We evaluated the haplotype reconstruction method implemented by PHASE in the context of phylogeographic applications. Empirical sequence datasets from five non-coding nuclear loci with gametic phase ascribed by molecular approaches were coupled with simulated datasets to investigate three key issues: (1) haplotype reconstruction error rates and the nature of inference errors, (2) dataset features and genotypic configurations that drive haplotype reconstruction uncertainty, and (3) impacts of omitting unresolved genotypes on levels of observed phylogenetic diversity and the accuracy of downstream phylogeographic analyses. Results We found that PHASE usually had very low false-positives (i.e., a low rate of confidently inferring haplotype pairs that were incorrect). The majority of genotypes that could not be resolved with high confidence included an allele occurring only once in a dataset, and genotypic configurations involving two low-frequency alleles were disproportionately represented in the pool of unresolved genotypes. The standard practice of omitting unresolved genotypes from downstream analyses can lead to considerable reductions in overall phylogenetic diversity that is skewed towards the loss of alleles with larger-than-average pairwise sequence divergences, and in turn, this causes systematic bias in estimates of important population genetic parameters. Conclusions A combination of experimental and computational approaches for resolving phase of segregating sites in phylogeographic applications is essential. We outline practical approaches to mitigating potential impacts of computational haplotype reconstruction on phylogeographic inferences. With targeted application of laboratory procedures that enable unambiguous phase determination via physical isolation of alleles from diploid PCR products, relatively little investment of time and effort is needed to overcome the observed biases. PMID:20429950

  12. Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

    NASA Technical Reports Server (NTRS)

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

    2006-01-01

    In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
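
    The NSF relationship lends itself to a very short implementation. A minimal sketch, assuming the NSF is evaluated from a signal region with approximately constant mean (e.g., solar background):

    ```python
    import numpy as np

    def noise_scale_factor(background):
        """NSF from a region of roughly constant mean signal (e.g. solar
        background): RMS noise = NSF * sqrt(mean signal)."""
        background = np.asarray(background, dtype=float)
        return np.std(background, ddof=1) / np.sqrt(np.mean(background))

    def shot_noise_error(sample, nsf):
        """Random-error estimate for a single lidar sample, no averaging needed."""
        return nsf * np.sqrt(sample)
    ```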

  13. Estimating random errors due to shot noise in backscatter lidar observations.

    PubMed

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang

    2006-06-20

    We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson- distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment. PMID:16778954

  14. Error estimates for a finite element method for the drift-diffusion semiconductor device equations

    SciTech Connect

    Chen, Z.; Cockburn, B. (Univ. of Minnesota, Minneapolis, MN (United States))

    1994-08-01

    In this paper, optimal error estimates are obtained for a method for numerically solving the so-called unipolar model (a one-dimensional simplified version of the drift-diffusion semiconductor device equations). The numerical method combines a mixed finite element method using a continuous piecewise-linear approximation of the electric field with an explicit upwinding finite element method using a piecewise-constant approximation of the electron concentration. For initial and boundary data ensuring that the electron concentration is smooth, the L^∞

  15. Programming errors contribute to death from patient-controlled analgesia: case report and estimate of probability

    Microsoft Academic Search

    Kim J. Vicente; Karima Kada-Bekhaled; Gillian Hillel; Andrea Cassano; Beverley A. Orser

    2003-01-01

    Purpose: To identify the factors that threaten patient safety when using patient-controlled analgesia (PCA) and to obtain an evidence-based estimate of the probability of death from user programming errors associated with PCA. Clinical features: A 19-yr-old woman underwent Cesarean section and delivered a healthy infant. Postoperatively, morphine sulfate (2 mg bolus, lockout interval of six minutes, four-hour limit of 30 mg) was

  16. A Comparison of Item Parameter Standard Error Estimation Procedures for Unidimensional and Multidimensional Item Response Theory Modeling

    ERIC Educational Resources Information Center

    Paek, Insu; Cai, Li

    2014-01-01

    The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…

  17. Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes

    SciTech Connect

    Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [Univ. de Lyon, France]; Vassilevski, Yuri [Russia]

    2009-01-01

    We present a new method of metric recovery for minimization of L_p-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

  18. Variations in analytical methodology for estimating costs of hospital-acquired infections: a systematic review.

    PubMed

    Fukuda, H; Lee, J; Imanaka, Y

    2011-02-01

    Quantifying the additional costs of hospital-acquired infections (COHAI) is essential for developing cost-effective infection control measures. The methodological approaches to estimate these costs include case reviews, matched comparisons and regression analyses. The choice of cost estimation methodologies can affect the accuracy of the resulting estimates, however, with regression analyses generally able to avoid the bias pitfalls of the other methods. The objective of this study was to elucidate the distributions and trends in cost estimation methodologies in published studies that have produced COHAI estimates. We conducted systematic searches of peer-reviewed publications that produced cost estimates attributable to hospital-acquired infection in MEDLINE from 1980 to 2006. Shifts in methodologies at 10-year intervals were analysed using Fisher's exact test. The most frequent method of COHAI estimation methodology was multiple matched comparisons (59.6%), followed by regression models (25.8%), and case reviews (7.9%). There were significant increases in studies that used regression models and decreases in matched comparisons through the 1980s, 1990s and post-2000 (P = 0.033). Whereas regression analyses have become more frequently used for COHAI estimations in recent years, matched comparisons are still used in more than half of COHAI estimation studies. Researchers need to be more discerning in the selection of methodologies for their analyses, and comparative analyses are needed to identify more accurate estimation methods. This review provides a resource for analysts to overview the distribution, trends, advantages and pitfalls of the various existing COHAI estimation methodologies. PMID:21145131

  19. A Formula for the Standard Error of Estimate of Deviation Quotients on Short Forms of Wechsler's Scales.

    ERIC Educational Resources Information Center

    Silverstein, A. B.

    1985-01-01

    A formula is presented for the standard error of estimate of Deviation Quotients (DQs). The formula is shown to perform well when used with data on short forms of two of Wechsler's scales. (Author/JAC)

  20. Application of asymptotic expansions for maximum likelihood estimators errors to gravitational waves from binary mergers: The single interferometer case

    E-print Network

    Zanolin, M.

    In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical ...

  1. Operator-adapted finite element wavelets : theory and applications to a posteriori error estimation and adaptive computational modeling

    E-print Network

    Sudarshan, Raghunathan, 1978-

    2005-01-01

    We propose a simple and unified approach for a posteriori error estimation and adaptive mesh refinement in finite element analysis using multiresolution signal processing principles. Given a sequence of nested discretizations ...

  2. Uniform-Penalty Inversion of Multiexponential Decay Data. II. Data Spacing, T2 Data, Systematic Data Errors, and Diagnostics

    NASA Astrophysics Data System (ADS)

    Borgia, G. C.; Brown, R. J. S.; Fantazzini, P.

    2000-12-01

    The basic method of UPEN (uniform penalty inversion of multiexponential decay data) is given in an earlier publication (Borgia et al., J. Magn. Reson. 132, 65-77 (1998)), which also discusses the effects of noise, constraints, and smoothing on the resolution or apparent resolution of features of a computed distribution of relaxation times. UPEN applies negative feedback to a regularization penalty, allowing stronger smoothing for a broad feature than for a sharp line. This avoids unnecessarily broadening the sharp line and/or breaking the wide peak or tail into several peaks that the relaxation data do not demand to be separate. The experimental and artificial data presented earlier were T1 data, and all had fixed data spacings, uniform in log-time. However, for T2 data, usually spaced uniformly in linear time, or for data spaced in any manner, we have found that the data spacing does not enter explicitly into the computation. The present work shows the extension of UPEN to T2 data, including the averaging of data in windows and the use of the corresponding weighting factors in the computation. Measures are implemented to control portions of computed distributions extending beyond the data range. The input smoothing parameters in UPEN are normally fixed, rather than data dependent. A major problem arises, especially at high signal-to-noise ratios, when UPEN is applied to data sets with systematic errors due to instrumental nonidealities or adjustment problems. For instance, a relaxation curve for a wide line can be narrowed by an artificial downward bending of the relaxation curve. Diagnostic parameters are generated to help identify data problems, and the diagnostics are applied in several examples, with particular attention to the meaningful resolution of two closely spaced peaks in a distribution of relaxation times. Where feasible, processing with UPEN in nearly real time should help identify data problems while further instrument adjustments can still be made. The need for the nonnegative constraint is greatly reduced in UPEN, and preliminary processing without this constraint helps identify data sets for which application of the nonnegative constraint is too expensive in terms of error of fit for the data set to represent sums of decaying positive exponentials plus random noise.

  3. Power-spectrum analysis of Super-Kamiokande solar neutrino data, taking into account asymmetry in the error estimates

    E-print Network

    P. A. Sturrock; J. D. Scargle

    2006-06-20

    The purpose of this article is to carry out a power-spectrum analysis (based on likelihood methods) of the Super-Kamiokande 5-day dataset that takes account of the asymmetry in the error estimates. Whereas the likelihood analysis involves a linear optimization procedure for symmetrical error estimates, it involves a nonlinear optimization procedure for asymmetrical error estimates. We find that for most frequencies there is little difference between the power spectra derived from analyses of symmetrized error estimates and from asymmetrical error estimates. However, this proves not to be the case for the principal peak in the power spectra, which is found at 9.43 yr-1. A likelihood analysis which allows for a "floating offset" and takes account of the start time and end time of each bin and of the flux estimate and the symmetrized error estimate leads to a power of 11.24 for this peak. A Monte Carlo analysis shows that there is a chance of only 1% of finding a peak this big or bigger in the frequency band 1 - 36 yr-1 (the widest band that avoids artificial peaks). On the other hand, an analysis that takes account of the error asymmetry leads to a peak with power 13.24 at that frequency. A Monte Carlo analysis shows that there is a chance of only 0.1% of finding a peak this big or bigger in that frequency band 1 - 36 yr-1. From this perspective, power spectrum analysis that takes account of asymmetry of the error estimates gives evidence for variability that is significant at the 99.9% level. We comment briefly on an apparent discrepancy between power spectrum analyses of the Super-Kamiokande and SNO solar neutrino experiments.
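
    The nonlinear optimization required by asymmetric error estimates can be illustrated as follows. This is a generic sketch, not the authors' exact likelihood: it assumes the upper error bar applies when the model lies above the measurement and the lower one otherwise, and it compares a sinusoid-plus-floating-offset fit against an offset-only null to form a likelihood "power":

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_like(params, t, flux, sig_lo, sig_hi, freq):
        """Asymmetric-Gaussian likelihood: use sig_hi when the model lies above
        the measurement, sig_lo when below (a common convention; an assumption
        here, not necessarily the authors' exact form)."""
        offset, amp, phase = params
        model = offset + amp * np.cos(2.0 * np.pi * freq * t + phase)
        resid = model - flux
        sigma = np.where(resid > 0.0, sig_hi, sig_lo)
        return 0.5 * np.sum((resid / sigma) ** 2 + np.log(2.0 * np.pi * sigma**2))

    def likelihood_power(freq, t, flux, sig_lo, sig_hi):
        """Improvement of the sinusoid fit over the offset-only null model."""
        null = minimize(lambda p: neg_log_like((p[0], 0.0, 0.0), t, flux,
                                               sig_lo, sig_hi, freq),
                        x0=[np.mean(flux)], method="Nelder-Mead")
        alt = minimize(neg_log_like, x0=[np.mean(flux), np.std(flux), 0.0],
                       args=(t, flux, sig_lo, sig_hi, freq), method="Nelder-Mead")
        return null.fun - alt.fun
    ```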

  4. Adaptive importance sampling for bit error rate estimation over fading channels

    NASA Astrophysics Data System (ADS)

    Zhuang, W.

    1994-05-01

    Computer simulation is an essential approach to assess the performance of mobile and portable communications systems. However, in the case of a slowly fading channel (where the number of fading cycles dominantly determines the confidence interval of the simulation results), computer simulation time can be prohibitively long in order to obtain an accurate bit error rate (BER) estimate using the Monte Carlo (MC) method. This paper develops an adaptive importance sampling (AIS) technique for BER estimation over Rayleigh fading channels. The AIS simultaneously biases statistical properties of both the channel fading process and the input Gaussian noise, and adaptively searches for the optimal biased density function during the course of simulation. The AIS technique is applied to analyze the BER performance of QPSK with multiple-symbol differential detection. Computer simulation results show that the AIS technique significantly reduces the simulation time compared with the conventional MC technique, and simplifies the procedure of selecting the optimal biased density function.
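
    The core idea of importance sampling for BER estimation can be sketched with a static bias (the paper's contribution is making the bias adaptive and biasing the fading process as well). A minimal BPSK-over-Rayleigh example, with the bias factor and scenario chosen purely for illustration:

    ```python
    import numpy as np

    def ber_is_bpsk(snr_db, n=100_000, bias=4.0, seed=0):
        """BER of BPSK over Rayleigh fading via importance sampling: noise is
        drawn from a variance-inflated Gaussian and each error event is
        reweighted by the likelihood ratio of true to biased density."""
        rng = np.random.default_rng(seed)
        snr = 10.0 ** (snr_db / 10.0)
        sigma = np.sqrt(1.0 / (2.0 * snr))            # true noise std
        h = rng.rayleigh(scale=np.sqrt(0.5), size=n)  # fading amplitude, E[h^2]=1
        x = rng.normal(0.0, np.sqrt(bias) * sigma, n) # biased noise draws
        lr = np.sqrt(bias) * np.exp(-x**2 * (1.0 - 1.0 / bias) / (2.0 * sigma**2))
        return np.mean((h + x < 0.0) * lr)            # weighted error rate
    ```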

  5. Prediction and standard error estimation for a finite universe total when a stratum is not sampled

    SciTech Connect

    Wright, T.

    1994-01-01

    In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation, are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.

  6. An examination of estimation and specification error biases in estimates of option prices generated by Black-Scholes and Cox-Ross models

    Microsoft Academic Search

    A. K. M. Shamsul Alam

    1992-01-01

    In this paper the author identifies and examines the estimation and specification error biases of the Black-Scholes and Cox-Ross models by using both analytical and Monte Carlo simulation techniques. Several hypotheses are tested. The central hypothesis is whether or not the estimation error bias in the correctly specified model is large enough to make researchers mistakenly pick the "wrong" model as

  7. Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors

    NASA Astrophysics Data System (ADS)

    Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.

    2012-12-01

    Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem due to its theoretical near-intractability. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package: bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise-moving block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.

  8. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    SciTech Connect

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(−m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ−1) ∂ₓ^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
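
    The quantities entering the bound are straightforward to evaluate numerically for a given gap profile. A sketch for an assumed profile h(x) = 1 + a·cos(2πx) (an illustrative choice, not from the paper):

    ```python
    import numpy as np
    from math import factorial

    # Ingredients of the a-priori bound for a sample gap profile on [0, 1],
    # truncation order 2k: inverse moments and max norms of h and derivatives.
    a, k = 0.4, 1
    x = np.linspace(0.0, 1.0, 20001)
    h = 1.0 + a * np.cos(2.0 * np.pi * x)

    inv_moments = {m: np.mean(h ** -m) for m in (1, 3)}  # ∫₀¹ h^-m dx on unit domain

    def deriv(f, order):
        # l-th derivative by repeated central differencing (fine enough here).
        for _ in range(order):
            f = np.gradient(f, x)
        return f

    max_norms = {l: np.max(np.abs(h ** (l - 1) * deriv(h, l))) / factorial(l)
                 for l in range(1, 2 * k + 3)}
    print(inv_moments)   # the error itself scales as O(eps ** (2 * k + 2))
    print(max_norms)
    ```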

  9. Computational error estimates for Born-Oppenheimer molecular dynamics with nearly crossing potential surfaces

    E-print Network

    Christian Bayer; Hakon Hoel; Ashraful Kadir; Petr Plechac; Mattias Sandberg; Anders Szepessy

    2015-05-12

    The difference of the values of observables for the time-independent Schroedinger equation, with matrix valued potentials, and the values of observables for ab initio Born-Oppenheimer molecular dynamics, of the ground state, depends on the probability to be in excited states and the electron/nuclei mass ratio. The paper first proves an error estimate (depending on the electron/nuclei mass ratio and the probability to be in excited states) for this difference of microcanonical observables, assuming that molecular dynamics space-time averages converge, with a rate related to the maximal Lyapunov exponent. The error estimate is uniform in the number of particles and the analysis does not assume a uniform lower bound on the spectral gap of the electron operator and consequently the probability to be in excited states can be large. A numerical method to determine the probability to be in excited states is then presented, based on Ehrenfest molecular dynamics and stability analysis of a perturbed eigenvalue problem.

  10. Catastrophic Photo-z Errors and the Dark Energy Parameter Estimates with Cosmic Shear

    NASA Astrophysics Data System (ADS)

    Sun, Lei; Fan, Zu-Hui; Tao, Charling; Kneib, Jean-Paul; Jouvel, Stéphanie; Tilquin, André

    2009-07-01

    We study the impact of catastrophic errors occurring in the photometric redshifts of galaxies on cosmological parameter estimates with cosmic shear tomography. We consider a fiducial survey with a nine-filter set and perform photo-z measurement simulations. It is found that a fraction of 1% of galaxies at z_spec ~ 0.4 is misidentified to be at z_phot ~ 3.5. We then employ both the χ² fitting method and the extension of the Fisher matrix formalism to evaluate the bias on the equation of state parameters of dark energy, w0 and wa, induced by those catastrophic outliers. By comparing the results from both methods, we verify that the estimation of w0 and wa from the fiducial five-bin tomographic analyses can be significantly biased. To minimize the impact of this bias, two strategies can be followed: (1) the cosmic shear analysis is restricted to 0.5 < z < 2.5, where catastrophic redshift errors are expected to be insignificant; (2) a spectroscopic survey is conducted for galaxies with 3 < z_phot < 4. We find that the number of spectroscopic redshifts needed scales as N_spec ∝ f_cata × A, where f_cata = 1% is the fraction of catastrophic redshift errors (assuming a nine-filter photometric survey) and A is the survey area. For A = 1000 deg², we find that N_spec > 320 and 860, respectively, in order to reduce the joint bias in (w0, wa) to be smaller than 2σ and 1σ. This spectroscopic survey (option 2) will improve the figure of merit of option 1 by a factor of 1.5, thus making such a survey strongly desirable.
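
    The quoted scaling makes back-of-the-envelope survey planning easy. A sketch using the paper's 1σ reference point (N_spec ≈ 860 for f_cata = 1% and A = 1000 deg²):

    ```python
    def n_spec(area_deg2, f_cata=0.01, n_ref=860, area_ref=1000.0, f_ref=0.01):
        """Spectroscopic sample size to keep the (w0, wa) bias below 1-sigma,
        scaled from the reference point via N_spec ∝ f_cata * A."""
        return n_ref * (f_cata / f_ref) * (area_deg2 / area_ref)

    print(n_spec(5000))   # e.g. a 5000 deg^2 survey -> ~4300 spectra
    ```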

  11. Size estimates of action-relevant space remain invariant in the face of systematic changes to postural stability and arousal.

    PubMed

    Cañal-Bruland, Rouwen; Aertssen, Anoek M; Ham, Laurien; Stins, John

    2015-07-01

    Perceptual estimates of action-relevant space have been reported to vary depending on postural stability and concomitant changes in arousal. These findings contribute to current theories proposing that perception may be embodied. However, systematic manipulations of postural stability have not been tested, and a causal relationship between postural stability and perceptual estimates remains to be proven. We manipulated postural stability by asking participants to stand in three differently stable postures on a force plate measuring postural sway. Participants looked at and imagined traversing wooden beams of different widths and then provided perceptual estimates of the beams' widths. They also rated their level of arousal. Manipulation checks revealed that the different postures resulted in systematic differences in body sway. This systematic variation in postural stability was accompanied by significant differences in self-reported arousal. Yet, despite systematic differences in postural stability and levels of arousal, perceptual estimates of the beams' widths remained invariant. PMID:25913547

  12. Application of asymptotic expansions of maximum likelihood estimators errors to gravitational waves from binary mergers: the single interferometer case

    E-print Network

    Michele Zanolin; Salvatore Vitale; Nicholas Makris

    2011-08-12

    In this paper we describe a new methodology to calculate analytically the error for a maximum likelihood estimate (MLE) of physical parameters from gravitational wave signals. All the existing literature focuses on the use of the Cramer-Rao lower bound (CRLB) as a means to approximate the errors for large signal-to-noise ratios. We show here how the variance and the bias of an MLE can instead be expressed in inverse powers of the signal-to-noise ratio, where the first order in the variance expansion is the CRLB. As an application we compute the second order of the variance and bias for MLEs of physical parameters from the inspiral phase of binary mergers and for noises of gravitational wave interferometers. We also compare the improved error estimate with existing numerical estimates. The value of the second order of the variance expansion allows error predictions closer to what is observed in numerical simulations. It also predicts correctly the necessary SNR to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, and estimation errors. For example, timing matched filtering becomes optimal only if the SNR is larger than the kurtosis of the gravitational wave spectrum.

  13. Quantifying the sampling error in tree census measurements by volunteers and its effect on carbon stock estimates.

    PubMed

    Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi

    2013-06-01

    A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was +/- 15%, and the expert range was +/- 9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of +/- 0.011 kg C/yr (vs. +/- 0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has benefits in educating and engaging the public in science, but as demonstrated here, can also provide accurate estimates of biomass or forest carbon stocks. PMID:23865241
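
    Propagating the reported diameter sampling errors through an allometric equation is a short Monte Carlo exercise. The sketch below uses a generic B = a·D^b model with placeholder coefficients rather than the species-specific equations of the study:

    ```python
    import numpy as np

    def biomass_spread_pct(diam_mm, sigma_mm, a=0.05, b=2.5, n=10_000, seed=0):
        """Monte Carlo propagation of diameter measurement error through a
        generic allometric model B = a * D**b (placeholder coefficients).
        Returns the percentage spread of total plot biomass."""
        rng = np.random.default_rng(seed)
        diam_mm = np.asarray(diam_mm, dtype=float)
        d_cm = rng.normal(diam_mm, sigma_mm, size=(n, diam_mm.size)) / 10.0
        totals = (a * d_cm**b).sum(axis=1)              # plot biomass per draw
        base = (a * (diam_mm / 10.0)**b).sum()
        return 100.0 * np.std(totals, ddof=1) / base

    stems = [152.0, 310.0, 95.0, 480.0]                 # example diameters, mm
    print(biomass_spread_pct(stems, 2.3))               # volunteer tape error
    print(biomass_spread_pct(stems, 1.4))               # expert tape error
    ```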

  14. Statistical tests against systematic errors in data sets based on the equality of residual means and variances from control samples: theory and applications.

    PubMed

    Henn, Julian; Meindl, Kathrin

    2015-03-01

    Statistical tests are applied for the detection of systematic errors in data sets from least-squares refinements or other residual-based reconstruction processes. Samples of the residuals of the data are tested against the hypothesis that they belong to the same distribution. For this it is necessary that they show the same mean values and variances within the limits given by statistical fluctuations. When the samples differ significantly from each other, they are not from the same distribution within the limits set by the significance level. Therefore they cannot originate from a single Gaussian function in this case. It is shown that a significance cutoff results in exactly this case. Significance cutoffs are still frequently used in charge-density studies. The tests are applied to artificial data with and without systematic errors and to experimental data from the literature. PMID:25727869
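
    Generic two-sample versions of such equality tests are readily available; the paper's exact statistics may differ, but the logic can be sketched as:

    ```python
    from scipy import stats

    def same_distribution_checks(resid_a, resid_b, alpha=0.01):
        """Test two residual samples for equal means (Welch t-test) and equal
        variances (Levene test). Rejection of either suggests the residuals do
        not come from a single Gaussian, i.e. systematic error is present."""
        t_p = stats.ttest_ind(resid_a, resid_b, equal_var=False).pvalue
        v_p = stats.levene(resid_a, resid_b).pvalue
        return {"means_differ": t_p < alpha, "variances_differ": v_p < alpha,
                "p_mean": t_p, "p_var": v_p}
    ```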

  15. Correction of Systematic Odometry Errors in Mobile Robots (Proceedings of the 1995 International Conference on Intelligent Robots and Systems (IROS '95), Pittsburgh, Pennsylvania, August 5-9, pp. 569-574)

    E-print Network

    Borenstein, Johann

    This paper describes a practical method for reducing incremental odometry errors caused by kinematic imperfections of a typical differential-drive mobile robot.

  16. A test for systematic errors in 40Ar/39Ar geochronology through comparison with U/Pb analysis of a 1.1 Ga rhyolite

    Microsoft Academic Search

    Kyoungwon Min; Roland Mundil; Paul R. Renne; Kenneth R. Ludwig

    2000-01-01

    Important sources of systematic error in 40Ar/39Ar dating arise from uncertainties in the 40K decay constants and K/Ar isotopic data for neutron fluence monitors (standards). The activity data underlying the decay constants used in geochronology since 1977 are more dispersed than acknowledged by previous geochronologically oriented summaries, and compilations of essentially the same data in nuclear physics and chemistry literature

  17. A posteriori error estimations of a SUPG method for anisotropic diffusion-convection-reaction problems

    Microsoft Academic Search

    Thomas Apel; Serge Nicaise

    This Note presents an a posteriori residual error estimator for diffusion-convection-reaction problems approximated by a SUPG scheme on isotropic or anisotropic meshes in R^d, d = 2 or 3. This estimator is based on the jump of the flux and the interior residual of the approximated solution. It is constructed to work on anisotropic meshes which account for the

  18. Estimation of material fluxes in an estuarine cross section: A critical analysis of spatial measurement density and errors

    Microsoft Academic Search

    Björn Kjerfve; L. Harold Stevenson; Jeffrey A. Proehl; Thomas H. Chrzanowski; Wiley M. Kitchens

    1981-01-01

    Estuarine budget studies often suffer from uncertainties of net flux estimates in view of large temporal and spatial variabilities. Optimum spatial measurement density and material flux errors for a reasonably well mixed estuary were estimated by sampling 10 stations from surface to bottom simultaneously every hour for two tidal cycles in a 320-m-wide cross section in North Inlet, South Carolina.

  19. Exact Performance of Error Estimators for Discrete Classifiers

    E-print Network

    Braga-Neto, Ulisses

    Keywords: leave-one-out; cross-validation; bootstrap. Discrete classification, also called categorical classification … the 0.632 bootstrap error estimator. Our results show that resubstitution is low-biased but much less variable than, and even comparable to, the 0.632 bootstrap estimator, provided that classifier complexity is low

  20. Reducing satellite orbit error effects in near real-time GPS zenith tropospheric delay estimation for meteorology

    Microsoft Academic Search

    Maorong Ge; Eric Calais; Jennifer Haase

    2000-01-01

    We investigate the influence of using IGS predicted orbits for near real-time zenith tropospheric delay determination from GPS and implement a new processing strategy that allows the use of predicted orbits with minimal degradation of the ZTD estimates. Our strategy is based on the estimation of the three Keplerian parameters that represent the main error sources in predicted orbits (semi-major

  1. Wind Bias from Sub-optimal Estimation Due to Geophysical Modeling Error

    E-print Network

    Long, David G.

    A source of error in the geophysical model function (which relates the wind to the normalized radar cross section, NRCS, of the ocean surface) is uncertainty in the NRCS for given wind conditions. When the estimated variability is included in the maximum likelihood

  2. Use of Expansion Factors to Estimate the Burden of Dengue in Southeast Asia: A Systematic Analysis

    PubMed Central

    Undurraga, Eduardo A.; Halasa, Yara A.; Shepard, Donald S.

    2013-01-01

    Background Dengue virus infection is the most common arthropod-borne disease of humans and its geographical range and infection rates are increasing. Health policy decisions require information about the disease burden, but surveillance systems usually underreport the total number of cases. These may be estimated by multiplying reported cases by an expansion factor (EF). Methods and Findings As a key step to estimate the economic and disease burden of dengue in Southeast Asia (SEA), we projected dengue cases from 2001 through 2010 using EFs. We conducted a systematic literature review (1995–2011) and identified 11 published articles reporting original, empirically derived EFs or the necessary data, and 11 additional relevant studies. To estimate EFs for total cases in countries where no empirical studies were available, we extrapolated data based on the statistically significant inverse relationship between an index of a country's health system quality and its observed reporting rate. We compiled an average 386,000 dengue episodes reported annually to surveillance systems in the region, and projected about 2.92 million dengue episodes. We conducted a probabilistic sensitivity analysis, simultaneously varying the most important parameters in 20,000 Monte Carlo simulations, and derived 95% certainty level of 2.73–3.38 million dengue episodes. We estimated an overall EF in SEA of 7.6 (95% certainty level: 7.0–8.8) dengue cases for every case reported, with an EF range of 3.8 for Malaysia to 19.0 in East Timor. Conclusion Studies that make no adjustment for underreporting would seriously understate the burden and cost of dengue in SEA and elsewhere. As the sites of the empirical studies we identified were not randomly chosen, the exact extent of underreporting remains uncertain. Nevertheless, the results reported here, based on a systematic analysis of the available literature, show general consistency and provide a reasonable empirical basis to adjust for underreporting. PMID:23437407
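
    The EF adjustment and its uncertainty propagation can be sketched in a few lines. Here the EF is drawn from a simple triangular distribution spanning the reported certainty level, a simplification of the paper's joint multi-parameter Monte Carlo:

    ```python
    import numpy as np

    def projected_episodes(reported, ef_mode=7.6, ef_lo=7.0, ef_hi=8.8,
                           n=20_000, seed=0):
        """Scale reported dengue cases by an uncertain expansion factor and
        return the mean projection with a 95% simulation band."""
        rng = np.random.default_rng(seed)
        ef = rng.triangular(ef_lo, ef_mode, ef_hi, size=n)
        total = reported * ef
        return total.mean(), np.percentile(total, [2.5, 97.5])

    mean, band = projected_episodes(386_000)
    print(f"{mean:,.0f} episodes, 95% band {band[0]:,.0f}-{band[1]:,.0f}")
    ```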

  3. Error bounds in diffusion tensor estimation using multiple-coil acquisition systems.

    PubMed

    Beltrachini, Leandro; von Ellenrieder, Nicolás; Muravchik, Carlos Horacio

    2013-10-01

    We extend the diffusion tensor (DT) signal model for multiple-coil acquisition systems. Considering the sum-of-squares reconstruction method, we compute the Cramér-Rao bound (CRB) assuming the widely accepted noncentral chi distribution. Within this framework, we assess the effect of noise in DT estimation and other measures derived from it, as a function of the number of acquisition coils, as well as other system parameters. We show the applications of CRB in many actual problems related to DT estimation: we compare different gradient field setup schemes proposed in the literature and show how the CRB can be used to choose a convenient one; we show that for fiber-type anisotropy tensors the ellipsoidal area ratio (EAR) can be estimated with less error than other scalar factors such as the fractional anisotropy (FA) or the relative anisotropy (RA), and that for this type of anisotropy tensors, increasing the number of coils is equivalent to increasing the signal-to-noise ratio, i.e., the information of the different coils can be regarded as independent. Also, we present results showing the CRB of several parameters for actual DT-MRI data. We conclude that the CRB is a valuable tool to optimal experiment design in DT-related studies. PMID:23806584

  4. A posteriori error estimations of a SUPG method for anisotropic difiusion-convection-reaction problems Estimations d'erreur a posteriori d'une methode SUPG pour des problµemes de difiusion-convection-reaction anisotropes

    Microsoft Academic Search

    Thomas Apel; Serge Nicaise

    This paper presents an a posteriori residual error estimator for difiusion-convection-reaction problems approxi- mated by a SUPG scheme on isotropic or anisotropic meshes inRd, d = 2 or 3. We built a residual error estimator based on the jump of the ?ux of the approximated solution. We prove the equivalence between the energy norm of the error and the estimator.

  5. Error estimates for a stabilized finite element method for the Oldroyd B model

    NASA Astrophysics Data System (ADS)

    Bensaada, Mohamed; Esselaoui, Driss

    2007-01-01

    In this paper, we study a new approximation scheme of transient viscoelastic fluid flow obeying an Oldroyd-B-type constitutive equation. The new stabilized formulation bases on the choice of a modified Euler method connected to the streamline upwinding Petrov-Galerkin (SUPG) method [M. Bensaada, D. Esselaoui, D. Sandri, Stabilization method for continuous approximation of transient convection problem, Numer. Methods Partial Differential Equations 21 (2004) 170-189], in order to stabilize the tensorial transport term of the Oldroyd derivative. Suppose that the continuous problem admits a sufficiently smooth and sufficiently small solution. A priori error estimates for the approximation in terms of the mesh parameter h and the time discretization parameter [Delta]t are derived.

  6. An ABC estimate of pedigree error rate: application in dog, sheep and cattle breeds.

    PubMed

    Leroy, G; Danchin-Burge, C; Palhiere, I; Baumung, R; Fritz, S; Mériaux, J C; Gautier, M

    2012-06-01

    On the basis of correlations between pairwise individual genealogical kinship coefficients and allele sharing distances computed from genotyping data, we propose an approximate Bayesian computation (ABC) approach to assess pedigree file reliability through gene-dropping simulations. We explore the features of the method using simulated data sets and show precision increases with the number of markers. An application is further made with five dog breeds, four sheep breeds and one cattle breed raised in France and displaying various characteristics and population sizes, using microsatellite or SNP markers. Depending on the breeds, pedigree error estimations range between 1% and 9% in dog breeds, 1% and 10% in sheep breeds and 4% in cattle breeds. PMID:22486502
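
    A rejection-ABC loop of this kind is simple to sketch once a gene-dropping simulator is available. simulate_corr below is a hypothetical stand-in for such a simulator:

    ```python
    import numpy as np

    def abc_error_rate(obs_corr, simulate_corr, n_draws=10_000, tol=0.01, seed=0):
        """Rejection ABC: draw a candidate pedigree error rate from the prior,
        simulate the kinship/allele-sharing correlation under that rate by gene
        dropping, and keep draws whose statistic lands within tol of the
        observed correlation."""
        rng = np.random.default_rng(seed)
        accepted = []
        for _ in range(n_draws):
            eps = rng.uniform(0.0, 0.2)            # uniform prior on error rate
            if abs(simulate_corr(eps, rng) - obs_corr) < tol:
                accepted.append(eps)
        return np.array(accepted)                  # approximate posterior sample
    ```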

  7. Diurnal extrapolation for the Earth Radiation Budget with geostationary satellites: examples and error estimates.

    NASA Astrophysics Data System (ADS)

    Viollier, M.; Kandel, R.; Raberanto, P.

    The study is mainly focused on the combination of CERES with Meteosat-5. Examples of radiance and flux comparisons between March 2000 and July 2002 are shown. These establish the errors due to possible calibration shift and narrowband-to-broadband conversion uncertainties. The Meteosat flux estimates at each half hour, combined with the CERES fluxes, are then used to compute monthly means. Compared to the ERBE-like extrapolation scheme, the changes are small in the LW domain but significant in the SW (regional means: ~20 Wm-2; 20S-20N means: ~4 Wm-2 over the Meteosat-5 area). Improvements in this research field are expected from the analysis of the first Geostationary Earth Radiation Budget instrument, GERB, on Meteosat-8.

  8. Improved atmospheric soundings and error estimates from analysis of AIRS/AMSU data

    NASA Astrophysics Data System (ADS)

    Susskind, Joel

    2007-09-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007 generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case by case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.

  9. Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2007-01-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007 generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case by case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS 5 Data Assimilation System using different quality control thresholds.

  10. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis

    PubMed Central

    Cornforth, Daniel M.; Matthews, Andrew; Brown, Sam P.; Raymond, Ben

    2015-01-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384
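
    The failure mode the authors describe can be illustrated numerically: calibrate an independent-action dose-response at a high dose generated by a cooperative model, then compare predictions at a low dose. The cooperative model below is an illustrative toy, not the paper's mechanistic model:

    ```python
    import numpy as np

    def p_iah(dose, p):
        """Independent action: P(death) = 1 - (1 - p)**dose."""
        return 1.0 - (1.0 - p) ** dose

    def p_coop(dose, p=1e-6, gamma=0.5):
        """Toy cooperative 'truth': effective per-cell kill probability rises
        with dose, p_eff = p * dose**gamma, standing in for shared toxins."""
        return 1.0 - (1.0 - np.minimum(p * dose**gamma, 1.0)) ** dose

    high, low = 1e4, 1e2
    p_fit = 1.0 - (1.0 - p_coop(high)) ** (1.0 / high)  # IAH calibrated at high dose
    print(p_iah(low, p_fit), p_coop(low))  # IAH overstates the low-dose risk ~10x
    ```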

  11. Bacterial Cooperation Causes Systematic Errors in Pathogen Risk Assessment due to the Failure of the Independent Action Hypothesis.

    PubMed

    Cornforth, Daniel M; Matthews, Andrew; Brown, Sam P; Raymond, Ben

    2015-04-01

    The Independent Action Hypothesis (IAH) states that pathogenic individuals (cells, spores, virus particles etc.) behave independently of each other, so that each has an independent probability of causing systemic infection or death. The IAH is not just of basic scientific interest; it forms the basis of our current estimates of infectious disease risk in humans. Despite the important role of the IAH in managing disease interventions for food and water-borne pathogens, experimental support for the IAH in bacterial pathogens is indirect at best. Moreover since the IAH was first proposed, cooperative behaviors have been discovered in a wide range of microorganisms, including many pathogens. A fundamental principle of cooperation is that the fitness of individuals is affected by the presence and behaviors of others, which is contrary to the assumption of independent action. In this paper, we test the IAH in Bacillus thuringiensis (B.t), a widely occurring insect pathogen that releases toxins that benefit others in the inoculum, infecting the diamondback moth, Plutella xylostella. By experimentally separating B.t. spores from their toxins, we demonstrate that the IAH fails because there is an interaction between toxin and spore effects on mortality, where the toxin effect is synergistic and cannot be accommodated by independence assumptions. Finally, we show that applying recommended IAH dose-response models to high dose data leads to systematic overestimation of mortality risks at low doses, due to the presence of synergistic pathogen interactions. Our results show that cooperative secretions can easily invalidate the IAH, and that such mechanistic details should be incorporated into pathogen risk analysis. PMID:25909384

  12. TWO-STEP GMM ESTIMATION OF THE ERRORS-IN-VARIABLES MODEL USING HIGH-ORDER MOMENTS

    Microsoft Academic Search

    Timothy Erickson; Toni M. Whited

    2002-01-01

    We consider a multiple mismeasured regressor errors-in-variables model where the measurement and equation errors are independent and have moments of every order but otherwise are arbitrarily distributed. We present parsimonious two-step generalized method of moments (GMM) estimators that exploit overidentifying information contained in the high-order moments of residuals obtained by partialling out perfectly measured regressors. Using high-order moments requires that

  13. Outage Capacity of Spectrum Sharing Cognitive Radio with Channel Estimation Errors and Feedback Delay in Rayleigh Fading Environments

    NASA Astrophysics Data System (ADS)

    Xu, D.; Feng, Z.; Zhang, P.

    2013-04-01

    This paper considers a spectrum sharing cognitive radio (CR) network consisting of one secondary user (SU) and one primary user (PU) in Rayleigh fading environments. The channel state information (CSI) between the secondary transmitter (STx) and the primary receiver (PRx) is assumed to be imperfect. Particularly, this CSI is assumed to be not only having channel estimation errors but also outdated due to feedback delay, which is different from existing work. We derive the closed-form expression for the outage capacity of the SU with this imperfect CSI under the average interference power constraint at the PU. Analytical results confirmed by simulations are presented to show the effect of the imperfect CSI. Particularly, it is shown that the outage capacity of the SU is robust to the channel estimation errors and feedback delay for low outage probability and high channel estimation errors and feedback delay.

  14. Background Error Covariance Estimation Using Information from a Single Model Trajectory with Application to Ocean Data Assimilation

    NASA Technical Reports Server (NTRS)

    Keppenne, Christian L.; Rienecker, Michele; Kovach, Robin M.; Vernieres, Guillaume

    2014-01-01

    An attractive property of ensemble data assimilation methods is that they provide flow dependent background error covariance estimates which can be used to update fields of observed variables as well as fields of unobserved model variables. Two methods to estimate background error covariances are introduced which share the above property with ensemble data assimilation methods but do not involve the integration of multiple model trajectories. Instead, all the necessary covariance information is obtained from a single model integration. The Space Adaptive Forecast error Estimation (SAFE) algorithm estimates error covariances from the spatial distribution of model variables within a single state vector. The Flow Adaptive error Statistics from a Time series (FAST) method constructs an ensemble sampled from a moving window along a model trajectory.SAFE and FAST are applied to the assimilation of Argo temperature profiles into version 4.1 of the Modular Ocean Model (MOM4.1) coupled to the GEOS-5 atmospheric model and to the CICE sea ice model. The results are validated against unassimilated Argo salinity data. They show that SAFE and FAST are competitive with the ensemble optimal interpolation (EnOI) used by the Global Modeling and Assimilation Office (GMAO) to produce its ocean analysis. Because of their reduced cost, SAFE and FAST hold promise for high-resolution data assimilation applications.
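
    A simplified reading of the FAST idea is easy to sketch: treat a moving window of recent model states as an ensemble and form the sample covariance of departures from the window mean:

    ```python
    import numpy as np

    def fast_covariance(trajectory, window=20):
        """FAST-style background error covariance from a single trajectory:
        the ensemble is a moving window of states, `trajectory` an
        (n_times, n_state) array. A simplified reading of the method."""
        ens = trajectory[-window:]            # most recent window of states
        anom = ens - ens.mean(axis=0)         # departures from window mean
        return anom.T @ anom / (window - 1)   # flow-dependent covariance
    ```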

  15. Systematic parameter estimation in data-rich environments for cell signalling dynamics

    PubMed Central

    Nim, Tri Hieu; Luo, Le; Clément, Marie-Véronique; White, Jacob K.; Tucker-Kellogg, Lisa

    2013-01-01

    Motivation: Computational models of biological signalling networks, based on ordinary differential equations (ODEs), have generated many insights into cellular dynamics, but the model-building process typically requires estimating rate parameters based on experimentally observed concentrations. New proteomic methods can measure concentrations for all molecular species in a pathway; this creates a new opportunity to decompose the optimization of rate parameters. Results: In contrast with conventional parameter estimation methods that minimize the disagreement between simulated and observed concentrations, the SPEDRE method fits spline curves through observed concentration points, estimates derivatives and then matches the derivatives to the production and consumption of each species. This reformulation of the problem permits an extreme decomposition of the high-dimensional optimization into a product of low-dimensional factors, each factor enforcing the equality of one ODE at one time slice. Coarsely discretized solutions to the factors can be computed systematically. Then the discrete solutions are combined using loopy belief propagation, and refined using local optimization. SPEDRE has unique asymptotic behaviour with runtime polynomial in the number of molecules and timepoints, but exponential in the degree of the biochemical network. SPEDRE performance is comparatively evaluated on a novel model of Akt activation dynamics including redox-mediated inactivation of PTEN (phosphatase and tensin homologue). Availability and implementation: Web service, software and supplementary information are available at www.LtkLab.org/SPEDRE Supplementary information: Supplementary data are available at Bioinformatics online. Contact: LisaTK@nus.edu.sg PMID:23426255
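
    The derivative-matching reformulation can be sketched directly, without SPEDRE's factorized belief-propagation solver: fit splines through the observed concentrations, differentiate them, and choose rates so the ODE right-hand side matches the spline derivatives. rhs is a problem-specific function you supply:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.optimize import least_squares

    def derivative_matching_fit(t, conc, rhs, k0):
        """Fit rate constants by matching rhs(c, k) to spline derivatives of
        the observed concentrations. conc is (n_times, n_species); rhs(c, k)
        returns dc/dt for a 1-D concentration vector c and rate vector k.
        A direct least-squares sketch, not SPEDRE's decomposed solver."""
        splines = [CubicSpline(t, conc[:, j]) for j in range(conc.shape[1])]
        t_mid = 0.5 * (t[:-1] + t[1:])                  # interior time slices
        c_mid = np.column_stack([s(t_mid) for s in splines])
        dc_mid = np.column_stack([s(t_mid, 1) for s in splines])

        def residual(k):
            return (np.array([rhs(c, k) for c in c_mid]) - dc_mid).ravel()

        return least_squares(residual, k0).x
    ```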

  16. Security-reliability performance of cognitive AF relay-based wireless communication system with channel estimation error

    NASA Astrophysics Data System (ADS)

    Gu, Qi; Wang, Gongpu; Gao, Li; Peng, Mugen

    2014-12-01

    In this paper, both the security and the reliability performance of the cognitive amplify-and-forward (AF) relay system are analyzed in the presence of channel estimation error. The security and the reliability performance are represented by the intercept probability and the outage probability, respectively. Instead of the perfect channel state information (CSI) predominantly assumed in the literature, a specific channel estimation algorithm and the influence of the corresponding channel estimation error are considered in this study. Specifically, linear minimum mean square error (LMMSE) estimation is utilized by the destination node and the eavesdropper node to obtain the CSI, and closed forms for the outage probability and the intercept probability are derived in the presence of channel estimation error. It is shown that the transmission security (reliability) can be improved by loosening the reliability (security) requirement. Moreover, we compare the security and reliability performance of this relay-based cognitive radio system with those of the direct communication system without a relay. Interestingly, it is found that the AF relay-based system has worse reliability performance than the direct cognitive radio system; however, it achieves a lower sum of the outage probability and the intercept probability than the direct communication system. It is also found that there exists an optimal training number that minimizes the sum of the outage probability and the intercept probability.
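
    As a concrete illustration of the channel-estimation step, the sketch below (a single-tap Rayleigh channel with a unit pilot sequence; all numbers are invented) forms the scalar LMMSE estimate h_hat = sigma_h^2 x^H y / (sigma_h^2 ||x||^2 + sigma_n^2) and exposes the estimation error h - h_hat that drives the outage/intercept analysis. Increasing the training length N shrinks this error at the cost of throughput, which is the trade-off behind the optimal training number mentioned above.

        import numpy as np

        rng = np.random.default_rng(1)
        sigma_h2, sigma_n2, N = 1.0, 0.1, 8            # channel/noise variances, training length
        x_p = np.ones(N, dtype=complex)                # pilot sequence
        h = (rng.normal(size=2) * np.sqrt(sigma_h2 / 2)) @ np.array([1, 1j])
        n = (rng.normal(size=(N, 2)) * np.sqrt(sigma_n2 / 2)) @ np.array([1, 1j])
        y = h * x_p + n                                # received training samples

        h_hat = sigma_h2 * np.vdot(x_p, y) / (sigma_h2 * np.vdot(x_p, x_p).real + sigma_n2)
        print(abs(h - h_hat))                          # channel estimation error magnitude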

  17. Reliable random error estimation in the measurement of line-strength indices

    Microsoft Academic Search

    N. Cardiel; J. Gorgas; J. Cenarro; J. J. Gonzalez

    1998-01-01

    We present a new set of accurate formulae for the computation of random errors in the measurement of atomic and molecular line-strength indices. The new expressions are in excellent agreement with numerical simulations. We have found that, in some cases, the use of approximated equations can give misleading line-strength index errors. It is important to note that accurate errors can

  18. Mean squared error of prediction (MSEP) estimates for principal component regression (PCR) and partial least squares regression (PLSR)

    Microsoft Academic Search

    2004-01-01

    The paper presents results from simulations based on real data, comparing several competing mean squared error of prediction (MSEP) estimators on principal components regression (PCR) and partial least squares regression (PLSR): leave-one-out cross-validation, K-fold and adjusted K-fold cross-validation, the ordinary bootstrap estimate, the bootstrap smoothed cross-validation (BCV) estimate and the 0.632 bootstrap estimate. The overall performance of the estimators is compared in terms of
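
    For readers wanting to reproduce the flavour of this comparison, the sketch below (synthetic data; three retained components chosen arbitrarily) computes two of the named MSEP estimators, leave-one-out and 10-fold cross-validation, for a PCR model.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 10))
        y = X[:, :3].sum(axis=1) + rng.normal(0, 0.5, 60)

        pcr = make_pipeline(PCA(n_components=3), LinearRegression())
        for name, cv in [("leave-one-out", LeaveOneOut()),
                         ("10-fold CV   ", KFold(n_splits=10, shuffle=True, random_state=0))]:
            msep = -cross_val_score(pcr, X, y, cv=cv, scoring="neg_mean_squared_error").mean()
            print(f"{name} MSEP estimate: {msep:.3f}")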

  19. Measuring the Effect of Inter-Study Variability on Estimating Prediction Error

    PubMed Central

    Ma, Shuyi; Sung, Jaeyun; Magis, Andrew T.; Wang, Yuliang; Geman, Donald; Price, Nathan D.

    2014-01-01

    Background The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings resulting in “batch-effects”) and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies. Methods Here we quantify the impact of these combined “study-effects” on a disease signature’s predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of the number of studies quantifies the influence of study-effects on performance. Results As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification. Conclusions We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when “sufficient” diversity has been achieved for learning a molecular signature likely to translate without significant loss of accuracy to new clinical settings. PMID:25330348
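
    The RCV/ISV distinction maps directly onto standard cross-validation tooling. In the sketch below (synthetic "studies" with an additive batch shift standing in for real expression data), randomized CV mixes studies across train and test folds, while leave-one-study-out validation holds a whole study back, so the gap between the two scores measures the study-effects.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import (LeaveOneGroupOut, StratifiedKFold,
                                             cross_val_score)

        rng = np.random.default_rng(0)
        X, y, groups = [], [], []
        for study in range(5):
            shift = rng.normal(0, 1.0, 20)                 # study-specific batch effect
            labels = rng.integers(0, 2, 40)
            X.append(rng.normal(size=(40, 20)) + shift + 0.8 * labels[:, None])
            y.append(labels)
            groups.append(np.full(40, study))
        X, y, groups = np.vstack(X), np.concatenate(y), np.concatenate(groups)

        clf = LogisticRegression(max_iter=1000)
        rcv = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0)).mean()
        isv = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut()).mean()
        print(f"RCV accuracy {rcv:.2f} vs ISV accuracy {isv:.2f}")   # RCV is typically higher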

  20. Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling

    NASA Astrophysics Data System (ADS)

    Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

    2013-04-01

    Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio have caught the attention of several research groups. Indeed, the diversity and the variability of methane sources induce high uncertainty on the present and future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high frequency surface and tall tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated within a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed with the PYVAR system. Only the CTM (and the meteorological fields driving it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane flux sets obtained for 2005 gives a good estimate of the order of magnitude of the impact of transport and modelling errors on the fluxes estimated with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of the total methane emissions for 2005. At the continental scale, transport and modelling errors have larger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with a percentage range from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement error covariance matrix are under-estimated in current inversions, suggesting that transport and modelling errors should be more properly included in future inversions.
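
    A toy version of the mechanism, under invented numbers and a two-region flux vector (nothing here reflects the PYVAR configuration), makes the point concrete: the variational analysis minimises J(x) = (x - xb)^T B^-1 (x - xb) + (Hx - y)^T R^-1 (Hx - y), so any error in the transport operator H is absorbed by the optimised fluxes x.

        import numpy as np

        xb = np.array([300.0, 240.0])            # prior fluxes (TgCH4/yr, invented)
        B = np.diag([40.0 ** 2, 40.0 ** 2])      # prior error covariance
        H_true = np.array([[0.6, 0.4], [0.3, 0.7]])
        y = H_true @ np.array([320.0, 230.0])    # pseudo-observations from "true" fluxes
        R = np.diag([2.0 ** 2, 2.0 ** 2])        # observation error covariance

        def analysis(H):
            # closed-form minimiser of J for a linear observation operator
            K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
            return xb + K @ (y - H @ xb)

        print("fluxes, true transport  :", analysis(H_true))
        print("fluxes, biased transport:", analysis(H_true + 0.05))   # transport error -> flux error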

  1. Estimation of distance error by fuzzy set theory required for strength determination of HDR 192Ir brachytherapy sources

    PubMed Central

    Kumar, Sudhir; Datta, D.; Sharma, S. D.; Chourasiya, G.; Babu, D. A. R.; Sharma, D. N.

    2014-01-01

    Verification of the strength of high dose rate (HDR) 192Ir brachytherapy sources on receipt from the vendor is an important component of an institutional quality assurance program. Either reference air-kerma rate (RAKR) or air-kerma strength (AKS) is the recommended quantity to specify the strength of gamma-emitting brachytherapy sources. The use of a Farmer-type cylindrical ionization chamber of sensitive volume 0.6 cm3 is one of the recommended methods for measuring the RAKR of HDR 192Ir brachytherapy sources. While using the cylindrical chamber method, it is required to determine the positioning error of the ionization chamber with respect to the source, which is called the distance error. An attempt has been made to apply fuzzy set theory to estimate the subjective uncertainty associated with the distance error, and a simplified approach of applying fuzzy set theory to the quantification of this uncertainty has been proposed. In order to express the uncertainty in the framework of fuzzy sets, the uncertainty index was estimated and was found to be within 2.5%, which further indicates that the possibility of error in measuring such distance may be of this order. It is observed that the relative distances li estimated by the analytical method and the fuzzy set theoretic approach are consistent with each other. The crisp values of li estimated using the analytical method lie within the bounds computed using fuzzy set theory. This indicates that the li values estimated using analytical methods are within 2.5% uncertainty. This value of uncertainty in distance measurement should be incorporated in the uncertainty budget while estimating the expanded uncertainty in HDR 192Ir source strength measurement. PMID:24872605
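
    To illustrate the kind of calculation involved (a triangular membership function and a +/- 2.5% spread are assumptions for this sketch; the paper's construction is not reproduced), a fuzzy distance can be propagated through alpha-cuts, here through the inverse-square dependence of the chamber reading on distance.

        import numpy as np

        def alpha_cut(lo, mode, hi, alpha):
            """Interval of a triangular fuzzy number at membership level alpha."""
            return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

        d_mode = 100.0                                   # nominal distance (mm), hypothetical
        d_lo, d_hi = 0.975 * d_mode, 1.025 * d_mode      # +/- 2.5% spread

        for alpha in (0.0, 0.5, 1.0):
            a, b = alpha_cut(d_lo, d_mode, d_hi, alpha)
            print(f"alpha={alpha:.1f}: d in [{a:.2f}, {b:.2f}] mm, "
                  f"1/d^2 in [{1 / b ** 2:.3e}, {1 / a ** 2:.3e}] mm^-2")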

  2. Robust Mean Squared Prediction Error Estimators of EBLUP of a Small Area Total Under the Fay-Herriot Model

    Microsoft Academic Search

    Shijie Chen; P. Lahiri; J. N. K. Rao

    In this paper we derive a second-order unbiased (or nearly unbiased) mean squared prediction error (MSPE) estimator of the empirical best linear unbiased predictor (EBLUP) of a small area total for a non-normal extension to the well-known Fay-Herriot model. Specifically, we derive our MSPE estimator essentially assuming certain moment conditions on both the sampling and random effects distributions. The normality-based
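
    For orientation, the EBLUP itself is compact under the standard Fay-Herriot model y_i = x_i'beta + v_i + e_i with v_i ~ (0, A) and known sampling variances D_i. The sketch below (toy areas and a crude moment estimator of A; none of the paper's MSPE machinery) computes the shrinkage predictor gamma_i y_i + (1 - gamma_i) x_i'beta_hat.

        import numpy as np

        rng = np.random.default_rng(0)
        m = 30
        x = np.column_stack([np.ones(m), rng.uniform(0, 1, m)])
        D = rng.uniform(0.2, 0.8, m)                     # known sampling variances
        theta = x @ np.array([2.0, 1.5]) + rng.normal(0, np.sqrt(0.5), m)
        y = theta + rng.normal(0, np.sqrt(D))            # direct survey estimates

        beta_ols = np.linalg.lstsq(x, y, rcond=None)[0]  # pilot fit for the moment estimator
        resid = y - x @ beta_ols
        A_hat = max(0.0, (resid @ resid - D.sum()) / (m - x.shape[1]))  # truncated at zero

        W = 1.0 / (A_hat + D)                            # GLS weights given A_hat
        beta_hat = np.linalg.solve(x.T @ (W[:, None] * x), x.T @ (W * y))
        gamma = A_hat / (A_hat + D)                      # shrinkage toward the synthetic part
        eblup = gamma * y + (1 - gamma) * (x @ beta_hat)
        print(eblup[:5])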

  3. Estimation of the surface and mid-depth currents from Argo floats in the Pacific and error analysis

    Microsoft Academic Search

    Jiping Xie; Jiang Zhu

    2008-01-01

    With its large deployment, the Array for Real-time Geostrophic Oceanography program has great potential for measuring ocean currents both at the surface and at mid-depth. However, the positioning error of fixes in a trajectory varies from 150 m to 1000 m, creating difficulty for accurate estimation of the surface and mid-depth currents. Also the reliability of the estimated surface
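
    The quoted fix errors translate into velocity uncertainty by simple propagation; the sketch below uses an assumed 500 m fix error and a hypothetical 6-hour drift interval.

        import math

        sigma_fix = 500.0                         # m, within the quoted 150-1000 m range
        dt = 6 * 3600.0                           # s, assumed drift interval
        sigma_v = math.sqrt(2) * sigma_fix / dt   # two independent fixes enter v = dx/dt
        print(f"velocity error ~ {100 * sigma_v:.1f} cm/s")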

  4. A Posteriori Error Estimation for the Finite Element Method-of-lines Solution of Parabolic Problems

    Microsoft Academic Search

    Slimane Adjerid; Joseph E. Flaherty; Ivo Babuska

    1999-01-01

    Babuska and Yu constructed a posteriori estimates for finite element discretization errors of linear elliptic problems utilizing a dichotomy principle stating that the errors of odd-order approximations arise near element edges as mesh spacing decreases while those of even-order approximations arise in element interiors. We construct similar a posteriori estimates for the spatial errors of finite element method-of-lines solutions of linear parabolic partial differential equations on...

  5. Error estimates for large-scale ill-posed problems

    E-print Network

    Reichel, Lothar

    In memory of Gene H. Golub. Abstract. The computation of an approximate solution of linear discrete ill-posed problems ... by replacing the given ill-posed problem by a nearby problem, whose solution is less sensitive to perturbation
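
    "Replacing the given ill-posed problem by a nearby problem" is, in its simplest form, Tikhonov regularization: min ||Ax - b||^2 + lambda^2 ||x||^2. The sketch below (a toy Gaussian blurring matrix and an arbitrary lambda, not the authors' construction) shows the unregularized solution amplifying the noise while the regularized one stays close to the truth.

        import numpy as np

        n = 50
        s = np.linspace(0, 1, n)
        A = np.exp(-(s[:, None] - s[None, :]) ** 2 / 0.01)    # smooth kernel: ill-conditioned
        x_true = np.sin(2 * np.pi * s)
        b = A @ x_true + np.random.default_rng(0).normal(0, 1e-3, n)

        lam = 1e-2
        x_naive = np.linalg.lstsq(A, b, rcond=None)[0]        # unregularized: noise amplified
        x_reg = np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)
        print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))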

  6. A Generalizability Theory Approach toward Estimating Standard Errors of Cutscores Set Using the Bookmark Standard Setting Procedure.

    ERIC Educational Resources Information Center

    Lee, Guemin; Lewis, Daniel M.

    The Bookmark Standard Setting Procedure (Lewis, Mitzel, and Green, 1996) is an item-response-theory-based standard setting method that has been widely implemented by state testing programs. The primary purposes of this study were to: (1) estimate standard errors for cutscores that result from Bookmark standard settings under a generalizability…

  7. Bit-Error Rate Estimation for Bang-Bang Clock and Data Recovery Circuit in High-Speed Serial Links

    E-print Network

    Clock and data recovery (CDR) circuits incorporating a bang-bang (BB) phase detector have been widely adopted in high-speed serial links due to ... conditions. This technique is not applicable to bang-bang (BB) CDR circuits, which have gained popularity due

  8. An Exploration of Location Error Estimation David Dearman, Alex Varshavsky, Eyal de Lara, and Khai N. Truong

    E-print Network

    Toronto, University of

    Abstract. Many existing localization systems generate location

  9. Speech Enhancement of Spectral Magnitude Bin Trajectories using Gaussian Mixture-Model based Minimum Mean-Square Error Estimators

    E-print Network

    Gaussian mixture-model based minimum mean-square error estimators have been applied to speech enhancement in the temporal, transform ... estimator for 8 kHz telephone-quality speech. Index Terms: speech enhancement, minimum mean-square error
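
    The underlying estimator is standard and easy to state. Under a scalar Gaussian-noise model with a Gaussian-mixture prior (assumptions for this sketch; the paper's trajectory features are not reproduced), the MMSE estimate is the responsibility-weighted mixture of per-component posterior means.

        import numpy as np

        w = np.array([0.6, 0.4])                 # mixture weights (invented prior)
        mu = np.array([0.0, 3.0])                # component means
        s2 = np.array([1.0, 0.5])                # component variances
        sn2 = 0.8                                # observation noise variance, y = x + n

        def mmse(y):
            evid_var = s2 + sn2                  # variance of y under each component
            resp = w * np.exp(-0.5 * (y - mu) ** 2 / evid_var) / np.sqrt(evid_var)
            resp /= resp.sum()                   # posterior component responsibilities
            post_mean = (s2 * y + sn2 * mu) / evid_var
            return resp @ post_mean              # E[x | y]

        print(mmse(2.0))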

  10. Optimal classifier selection and negative bias in error rate estimation: An empirical study on high-dimensional prediction

    E-print Network

    Gerkmann, Ralf

    In this study I consider a total of 124 correct variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original
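
    The bias mechanism is easy to demonstrate with standard tooling. In the sketch below (pure-noise data; a small grid stands in for the 124 candidate variants), reporting the best inner cross-validation score is optimistically biased on data with no signal, while nested cross-validation charges the selection step to an outer loop and returns to roughly chance level.

        import numpy as np
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(80, 200))           # high-dimensional pure noise
        y = rng.integers(0, 2, 80)               # labels carry no signal

        grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.001, 0.01]}
        search = GridSearchCV(SVC(), grid, cv=5).fit(X, y)
        print("best inner-CV accuracy (selection bias):", search.best_score_)

        honest = cross_val_score(GridSearchCV(SVC(), grid, cv=5), X, y, cv=5).mean()
        print("nested-CV accuracy (approx. chance):    ", honest)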

  11. Spatial accounting for errors in LiDAR-derived products: Snow volume and snow water equivalent estimation

    NASA Astrophysics Data System (ADS)

    Tinkham, W. T.; Hoffman, C. M.; Falkowski, M. J.; Smith, A. M.; Link, T. E.; Marshall, H.

    2011-12-01

    Light Detection and Ranging (LiDAR) has become one of the most effective and reliable means of characterizing surface topography and vegetation structure. Most LiDAR-derived estimates such as vegetation height, snow depth, and floodplain boundaries rely on the accurate creation of digital terrain models (DTM). Because an accurate DTM is essential when using LiDAR data to estimate snow depth, it is necessary to understand the variables that influence DTM accuracy in order to assess snow depth error. A series of 4 x 4 m plots surveyed at 0.5 m spacing in a semi-arid catchment were used, along with a series of 35 variables, to train the Random Forests algorithm to spatially predict vertical error within a LiDAR-derived DTM. The final model was utilized to predict the combined error resulting from snow volume and snow water equivalent estimates derived from a snow-free LiDAR DTM and a snow-on LiDAR acquisition of the same site. The methodology allows for a statistical quantification of the spatially-distributed error patterns that are incorporated into the estimation of snow volume and snow water equivalents from LiDAR.
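
    A compressed sketch of the error-modelling step (two invented covariates stand in for the 35 predictors; nothing here reproduces the study's variables): train a Random Forest on surveyed plots to predict DTM vertical error, which then feeds the snow-depth error budget because snow depth is the difference between the snow-on surface and the snow-free DTM.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        slope = rng.uniform(0, 40, 500)          # degrees; hypothetical covariate
        veg = rng.uniform(0, 1, 500)             # vegetation density; hypothetical covariate
        dtm_error = 0.02 + 0.004 * slope + 0.15 * veg + rng.normal(0, 0.02, 500)

        X = np.column_stack([slope, veg])
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, dtm_error)
        print(rf.predict([[25.0, 0.8]]))         # predicted vertical error (m), illustrative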

  12. Estimating Error in Using Ambient PM2.5 Concentrations as Proxies for Personal Exposures: A Review

    EPA Science Inventory

    Several methods have been used to account for measurement error inherent in using the ambient concentration of particulate matter < 2.5 µm (PM2.5, µg/m3) as a proxy for personal exposure. Common features of such methods are their reliance on the estimated ...

  13. Defect Sampling in Global Error Estimation for ODEs and Method-Of-Lines PDEs Using Adjoint Methods

    E-print Network

    Utah, University of

    ... time-dependent ordinary differential equations (ODEs) and partial differential equations (PDEs) is well understood, see [41]. Sampling of the defect in the numerical solution of initial value problem ordinary differential equations is considered

  14. Comparison between calorimeter and HLNC errors

    SciTech Connect

    Goldman, A.S. (Los Alamos National Lab., NM (United States)); De Ridder, P.; Laszlo, G. (International Atomic Energy Agency, Vienna (Austria))

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave "significantly" better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs.

  15. Estimating the burden of rheumatoid arthritis in Africa: A systematic analysis

    PubMed Central

    Dowman, Ben; Campbell, Ruth M.; Zgaga, Lina; Adeloye, Davies; Chan, Kit Yee

    2012-01-01

    Background Rheumatoid arthritis (RA) has an estimated worldwide prevalence of 1%. It is one of the leading causes of chronic morbidity in the developed world, but little is known about the disease burden in Africa. RA is often seen as a minor health problem and has been neglected in research and resource allocation throughout Africa despite potentially fatal systemic manifestations. This review aims to identify all relevant epidemiological literature pertaining to the occurrence of RA in Africa and calculate the prevalence and burden of disease. Methods A systematic literature review of Medline, Embase and Global Health Library retrieved a total of 335 publications, of which 10 population studies and 11 hospital studies met pre-defined minimum criteria for relevance and quality. Data on prevalence were extracted, analysed and compared between population and hospital studies. Differences between genders were also analysed. Findings The estimated crude prevalence of RA in Africa based on the available studies was 0.36% in 1990, which translates to a burden of 2.3 million affected individuals in 1990. Projections for the African population in 2010 based on the same prevalence rates would suggest a crude prevalence of 0.42% and a burden of 4.3 million. Only 2 population studies have been conducted after 1990, so projections for 2010 are uncertain. Hospital-based studies under-report the prevalence by about 6 times in comparison to population-based studies. Conclusion The availability of epidemiological information on RA in Africa is very limited. More studies need to be conducted to estimate the true burden and patterns of RA before appropriate health policies can be developed. PMID:23289081

  16. An estimate of the prevalence of epilepsy in Sub-Saharan Africa: A systematic analysis

    PubMed Central

    Paul, Abigail; Adeloye, Davies; George-Carey, Rhiannon; Kolčić, Ivana; Grant, Liz; Chan, Kit Yee

    2012-01-01

    Background Epilepsy is a leading serious neurological condition worldwide and has particularly significant physical, economic and social consequences in Sub-Saharan Africa. This paper aims to contribute to the understanding of epilepsy prevalence in this region and how this varies by age and sex so as to inform understanding of the disease characteristics as well as the development of infrastructure, services and policies. Methods A parallel systematic analysis of Medline, Embase and Global Health returned 32 studies that satisfied pre-defined quality criteria. Relevant data were extracted, tabulated and analyzed. We modelled the available information and used the UN population figures for Africa to determine the age-specific and overall burden of epilepsy. Results Active epilepsy was estimated to affect 4.4 million people in Sub-Saharan Africa, whilst lifetime epilepsy was estimated to affect 5.4 million. The prevalence of active epilepsy peaks in the 20–29 age group at 11.5/1000 and again in the 40–49 age group at 8.2/1000. The lowest prevalence value of 3.1/1000 is seen in the 60+ age group. This bimodal pattern is also seen in both men and women, with the second peak more pronounced in women at 14.6/1000. Conclusion The high prevalence of epilepsy, especially in young adults, has important consequences for both the workforce and community structures. An estimation of disease burden would be a beneficial outcome of further research, as would research into appropriate methods of improving health care for and tackling discrimination against people with epilepsy. PMID:23289080

  17. A practical method of estimating standard error of age in the fission track dating method

    USGS Publications Warehouse

    Johnson, N.M.; McGee, V.E.; Naeser, C.W.

    1979-01-01

    A first-order approximation formula for the propagation of error in the fission track age equation is given by P_A = C[P_s^2 + P_i^2 + P_φ^2 - 2rP_sP_i]^(1/2), where P_A, P_s, P_i and P_φ are the percentage errors of age, of spontaneous track density, of induced track density, and of neutron dose, respectively, and C is a constant. The correlation, r, between spontaneous and induced track densities is a crucial element in the error analysis, acting generally to improve the standard error of age. In addition, the correlation parameter r is instrumental in specifying the level of neutron dose, a controlled variable, which will minimize the standard error of age. The results from the approximation equation agree closely with the results from an independent statistical model for the propagation of errors in the fission-track dating method.
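
    Reading the formula numerically (with invented percentage errors and C taken as 1) shows how a positive correlation between the two track densities improves the age error:

        import math

        Ps, Pi, Pphi = 5.0, 4.0, 3.0     # % errors: spontaneous, induced, neutron dose
        for r in (0.0, 0.6):             # correlation between spontaneous and induced densities
            PA = math.sqrt(Ps ** 2 + Pi ** 2 + Pphi ** 2 - 2 * r * Ps * Pi)
            print(f"r = {r}: percentage error of age = {PA:.2f}%")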

  18. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error 1

    PubMed Central

    Carroll, Raymond J.; Delaigle, Aurore; Hall, Peter

    2011-01-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809
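
    A toy, error-free illustration of the tilting idea (isotonic projection plays the role of the tilt here, and the squared distance moved is the test statistic; the paper's measurement-error machinery is omitted):

        import numpy as np
        from sklearn.isotonic import IsotonicRegression

        rng = np.random.default_rng(0)
        x = np.linspace(0, 1, 100)
        y = 2 * x + rng.normal(0, 0.3, x.size)                  # truly monotone regression mean

        smooth = np.convolve(y, np.ones(9) / 9, mode="same")    # crude unconstrained estimate
        proj = IsotonicRegression().fit(x, smooth).predict(x)   # nearest monotone curve
        tilt_distance = np.mean((proj - smooth) ** 2)           # distance tilted = test statistic
        print(f"tilt distance: {tilt_distance:.5f}")
        # Calibration would proceed by bootstrap: resample under the constrained fit,
        # recompute the statistic, and compare the observed value with that distribution.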

  19. Systematic angle random walk estimation of the constant rate biased ring laser gyro.

    PubMed

    Yu, Huapeng; Wu, Wenqi; Wu, Meiping; Feng, Guohu; Hao, Ming

    2013-01-01

    An actual account of the angle random walk (ARW) coefficients of gyros in the constant rate biased ring laser gyro (RLG) inertial navigation system (INS) is very important in practical engineering applications. However, no reported experimental work has dealt with the issue of characterizing the ARW of the constant rate biased RLG in the INS. To avoid the need for high-cost precise calibration tables and complex measuring set-ups, the objective of this study is to present a cost-effective experimental approach to characterize the ARW of the gyros in the constant rate biased RLG INS. In the system, turntable dynamics and other external noises would inevitably contaminate the measured RLG data, leading to the question of isolating such disturbances. A practical observation model of the gyros in the constant rate biased RLG INS was discussed, and an experimental method based on the fast orthogonal search (FOS), applied to this observation model to separate ARW error from the measured RLG data, was proposed. The validity of the FOS-based method was checked by estimating the ARW coefficients of the mechanically dithered RLG under stationary and turntable rotation conditions. By utilizing the FOS-based method, the average ARW coefficient of the constant rate biased RLG in the postulated system is estimated. The experimental results show that the FOS-based method achieves high denoising ability and estimates the ARW coefficients of the constant rate biased RLG in the postulated system accurately. The FOS-based method does not need a precise calibration table with high cost or a complex measuring set-up, and the statistical results of the tests provide references for engineering applications of the constant rate biased RLG INS. PMID:23447008
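
    For comparison with the FOS route, the conventional way to read off an ARW coefficient is the Allan deviation, which for angle random walk falls as N/sqrt(tau). The sketch below (simulated white rate noise; this is the standard characterization, not the paper's FOS procedure) evaluates it at tau = 1 s.

        import numpy as np

        fs = 100.0                                    # sample rate, Hz
        rate = np.random.default_rng(0).normal(0, 0.01, 200_000)   # deg/s white noise

        def allan_dev(x, m):
            """Non-overlapping Allan deviation at cluster size m samples."""
            means = x[:len(x) // m * m].reshape(-1, m).mean(axis=1)
            return np.sqrt(0.5 * np.mean(np.diff(means) ** 2))

        N_arw = allan_dev(rate, m=int(fs))            # sigma(tau = 1 s) = ARW in deg/sqrt(s)
        print(f"ARW ~ {60 * N_arw:.4f} deg/sqrt(hr)")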
