
1

Robust estimation of systematic errors of satellite laser range

Methods for analyzing laser-ranging residuals to estimate station-dependent systematic errors and to eliminate outliers in satellite laser ranges are discussed. A robust estimator based on an M-estimation principle is introduced. A practical calculation procedure which provides a robust criterion with a high breakdown point and produces robust initial residuals for the following iterative robust estimation is presented. Comparison of the results

Y. Yang; M. K. Cheng; C. K. Shum; B. D. Tapley

1999-01-01
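The abstract above names an M-estimator for range residuals without specifying its form. As a hedged illustration only, the sketch below fits a generic Huber-type M-estimate by iteratively reweighted least squares on a toy bias-plus-drift model; the weight function, tuning constant, and data are assumptions, not taken from the paper.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber weights: 1 for small residuals, k/|r| beyond the threshold,
    with the scale set robustly from the median absolute deviation (MAD)."""
    scale = max(1.4826 * np.median(np.abs(residuals)), 1e-12)
    r = residuals / scale
    w = np.ones_like(r)
    big = np.abs(r) > k
    w[big] = k / np.abs(r[big])
    return w

def robust_fit(A, y, iters=20):
    """Iteratively reweighted least squares (IRLS) for an M-estimate."""
    x, *_ = np.linalg.lstsq(A, y, rcond=None)       # ordinary LS start
    for _ in range(iters):
        sw = np.sqrt(huber_weights(y - A @ x))       # sqrt weights for LS rows
        x, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)
    return x

# Toy range-bias model: constant station bias plus drift, two gross outliers.
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 0.5 * t + 0.01 * np.random.default_rng(0).standard_normal(50)
y[[10, 30]] += 5.0                                   # simulated blunders
bias, drift = robust_fit(A, y)
```

Unlike an ordinary least-squares fit, the two simulated blunders receive near-zero weight after a few iterations, so the recovered bias and drift stay close to the true values.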

2

Optimal input design for aircraft instrumentation systematic error estimation

NASA Technical Reports Server (NTRS)

A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that the optimal input design improved error parameter estimates and their accuracies for a fixed time input design. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

Morelli, Eugene A.

1991-01-01

3

Star centroid estimation is the most important operation, as it directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency-domain approach and numerical simulations. It is shown that the systematic error consists of an approximation error and a truncation error, which result from the discretization approximation and the sampling-window limitation, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of the star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on least squares support vector regression (LSSVR) with a Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10^-5 pixels. PMID:22164021

Yang, Jun; Liang, Bin; Zhang, Tao; Song, Jingyan

2011-01-01
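As a minimal sketch of where the systematic error discussed above comes from (not the paper's LSSVR compensation), the following simulates a 1-D Gaussian star profile sampled at pixel centers and measures the S-curve error of a windowed center-of-mass centroid. The Gaussian width and the 5-pixel window are illustrative assumptions.

```python
import numpy as np

def center_of_mass(c_true, sigma=0.7, window=5):
    """Sample a 1-D Gaussian star profile at the centers of `window` pixels,
    then return the windowed center-of-mass centroid estimate."""
    x = np.arange(window, dtype=float)               # pixel centers 0..window-1
    intensity = np.exp(-0.5 * ((x - c_true) / sigma) ** 2)
    return (x * intensity).sum() / intensity.sum()

# Systematic (S-curve) error versus the true sub-pixel position of the star.
positions = np.linspace(1.7, 2.3, 61)
errors = np.array([center_of_mass(c) - c for c in positions])
max_err = float(np.abs(errors).max())
```

The error vanishes when the star sits exactly on the central pixel and grows toward the window edges, which is the truncation effect the abstract attributes to the sampling-window limitation.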

4

What Waveforms do Data Analysts Want?, or the dangers of systematic errors in parameter estimation

NASA Astrophysics Data System (ADS)

The measurement of astrophysical parameters of coalescing binaries, encoded in their gravitational-wave signature, is a crucial step for enabling gravitational-wave astronomy. However, our ability to make such measurements may be jeopardized by unknown systematic uncertainties in the waveform templates used for parameter estimation. I express my personal views on what waveform features are most crucial to maximize the astrophysical potential of advanced ground-based gravitational-wave detectors. I also discuss ongoing work on determining whether one particular source -- neutron-star binaries -- may be largely immune to systematic waveform uncertainties.

Mandel, Ilya

2012-03-01

5

Decomposing model systematic error

NASA Astrophysics Data System (ADS)

Seasonal forecasts made with a single model are generally overconfident. The standard approach to improving forecast reliability is to account for structural uncertainties through a multi-model ensemble (i.e., an ensemble of opportunity). Here we analyse a multi-model set of seasonal forecasts available through the ENSEMBLES and DEMETER EU projects. We partition forecast uncertainties into initial-value and structural uncertainties, as a function of lead time and region. Statistical analysis is used to investigate sources of initial-condition uncertainty and to identify which regions and variables lead to the largest forecast error. A similar analysis is then performed to identify common elements of model error. Results of this analysis will be used to discuss possibilities to reduce forecast uncertainty and improve models. In particular, a better understanding of error growth will be useful for the design of interactive multi-model ensembles.

Keenlyside, Noel; Shen, Mao-Lin

2014-05-01

6

The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated error propagation of temperature uncertainties in parasite habitat suitability models by comparing outcomes of published models. Error estimates reached 36% of annual respective measurements depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles. PMID:24258878

Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin

2013-11-01

7

A method is proposed for estimating systematic errors in onboard measurements of the angle of attack and sliding (sideslip) angle of an aircraft in the course of flight tests, using high-precision velocity measurements performed by a satellite navigation system. The main specific feature of the proposed method is that, for providing compatibility of measurements of angle of attack and sliding

O. N. Korsun; B. K. Poplavskii

2011-01-01

8

Estimating Bias Error Distributions

NASA Technical Reports Server (NTRS)

This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as its bias error distribution can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. The methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

Liu, Tian-Shu; Finley, Tom D.

2001-01-01

9

Estimating GPS Positional Error

NSDL National Science Digital Library

After instructing students on basic receiver operation, each student will make many (10-20) position estimates of 3 benchmarks over a week. The different benchmarks will have different views of the sky or vegetation cover. Each student will download their data into a spreadsheet and calculate horizontal and vertical errors, which are collated into a class spreadsheet. The positions are sorted by error and plotted in a cumulative frequency plot. The students are encouraged to discuss the distribution and sources of error and to estimate confidence intervals. This exercise gives the students a gut feeling for confidence intervals and the accuracy of data. Students are asked to compare results from different types of data and benchmarks with different views of the sky.

Witte, Bill
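The spreadsheet steps in this exercise can be sketched as follows, with synthetic fixes standing in for real class data; the benchmark location, error model, and sample size are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical class data: 200 (easting, northing) fixes in metres around a
# benchmark taken as the origin; a real class would pool 10-20 fixes per student.
fixes = rng.normal(0.0, 3.0, size=(200, 2))
horiz_err = np.hypot(fixes[:, 0], fixes[:, 1])

# Sort by error and build the cumulative-frequency curve the exercise asks for.
err_sorted = np.sort(horiz_err)
cum_freq = np.arange(1, len(err_sorted) + 1) / len(err_sorted)

# Empirical 95% confidence radius: the error below which 95% of fixes fall.
r95 = float(np.quantile(horiz_err, 0.95))
```

Plotting `err_sorted` against `cum_freq` reproduces the cumulative frequency plot, and reading off the 0.95 crossing gives the same confidence radius as `r95`.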

10

Advanced ground-based gravitational-wave detectors are capable of measuring tidal influences in binary neutron-star systems. In this work, we report on the statistical uncertainties in measuring tidal deformability with a full Bayesian parameter estimation implementation. We show how simultaneous measurements of chirp mass and tidal deformability can be used to constrain the neutron-star equation of state. We also study the effects of waveform modeling bias and individual instances of detector noise on these measurements. We notably find that systematic error between post-Newtonian waveform families can significantly bias the estimation of tidal parameters, thus motivating the continued development of waveform models that are more reliable at high frequencies.

Leslie Wade; Jolien D. E. Creighton; Evan Ochsner; Benjamin D. Lackey; Benjamin F. Farr; Tyson B. Littenberg; Vivien Raymond

2014-02-20


13

Error Estimates of Theoretical Models: a Guide

This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.

J. Dobaczewski; W. Nazarewicz; P. -G. Reinhard

2014-02-19

14

Bootstrap Techniques for Error Estimation

The design of a pattern recognition system requires careful attention to error estimation. The error rate is the most important descriptor of a classifier's performance. The commonly used estimates of error rate are based on the holdout method, the resubstitution method, and the leave-one-out method. All suffer either from large bias or large variance and their sample distributions are not

Anil K. Jain; Richard C. Dubes; Chaur-Chin Chen

1987-01-01
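The truncated abstract above contrasts holdout, resubstitution, and leave-one-out estimates with bootstrap techniques. A hedged sketch of one such technique, the leave-out (E0) bootstrap applied to a toy nearest-class-mean classifier on synthetic data, might look like this; the classifier and data are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

def nearest_mean_fit(X, y):
    """A deliberately simple classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_mean_predict(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

def bootstrap_error(X, y, B=100, seed=0):
    """Leave-out (E0) bootstrap: train on each bootstrap resample, test on the
    points left out of it, and average the error over B replicates."""
    rng = np.random.default_rng(seed)
    n = len(y)
    errs = []
    for _ in range(B):
        idx = rng.integers(0, n, size=n)
        out = np.setdiff1d(np.arange(n), idx)        # out-of-bag points
        if out.size == 0 or np.unique(y[idx]).size < 2:
            continue                                 # degenerate resample
        model = nearest_mean_fit(X[idx], y[idx])
        errs.append(float((nearest_mean_predict(model, X[out]) != y[out]).mean()))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (60, 2)), rng.normal(1.0, 1.0, (60, 2))])
y = np.repeat([0, 1], 60)
e0 = bootstrap_error(X, y)
```

Because each replicate tests on points the classifier never saw, E0 avoids the optimism of resubstitution, at the cost of a mild pessimistic bias that refinements such as the .632 estimator correct.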

15

Systematic errors in long baseline oscillation experiments

This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

Harris, Deborah A.; /Fermilab

2006-02-01

16

Systematic Errors in measurement of b1

NASA Astrophysics Data System (ADS)

A class of spin observables can be obtained from the relative difference of or asymmetry between cross sections of different spin states of beam or target particles. Such observables have the advantage that the normalization factors needed to calculate absolute cross sections from yields often divide out or cancel to a large degree in constructing asymmetries. However, normalization factors can change with time, giving different normalization factors for different target or beam spin states, leading to systematic errors in asymmetries in addition to those determined from statistics. Rapidly flipping spin orientation, such as what is routinely done with polarized beams, can significantly reduce the impact of these normalization fluctuations and drifts. Target spin orientations typically require minutes to hours to change, versus fractions of a second for beams, making systematic errors for observables based on target spin flips more difficult to control. Such systematic errors from normalization drifts are discussed in the context of the proposed measurement of the deuteron b1 structure function at Jefferson Lab.

Wood, S. A.

2014-10-01

17

Reducing Systematic Error in Weak Lensing Cluster Surveys

NASA Astrophysics Data System (ADS)

Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing it. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing convergence signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ~3 deg^2. Taken together, these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg^2, where we expect ~2000 peaks based on our Subaru fields. Based in part on data collected at Subaru Telescope and obtained from SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan.

Utsumi, Yousuke; Miyazaki, Satoshi; Geller, Margaret J.; Dell'Antonio, Ian P.; Oguri, Masamune; Kurtz, Michael J.; Hamana, Takashi; Fabricant, Daniel G.

2014-05-01

18

More on Systematic Error in a Boyle's Law Experiment

ERIC Educational Resources Information Center

A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

McCall, Richard P.

2012-01-01

19

More on Systematic Error in a Boyle's Law Experiment

NASA Astrophysics Data System (ADS)

A recent article in The Physics Teacher describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

McCall, Richard P.

2012-01-01

20

Control by model error estimation

NASA Technical Reports Server (NTRS)

Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

Likins, P. W.; Skelton, R. E.

1976-01-01

21

Adjoint Error Estimation for Linear Advection

An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.

Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

2011-03-30
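The adjoint error representation that the abstract builds on can be written generically as follows; the notation (linear operator L, functional J, adjoint solution phi) is a standard textbook form assumed here, not the report's own.

```latex
% For a linear problem $L u = f$, a functional $J(u) = (j, u)$, and the
% adjoint solution $\phi$ with $L^{*} \phi = j$, the functional error of an
% approximation $u_h$ is exactly the adjoint-weighted residual:
J(u) - J(u_h) = (j,\, u - u_h) = (L^{*}\phi,\, u - u_h)
              = (\phi,\, L(u - u_h)) = (\phi,\, f - L u_h) = (\phi,\, R(u_h)),
% so an approximate adjoint $\phi_h$ yields the computable estimate
J(u) - J(u_h) \approx (\phi_h,\, R(u_h)).
```

The abstract's point is that the accuracy of this estimate is limited only by how well the adjoint solution is resolved, independently of the finite volume scheme used for the forward problem.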

22

Event Generator Validation and Systematic Error Evaluation for Oscillation Experiments

NASA Astrophysics Data System (ADS)

In this document I will describe the validation and tuning of the physics models in the GENIE neutrino event generator and briefly discuss how oscillation experiments make use of this information in the evaluation of model-related systematic errors.

Gallagher, H.

2009-09-01

23

SIMEX and standard error estimation in semiparametric measurement error models

SIMEX is a general-purpose technique for measurement error correction. There is a substantial literature on the application and theory of SIMEX for purely parametric problems, as well as for purely non-parametric regression problems, but there is neither application nor theory for semiparametric problems. Motivated by an example involving radiation dosimetry, we develop the basic theory for SIMEX in semiparametric problems using kernel-based estimation methods. This includes situations in which the mismeasured variable is modeled purely parametrically, purely non-parametrically, or has components that are modeled both parametrically and nonparametrically. Using our asymptotic expansions, easily computed standard error formulae are derived, as are the bias properties of the nonparametric estimator. The standard error method represents a new method for estimating the variability of nonparametric estimators in semiparametric problems, and we show in both simulations and in our example that it improves dramatically on first-order methods. We find that for estimating the parametric part of the model, standard bandwidth choices of order O(n^{-1/5}) are sufficient to ensure asymptotic normality, and undersmoothing is not required. SIMEX has the property that it fits misspecified models, namely ones that ignore the measurement error. Our work thus also more generally describes the behavior of kernel-based methods in misspecified semiparametric problems. PMID:19609371

Apanasovich, Tatiyana V.; Carroll, Raymond J.; Maity, Arnab

2009-01-01
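A minimal numerical sketch of the SIMEX idea in its original parametric setting (not the semiparametric extension developed in the paper): simulate extra measurement error at increasing variance multipliers lambda, track the naive slope, and extrapolate the trend back to lambda = -1, the hypothetical error-free case. All model parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
x = rng.normal(0.0, 1.0, n)                   # true covariate (unobserved)
sigma_u = 0.5                                 # known measurement-error SD
w = x + rng.normal(0.0, sigma_u, n)           # observed, error-prone covariate
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)   # outcome; true slope is 2

def slope(w, y):
    return float(np.polyfit(w, y, 1)[0])

# SIMEX: add extra error with variance lam * sigma_u**2, average the naive
# slope over simulations, then extrapolate the trend back to lam = -1.
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
means = [np.mean([slope(w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n), y)
                  for _ in range(50)]) for lam in lams]
beta_simex = float(np.polyval(np.polyfit(lams, means, 2), -1.0))
naive = slope(w, y)
```

The naive slope is attenuated toward zero by the measurement error (here by the factor 1/(1 + sigma_u^2) = 0.8), and the quadratic extrapolation recovers most of that loss.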

24

Systematic Parameter Errors in Inspiraling Neutron Star Binaries

NASA Astrophysics Data System (ADS)

The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled.

Favata, Marc

2014-03-01

25

Newborn screening for inborn errors of metabolism: a systematic review.

OBJECTIVES. To establish a database of literature and other evidence on neonatal screening programmes and technologies for inborn errors of metabolism. To undertake a systematic review of the data as a basis for evaluation of newborn screening for inborn errors of metabolism. To prepare an objective summary of the evidence on the appropriateness and need for various existing and possible neonatal screening programmes for inborn errors of metabolism in relation to the natural history of these diseases. To identify gaps in existing knowledge and make recommendations for required primary research. To make recommendations for the future development and organisation of neonatal screening for inborn errors of metabolism in the UK. HOW THE RESEARCH WAS CONDUCTED. There were three parts to the research. A systematic review of the literature on inborn errors of metabolism, neonatal screening programmes, new technologies for screening and economic factors. Inclusion and exclusion criteria were applied, and a working database of relevant papers was established. All selected papers were read by two or three experts and were critically appraised using a standard format. Seven criteria for a screening programme, based on the principles formulated by Wilson and Jungner (WHO, 1968), were used to summarise the evidence. These were as follows. Clinically and biochemically well-defined disorder. Known incidence in populations relevant to the UK. Disorder associated with significant morbidity or mortality. Effective treatment available. Period before onset during which intervention improves outcome. Ethical, safe, simple and robust screening test. Cost-effectiveness of screening. A questionnaire which was sent to all newborn screening laboratories in the UK. Site visits to assess new methodologies for newborn screening. 
The classical definition of an inborn error of metabolism was used (i.e., a monogenic disease resulting in deficient activity in a single enzyme in a pathway of intermediary metabolism). RESEARCH FINDINGS. INBORN ERRORS OF METABOLISM. Phenylketonuria (PKU) (incidence 1:12,000) fulfilled all the screening criteria and could be used as the 'gold standard' against which to review other disorders despite significant variation in methodologies, sample collection and timing of screening and inadequacies in the infrastructure for notification and continued care of identified patients. Of the many disorders of organic acid and fatty acid metabolism, a case can only be made for the introduction of newborn screening for glutaric aciduria type 1 (GA1; estimated incidence 1:40,000) and medium-chain acyl CoA dehydrogenase (MCAD) deficiency (estimated incidence 1:8000-1:15,000). Therapeutic advances for GA1 offer prevention of neurological damage but further investigation is required into the costs and benefits of screening for this disorder. MCAD deficiency is simply and cheaply treatable, preventing possible early death and neurological handicap. Neonatal screening for these diseases is dependent upon the introduction of tandem mass spectrometry (tandem MS). This screening could however also simultaneously detect some other commonly-encountered disorders of organic acid metabolism with a collective incidence of 1:15,000.(ABSTRACT TRUNCATED) PMID:9483156

Seymour, C A; Thomason, M J; Chalmers, R A; Addison, G M; Bain, M D; Cockburn, F; Littlejohns, P; Lord, J; Wilcox, A H

1997-01-01

26

Probabilistic Error Bounds for Simulation Quantile Estimators

Xing Jin · Michael C. Fu · Xiaoping … recently, Hsu and Nelson (1990) and Hesterberg and Nelson (1998) apply control variates to obtain variance

Fu, Michael

27

The purpose of the present studies was to test the effects of systematic sources of measurement error on the parameter estimates of scales using the Rasch model. Studies 1 and 2 tested the effects of mood and affectivity. Study 3 evaluated the effects of fatigue. Last, studies 4 and 5 tested the effects of motivation on a number of parameters of the Rasch model (e.g., ability estimates). Results indicated that (a) the parameters of interest and the psychometric properties of the scales were substantially distorted in the presence of all systematic sources of error, and, (b) the use of HGLM provides a way of adjusting the parameter estimates in the presence of these sources of error. It is concluded that validity in measurement requires a thorough evaluation of potential sources of error and appropriate adjustments based on each occasion. PMID:25232668

Sideridis, George D; Tsaousis, Ioannis; Katsis, Athanasios

2014-01-01

28

Bayes Error Rate Estimation Using Classifier Ensembles

NASA Technical Reports Server (NTRS)

The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.

Tumer, Kagan; Ghosh, Joydeep

2003-01-01
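The first of the three methods, averaging a posteriori probability estimates across an ensemble and plugging the result into the Bayes-error formula, can be sketched on synthetic 1-D data where the true Bayes error is known (about 0.159 here); the perturbed-mean "classifiers" are illustrative assumptions, not the article's experiments.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two equiprobable unit-variance Gaussian classes at -1 and +1; the true
# Bayes error is Phi(-1), roughly 0.1587.
x = np.concatenate([rng.normal(-1.0, 1.0, 5000), rng.normal(1.0, 1.0, 5000)])

def posterior_estimate(x, mu0, mu1):
    """One 'classifier': estimated P(class 1 | x) under slightly wrong means."""
    l0 = np.exp(-0.5 * (x - mu0) ** 2)
    l1 = np.exp(-0.5 * (x - mu1) ** 2)
    return l1 / (l0 + l1)

# An ensemble of posterior estimators with perturbed parameters, averaged.
perturbations = [(-0.2, 0.1), (0.15, -0.1), (0.05, 0.2), (-0.1, -0.15)]
p1 = np.mean([posterior_estimate(x, -1.0 + a, 1.0 + b)
              for a, b in perturbations], axis=0)

# Plug-in Bayes-error estimate: expected probability of the non-chosen class.
bayes_est = float(np.mean(np.minimum(p1, 1.0 - p1)))
```

Averaging across the ensemble cancels much of the individual estimators' parameter error, which is why the combined plug-in estimate lands near the true rate even though each member is misspecified.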

29

We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that do not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

Zhang, Le; Timbie, Peter; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, Paul M.; Wandelt, Benjamin D.; Bunn, Emory F.

2013-06-01

30

Overcoming bias and systematic errors in next generation sequencing data

ABSTRACT: Considerable time and effort has been spent in developing analysis and quality assessment methods to allow the use of microarrays in a clinical setting. As is the case for microarrays and other high-throughput technologies, data from new high-throughput sequencing technologies are subject to technological and biological biases and systematic errors that can impact downstream analyses. Only when these issues

Margaret A Taub; Hector Corrada Bravo; Rafael A Irizarry

2010-01-01

31

Recent experiences with error estimation and adaptivity

In this work, experiences with two classes of methods for the a-posteriori estimation of the error in finite-element approximations of elliptic boundary-value problems, namely residual and flux-projection error estimators, are discussed.

Haque, Khalid Ansar

2012-06-07

32

The Effect of Systematic Error in Forced Oscillation Testing

NASA Technical Reports Server (NTRS)

One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

2012-01-01

33

Superconvergent lift estimates through adjoint error analysis

Superconvergent lift estimates through adjoint error analysis M.B. Giles and N.A. Pierce Oxford analysis to obtain approximate values for integral quantities, such as lift and drag, which are twice, there are usually a few integral quantities of primary concern, such as lift and drag on an aircraft, total mass

Pierce, Niles A.

34

Systematic Continuum Errors in the Lyα Forest and the Measured Temperature-Density Relation

NASA Astrophysics Data System (ADS)

Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of ⟨δ(γ)⟩ ≈ −0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.

Lee, Khee-Gan

2012-07-01

35

SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION

Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of ⟨δ(γ)⟩ ≈ −0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.

Lee, Khee-Gan, E-mail: lee@astro.princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

2012-07-10

36

Reducing systematic errors in measurements made by a SQUID magnetometer

NASA Astrophysics Data System (ADS)

A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors - radial displacement in particular - and not by instrumental or environmental noise.

Kiss, L. F.; Kaptás, D.; Balogh, J.

2014-11-01

37

Systematic errors in cosmic microwave background polarization measurements

NASA Astrophysics Data System (ADS)

We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Müller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors through to power spectra and cosmological parameters. The method extends previous studies to an arbitrary scan strategy, and eliminates the need for time-consuming Monte Carlo simulations in the early phases of instrument and survey design. We illustrate the method with both simple parametrized forms for the systematics and with beams based on physical-optics simulations. Example results are given in the context of next-generation experiments targeting tensor-to-scalar ratios r ~ 0.01.

O'Dea, Daniel; Challinor, Anthony; Johnson, Bradley R.

2007-04-01

38

Review of Discretization Error Estimators in Scientific Computing

Christopher J. Roy, Virginia Polytechnic Institute and State University (American Institute of Aeronautics and Astronautics) ... systematically refined meshes. For complex scientific computing applications, the asymptotic range is often

Roy, Chris

39

Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere

J. L. Davis; T. A. Herring; I. I. Shapiro; A. E. E. Rogers; G. Elgered

1985-01-01

40

Aim: Raised intraocular pressure (IOP) increases the risk of glaucoma. Eye-care professionals measure IOP to screen for ocular hypertension (OHT) (IOP > 21 mm Hg) and to monitor glaucoma treatment. Tonometers commonly develop significant systematic measurement errors within months of calibration, and may not be verified often enough. There is no published evidence indicating how accurate tonometers should be. We analysed IOP measurements from a population study to estimate the sensitivity of detection of OHT to systematic errors in IOP measurements. Methods: We analysed IOP data from 3654 participants in the Blue Mountains Eye Study, Australia. An inverse cumulative distribution indicating the proportion of individuals with highest IOP > 21 mm Hg was calculated. A second-order polynomial was fitted to the distribution and used to calculate over- and under-detection of OHT that would be caused by systematic measurement errors between −4 and +4 mm Hg. We calculated changes in the apparent prevalence of OHT caused by systematic errors in IOP. Results: A tonometer that consistently under- or over-reads by 1 mm Hg will miss 34% of individuals with OHT, or yield 58% more positive screening tests, respectively. Tonometers with systematic errors of −4 and +4 mm Hg would miss 76% of individuals with OHT and would over-detect OHT by a factor of seven. Over- and under-detection of OHT are not strongly affected by cutoff IOP. Conclusion: We conclude that tonometers should be maintained and verified at intervals short enough to control systematic errors in IOP measurements to substantially less than 1 mm Hg. PMID:23429411
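The screening-sensitivity effect described in this abstract can be reproduced qualitatively with a toy simulation. This is a sketch only: the Gaussian IOP distribution and its mean/SD are assumptions standing in for the Blue Mountains data, so the exact percentages will differ from the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the population IOP distribution (the study used
# 3654 Blue Mountains Eye Study participants; mean/SD here are assumptions).
iop = rng.normal(16.0, 3.0, 100_000)

def oht_prevalence(measurement_error_mmhg, cutoff=21.0):
    """Apparent OHT prevalence when the tonometer consistently reads
    high or low by a fixed systematic offset."""
    return np.mean(iop + measurement_error_mmhg > cutoff)

true_prev = oht_prevalence(0.0)
for err in (-1.0, +1.0):
    ratio = oht_prevalence(err) / true_prev
    print(f"systematic error {err:+.0f} mm Hg: apparent/true prevalence = {ratio:.2f}")
```

Because the cutoff sits on the steep tail of the distribution, even a 1 mm Hg offset shifts the apparent prevalence substantially, which is the core finding of the paper.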

Turner, M J; Graham, S L; Avolio, A P; Mitchell, P

2013-01-01

41

Systematic error elimination by correction of variable interferometer aberrations

NASA Astrophysics Data System (ADS)

The possibility of reducing the variable systematic error in interferometry measurements via a higher degree of compensation of residual aberrations of the interferometer is shown theoretically and verified experimentally. It is proven that, in the case when interferometer aberrations are variable, one reference hologram is insufficient to correct aberrations. However, using several reference holograms on which the aberrations are recorded, it is possible to reconstruct interference patterns in which the influence of residual aberrations on the behavior of bands is reduced to a definite minimal level.

But', A. I.; Lyalikov, A. M.

2011-05-01

42

Deconvolution of fluorescence decays and estimation errors

NASA Astrophysics Data System (ADS)

The non-iterative Prony's method is discussed for the deconvolution of multi-exponential fluorescence decays. The performance of these algorithms in the case of a two- exponential decay process is evaluated, using a Monte-Carlo simulation, in terms of the estimation errors caused by the signal noise. The results which are presented show that the performance of Prony's method can be greatly improved with the selection of an optimized observation window width and a few algorithm-related parameters. Comparison between Prony's method and the Marquardt least-squared-error algorithm is also made, showing the performance of the former is close to that of the latter with a 98% reduction in calculation running time. The applications of Prony's algorithms in real-time, quasi-distributed temperature sensor systems are discussed and the experimental results are presented to justify the use of the algorithms in Prony's method in practical double exponential fluorescence decay analysis.
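The non-iterative core of Prony's method for a two-exponential decay can be sketched in a few lines. The sampling interval, amplitudes, and lifetimes below are illustrative, and the data are noiseless; real fluorescence decays would need the noise-related parameter choices the abstract discusses.

```python
import numpy as np

def prony_two_exp(y, dt):
    """Recover the two decay constants of y(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2)
    from uniformly spaced samples via Prony's non-iterative method."""
    # Linear prediction: y[k+2] = a1*y[k+1] + a0*y[k] for a two-exponential signal.
    M = np.column_stack([y[1:-1], y[:-2]])
    a1, a0 = np.linalg.lstsq(M, y[2:], rcond=None)[0]
    roots = np.roots([1.0, -a1, -a0])   # characteristic roots z = exp(-dt/tau)
    taus = -dt / np.log(roots.real)     # map roots back to lifetimes
    return np.sort(taus)

dt = 0.1
t = np.arange(0, 5, dt)
y = 2.0 * np.exp(-t / 0.5) + 1.0 * np.exp(-t / 2.0)
print(prony_two_exp(y, dt))   # ≈ [0.5, 2.0]
```

The least-squares step is what makes the method fast relative to iterative fitting: there is no nonlinear search, only one linear solve and one polynomial root-finding.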

Zhang, Zhiyi; Sun, Tong; Grattan, Kenneth T. V.; Palmer, Andrew W.

1997-05-01

43

Galaxy assembly bias: a significant source of systematic error in the galaxy-halo relationship

NASA Astrophysics Data System (ADS)

Methods that exploit galaxy clustering to constrain the galaxy-halo relationship, such as the halo occupation distribution (HOD) and conditional luminosity function (CLF), assume halo mass alone suffices to determine a halo's galaxy content. Yet, halo clustering strength depends upon properties other than mass, such as formation time, an effect known as assembly bias. If galaxy characteristics are correlated with these auxiliary halo properties, the basic assumption of standard HOD/CLF methods is violated. We estimate the potential for assembly bias to induce systematic errors in inferred halo occupation statistics. We construct realistic mock galaxy catalogues that exhibit assembly bias as well as companion mock catalogues with identical HODs, but with assembly bias removed. We fit HODs to the galaxy clustering in each catalogue. In the absence of assembly bias, the inferred HODs describe the true HODs well, validating the methodology. However, in all cases with assembly bias, the inferred HODs exhibit significant systematic errors. We conclude that the galaxy-halo relationship inferred from galaxy clustering is subject to significant systematic errors induced by assembly bias. Efforts to model and/or constrain assembly bias should be priorities as assembly bias is a threatening source of systematic error in galaxy evolution and precision cosmology studies.

Zentner, Andrew R.; Hearin, Andrew P.; van den Bosch, Frank C.

2014-10-01

44

Estimation of Bit Probability of Error Using Sync Word Error Rate Data

Assuming bit errors are independently distributed with a constant probability of error pe, it is shown that a simple estimator is highly efficient for estimation of pe. The estimator is based on a simple function of the number of sync words containing no bit errors. The estimator is shown to be maximum likelihood, minimum chi-square, and modified minimum chi-square when
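A minimal simulation of the estimator described here, under the abstract's independence assumption. The 24-bit sync word length and the true error probability are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_pe(error_free_count, total_words, word_bits):
    """MLE of the bit error probability from the fraction of error-free sync words:
    P(word error-free) = (1 - pe)^n  =>  pe_hat = 1 - (Z/N)^(1/n)."""
    return 1.0 - (error_free_count / total_words) ** (1.0 / word_bits)

# Simulate 10,000 sync words of 24 bits each with true pe = 0.01.
n_bits, n_words, pe_true = 24, 10_000, 0.01
errors = rng.random((n_words, n_bits)) < pe_true
z = np.sum(~errors.any(axis=1))        # number of error-free sync words
print(estimate_pe(z, n_words, n_bits))  # close to 0.01
```

The estimator needs only the count of error-free words, which is what makes it practical for telemetry streams where individual bit errors cannot be observed directly.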

C. Rohde

1970-01-01

45

Gap filling strategies and error in estimating annual soil respiration.

Soil respiration (Rsoil ) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap filling of automated records to produce a complete time series. Although many gap filling methodologies have been employed, there is no standardized procedure for producing defensible estimates of annual Rsoil . Here, we test the reliability of nine different gap filling techniques by inserting artificial gaps into 20 automated Rsoil records and comparing gap filling Rsoil estimates of each technique to measured values. We show that although the most commonly used techniques do not, on average, produce large systematic biases, gap filling accuracy may be significantly improved through application of the most reliable methods. All methods performed best at lower gap fractions and had relatively high, systematic errors for simulated survey measurements. Overall, the most accurate technique estimated Rsoil based on the soil temperature dependence of Rsoil by assuming constant temperature sensitivity and linearly interpolating reference respiration (Rsoil at 10 °C) across gaps. The linear interpolation method was the second best-performing method. In contrast, estimating Rsoil based on a single annual Rsoil - Tsoil relationship, which is currently the most commonly used technique, was among the most poorly-performing methods. Thus, our analysis demonstrates that gap filling accuracy may be improved substantially without sacrificing computational simplicity. Improved and standardized techniques for estimation of annual Rsoil will be valuable for understanding the role of Rsoil in the global C cycle. PMID:23504959
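The best-performing strategy described above (constant temperature sensitivity, with reference respiration at 10 °C linearly interpolated across gaps) can be sketched as follows. The Q10 value and the toy data are assumptions for illustration, not values from the study.

```python
import numpy as np

Q10 = 2.0  # assumed constant temperature sensitivity

def fill_gaps(rsoil, tsoil, q10=Q10):
    """Gap-fill Rsoil by converting measurements to reference respiration at 10 C,
    linearly interpolating the reference across gaps, then converting back."""
    rsoil = np.asarray(rsoil, float)
    r10 = rsoil / q10 ** ((tsoil - 10.0) / 10.0)   # reference respiration at 10 C
    good = ~np.isnan(r10)
    idx = np.arange(len(r10))
    r10_filled = np.interp(idx, idx[good], r10[good])
    return r10_filled * q10 ** ((tsoil - 10.0) / 10.0)

tsoil = np.array([8.0, 10.0, 12.0, 14.0, 12.0])          # soil temperature record
rsoil = np.array([2.3, np.nan, np.nan, 3.5, np.nan])     # Rsoil with gaps
print(fill_gaps(rsoil, tsoil))
```

The key design point is that the interpolation happens in the temperature-normalized quantity, so the filled values still track the measured soil temperature through the gap.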

Gomez-Casanovas, Nuria; Anderson-Teixeira, Kristina; Zeri, Marcelo; Bernacchi, Carl J; DeLucia, Evan H

2013-06-01

46

Ultraspectral Sounding Retrieval Error Budget and Estimation

NASA Technical Reports Server (NTRS)

The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

2011-01-01

47

Background: Various forms of least-squares regression analyses are used to estimate average systematic error (bias) and its confidence interval in method-comparison studies. When assumptions that underlie a particular regression method are inappropriate for the data, errors in estimated statistics result. In this report, I present an improved method for regression analysis that is free from the usual simplifying assumptions and

Robert F. Martin

2000-01-01

48

Rigorous Error Estimates for Reynolds' Lubrication Approximation

NASA Astrophysics Data System (ADS)

Reynolds' lubrication equation is used extensively in engineering calculations to study flows between moving machine parts, e.g. in journal bearings or computer disk drives. It is also used extensively in micro- and bio-fluid mechanics to model creeping flows through narrow channels and in thin films. To date, the only rigorous justification of this equation (due to Bayada and Chambat in 1986 and to Nazarov in 1987) states that the solution of the Navier-Stokes equations converges to the solution of Reynolds' equation in the limit as the aspect ratio ε approaches zero. In this talk, I will show how the constants in these error bounds depend on the geometry. More specifically, I will show how to compute expansion solutions of the Stokes equations in a 2-d periodic geometry to arbitrary order and exhibit error estimates with constants which are either (1) given in the problem statement or easily computable from h(x), or (2) difficult to compute but universal (independent of h(x)). Studying the constants in the latter category, we find that the effective radius of convergence actually increases through 10th order, but then begins to decrease as the inverse of the order, indicating that the expansion solution is probably an asymptotic series rather than a convergent series.

Wilkening, Jon

2006-11-01

49

Although the free energy perturbation procedure is exact when an infinite sample of configuration space is used, for finite sample size there is a systematic error resulting in hysteresis for forward and backward simulations. The qualitative behavior of this systematic error is first explored for a Gaussian distribution, then a first-order estimate of the error for any distribution is derived. To first order the error depends only on the fluctuations in the sample of potential energies, ΔE, and the sample size, n, but not on the magnitude of ΔE. The first-order estimate of the systematic sample-size error is used to compare the efficiencies of various computing strategies. It is found that slow-growth, free energy perturbation calculations will always have lower errors from this source than window-growth, free energy perturbation calculations for the same computing effort. The systematic sample-size errors can be entirely eliminated by going to thermodynamic integration rather than free energy perturbation calculations. When ΔE is a very smooth function of the coupling parameter, λ, thermodynamic integration with a relatively small number of windows is the recommended procedure because the time required for equilibration is reduced with a small number of windows. These results give a method of estimating this sample-size hysteresis during the course of a slow-growth, free energy perturbation run. This is important because in these calculations time-lag and sample-size errors can cancel, so that separate methods of estimating and correcting for each are needed. When dynamically modified window procedures are used, it is recommended that the estimated sample-size error be kept constant, not that the magnitude of ΔE be kept constant. Tests on two systems showed a rather small sample-size hysteresis in slow-growth calculations except in the first stages of creating a particle, where both fluctuations and sample-size hysteresis are large.
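The finite-sample systematic error can be demonstrated with a toy Monte Carlo in which ΔE is Gaussian, so the exact free energy difference is known analytically (ΔF = μ − σ²/2kT in these units). The parameters and sample sizes are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
kT, mu, sigma = 1.0, 0.0, 2.0
exact = mu - sigma**2 / (2 * kT)   # exact result for Gaussian-distributed Delta E

def fep_estimate(n):
    """Finite-sample free energy perturbation estimate
    F = -kT * ln <exp(-dE/kT)> over n sampled Delta E values."""
    dE = rng.normal(mu, sigma, n)
    return -kT * np.log(np.mean(np.exp(-dE / kT)))

for n in (10, 100, 1000):
    trials = [fep_estimate(n) for _ in range(2000)]
    print(n, np.mean(trials) - exact)   # systematic sample-size error shrinks with n
```

Averaging many independent runs isolates the systematic (not random) part of the error: the mean estimate sits above the exact answer, and the offset shrinks as n grows, which is the hysteresis mechanism the abstract analyzes.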

Wood, R.H.; Muehlbauer, W.C.F. (Univ. of Delaware, Newark (United States)); Thompson, P.T. (Swarthmore Coll., PA (United States))

1991-08-22

50

Unsupervised Estimation of Classification and Regression Error Rates

Unsupervised Estimation of Classification and Regression Error Rates. Pinar Donmez, Language Technologies Institute, Carnegie Mellon University (www.lti.cs.cmu.edu) ... Estimating the error rates ... these error rates using only unlabeled data. We prove consistency results for the framework and demonstrate

Eskenazi, Maxine

51

Systematic Review of the Balance Error Scoring System

Context: The Balance Error Scoring System (BESS) is commonly used by researchers and clinicians to evaluate balance. A growing number of studies are using the BESS as an outcome measure beyond the scope of its original purpose. Objective: To provide an objective systematic review of the reliability and validity of the BESS. Data Sources: PubMed and CINAHL were searched using Balance Error Scoring System from January 1999 through December 2010. Study Selection: Selection was based on establishment of the reliability and validity of the BESS. Research articles were selected if they established reliability or validity (criterion related or construct) of the BESS, were written in English, and used the BESS as an outcome measure. Abstracts were not considered. Results: Reliability of the total BESS score and individual stances ranged from poor to moderate to good, depending on the type of reliability assessed. The BESS has criterion-related validity with force plate measures; more difficult stances have higher agreement than do easier ones. The BESS is valid to detect balance deficits where large differences exist (concussion or fatigue). It may not be valid when differences are more subtle. Conclusions: Overall, the BESS has moderate to good reliability to assess static balance. Low levels of reliability have been reported by some authors. The BESS correlates with other measures of balance using testing devices. The BESS can detect balance deficits in participants with concussion and fatigue. BESS scores increase with age and with ankle instability and external ankle bracing. BESS scores improve after training. PMID:23016020

Bell, David R.; Guskiewicz, Kevin M.; Clark, Micheal A.; Padua, Darin A.

2011-01-01

52

A hardware error estimate for floating-point computations

NASA Astrophysics Data System (ADS)

We propose a hardware-computed estimate of the roundoff error in floating-point computations. The estimate is computed concurrently with the execution of the program and gives an estimation of the accuracy of the result. The intention is to have a qualitative indication when the accuracy of the result is low. We aim for a simple implementation and a negligible effect on the execution of the program. Large errors due to roundoff occur in some computations, producing inaccurate results. However, usually these large errors occur only for some values of the data, so that the result is accurate in most executions. As a consequence, the computation of an estimate of the error during execution would allow the use of algorithms that produce accurate results most of the time. In contrast, if an error estimate is not available, the solution is to perform an error analysis. However, this analysis is complex or impossible in some cases, and it produces a worst-case error bound. The proposed approach is to keep with each value an estimate of its error, which is computed when the value is produced. This error is the sum of a propagated error, due to the errors of the operands, plus the generated error due to roundoff during the operation. Since roundoff errors are signed values (when rounding to nearest is used), the computation of the error allows for compensation when errors are of different sign. However, since the error estimate is of finite precision, it suffers from similar accuracy problems as any floating-point computation. Moreover, it is not an error bound. Ideally, the estimate should be large when the error is large and small when the error is small. Since this cannot be achieved always with an inexact estimate, we aim at assuring the first property always, and the second most of the time. As a minimum, we aim to produce a qualitative indication of the error. To indicate the accuracy of the value, the most appropriate type of error is the relative error. However, this type has some anomalies that make it difficult to use. We propose a scaled absolute error, whose value is close to the relative error but does not have these anomalies. The main cost issue might be the additional storage and the narrow datapath required for the estimate computation. We evaluate our proposal and compare it with other alternatives. We conclude that the proposed approach might be beneficial.
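The value-plus-error-estimate bookkeeping described here can be mimicked in software, using float32 as the working precision and double precision to compute each operation's generated rounding error. This is a sketch of the idea only, not the authors' hardware design; the class and operator choices are assumptions.

```python
import numpy as np

class Tracked:
    """Carry with each float32 value a signed estimate of its roundoff error:
    propagated error from the operands plus the generated rounding error
    of the current operation, as the abstract describes."""
    def __init__(self, value, err=0.0):
        self.v = np.float32(value)
        self.err = err                           # signed error estimate (double)

    def __add__(self, other):
        exact = float(self.v) + float(other.v)   # higher-precision reference op
        v = np.float32(exact)
        generated = exact - float(v)             # rounding error of this operation
        return Tracked(v, self.err + other.err + generated)

    def __mul__(self, other):
        exact = float(self.v) * float(other.v)
        v = np.float32(exact)
        generated = exact - float(v)
        # first-order propagation of operand errors through a product
        propagated = self.err * float(other.v) + other.err * float(self.v)
        return Tracked(v, propagated + generated)

x = Tracked(1.0 / 3.0)          # input already rounded; its prior error is unknown, taken as 0
s = (x + x) * Tracked(0.1)
print(s.v, s.err)
```

Because the generated errors are signed, errors of opposite sign cancel in the running estimate, which is exactly the compensation effect the abstract attributes to round-to-nearest.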

Lang, Tomás; Bruguera, Javier D.

2008-08-01

53

Reducing Model Systematic Error over Tropical Pacific through SUMO Approach

NASA Astrophysics Data System (ADS)

Numerical models are key tools in the projection of the future climate change. However, state-of-the-art general circulation models (GCMs) exhibit significant systematic errors and large uncertainty exists in future climate projections, because of limitations in parameterization schemes and numerical formulations. We take a novel approach and build a super model (i.e., an optimal combination of several models): We coupled two atmospheric GCMs (AGCM) with one ocean GCM (OGCM). The two AGCMs receive identical boundary conditions from the OGCM, while the OGCM is driven by a weighted flux combination from the AGCMs. The atmospheric models differ only in their convection scheme. As climate models show large sensitivity to convection schemes, this approach may be a good basis for constructing a super model. We performed experiments with a machine learning algorithm to adjust the coefficients. The coupling strategy is able to synchronize atmospheric variability of the two AGCMs in the tropics, particularly over the western equatorial Pacific, and produce reasonable climate variability. Furthermore, the model with optimal coefficients has not only good performance over the surface temperature and precipitation, but also the positive Bjerknes feedback and the negative heat flux feedback match observations/reanalysis well, leading to a substantially improved simulation of ENSO.

Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Wiegerinck, Wim; Duane, Gregory

2014-05-01

54

Reducing Model Systematic Error through Super Modelling - The Tropical Pacific

NASA Astrophysics Data System (ADS)

Numerical models are key tools in the projection of the future climate change. However, state-of-the-art general circulation models (GCMs) exhibit significant systematic errors and large uncertainty exists in future climate projections, because of limitations in parameterization schemes and numerical formulations. The general approach to tackle uncertainty is to use an ensemble of several different GCMs. However, ensemble results may smear out major variability, such as the ENSO. Here we take a novel approach and build a super model (i.e., an optimal combination of several models): We coupled two atmospheric GCMs (AGCM) with one ocean GCM (OGCM). The two AGCMs receive identical boundary conditions from the OGCM, while the OGCM is driven by a weighted flux combination from the AGCMs. The atmospheric models differ only in their convection scheme. As climate models show large sensitivity to convection schemes, this approach may be a good basis for constructing a super model. We performed experiments with a machine learning algorithm to adjust the coefficients. The coupling strategy is able to synchronize atmospheric variability of the two AGCMs in the tropics, particularly over the western equatorial Pacific, and produce reasonable climate variability. Furthermore, the model with optimal coefficients has not only good performance over the surface temperature and precipitation, but also the positive Bjerknes feedback and the negative heat flux feedback match observations/reanalysis well, leading to a substantially improved simulation of ENSO.
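The weighted-flux idea behind the super model can be illustrated with a toy learning loop: two "models" with opposing biases are combined with weight w, and a gradient step adjusts w to minimize the mismatch with synthetic observations. All data, biases, and learning parameters here are illustrative stand-ins, not the coupled-GCM machinery of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "truth" and two biased model outputs (stand-ins for AGCM fluxes).
obs = np.sin(np.linspace(0, 10, 200))
f1 = obs + 0.4 + 0.1 * rng.standard_normal(200)   # model 1: positive bias
f2 = obs - 0.6 + 0.1 * rng.standard_normal(200)   # model 2: negative bias

w, lr = 0.5, 0.5
for _ in range(200):
    combined = w * f1 + (1 - w) * f2
    grad = 2 * np.mean((combined - obs) * (f1 - f2))  # d/dw of mean squared error
    w -= lr * grad
print(w)   # settles near 0.6, the weight that cancels the opposing biases
```

The analogue in the paper is that the machine learning step tunes the flux-combination coefficients so the combined forcing seen by the ocean model reproduces observed variability better than either atmosphere alone.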

Shen, M.; Keenlyside, N. S.; Selten, F.; Wiegerinck, W.; Duane, G. S.

2013-12-01

55

Standard Error of Empirical Bayes Estimate in NONMEM® VI

The pharmacokinetics/pharmacodynamics analysis software NONMEM® output provides model parameter estimates and associated standard errors. However, the standard error of empirical Bayes estimates of inter-subject variability is not available. A simple and direct method for estimating standard error of the empirical Bayes estimates of inter-subject variability using the NONMEM® VI internal matrix POSTV is developed and applied to several pharmacokinetic models using intensively or sparsely sampled data for demonstration and to evaluate performance. The computed standard error is in general similar to the results from other post-processing methods and the degree of difference, if any, depends on the employed estimation options. PMID:22563254

Kang, Dongwoo; Houk, Brett E.; Savic, Radojka M.; Karlsson, Mats O.

2012-01-01

56

CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes

NASA Technical Reports Server (NTRS)

Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43 ± 0.35 PgC/yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

2012-01-01

57

A study of systematic errors in the PMD CamBoard nano

NASA Astrophysics Data System (ADS)

Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.

Chow, Jacky C. K.; Lichti, Derek D.

2013-04-01

58

Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

NASA Technical Reports Server (NTRS)

A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
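
The screen-and-spread calculation described above can be sketched as follows. This is a minimal illustration of the idea with made-up numbers, screening per grid cell rather than on a zonal-mean basis as the GPCP analysis does:

```python
import numpy as np

def bias_error_estimate(base, products):
    """Estimated bias error: the standard deviation s of the input
    products that fall within +/-50% of the base estimate, plus the
    relative error s/m, where m is the mean of the included products.
    (Screening is done per grid cell here for simplicity; the paper
    screens on a zonal-mean basis, ocean and land separately.)"""
    base = np.asarray(base, dtype=float)
    stack = np.stack([np.asarray(p, dtype=float) for p in products])
    included = np.abs(stack - base) <= 0.5 * base
    masked = np.where(included, stack, np.nan)   # drop excluded products
    s = np.nanstd(masked, axis=0, ddof=1)        # estimated bias error
    m = np.nanmean(masked, axis=0)
    return s, s / m

# Toy example: three products over two grid cells (mm/day).
base = np.array([3.0, 5.0])
products = [np.array([2.8, 5.5]),
            np.array([3.3, 4.6]),
            np.array([2.9, 9.0])]  # 9.0 fails the +/-50% screen
s, rel = bias_error_estimate(base, products)
```

In the second cell only two products survive the ±50% screen, so s there is a two-sample standard deviation; the relative bias error s/m is what the abstract quotes as 10%-20% over the tropical oceans.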

Adler, Robert; Gu, Guojun; Huffman, George

2012-01-01

59

A Note on Confidence Interval Estimation and Margin of Error

ERIC Educational Resources Information Center

Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…

Gilliland, Dennis; Melfi, Vince

2010-01-01

60

Measurement and correction of systematic odometry errors in mobile robots

Odometry is the most widely used method for determining the momentary position of a mobile robot. This paper introduces practical methods for measuring and reducing odometry errors that are caused by the two dominant error sources in differential-drive mobile robots: 1) uncertainty about the effective wheelbase; and 2) unequal wheel diameters. These errors stay almost constant over prolonged periods of

Johann Borenstein; Liqiang Feng

1996-01-01

61

Estimation of Model Error Variances During Data Assimilation

NASA Technical Reports Server (NTRS)

Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. 
We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data assimilation system.

Dee, Dick

2003-01-01

62

MEAN SQUARED ERROR ESTIMATION FOR SMALL AREAS WHEN THE SMALL AREA VARIANCES ARE ESTIMATED

This paper suggests a generalization to Prasad and Rao's estimator for the mean squared errors of small area estimators. This new approach uses the conditional mean squared error estimator of Rivest and Belmonte (2000) as an intermediate step in the derivation. It is used in this paper to incorporate, in the mean squared error estimator for a small area, uncertainty

Louis-Paul Rivest; Nathalie Vandal; L.-P. Rivest

2003-01-01

63

Sample covariance based estimation of Capon algorithm error probabilities

The method of interval estimation (MIE) provides a strategy for mean squared error (MSE) prediction of algorithm performance at low signal-to-noise ratios (SNR) below estimation threshold where asymptotic predictions fail. ...

Richmond, Christ D.

64

NETRA: Interactive Display for Estimating Refractive Errors and Focal Range

We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to ...

Pamplona, Vitor F.

65

Probabilistic state estimation in regimes of nonlinear error growth

State estimation, or data assimilation as it is often called, is a key component of numerical weather prediction (NWP). Nearly all implementable methods of state estimation suitable for NWP are forced to assume that errors ...

Lawson, W. Gregory, 1975-

2005-01-01

66

Mean-square error bounds for reduced-error linear state estimators

NASA Technical Reports Server (NTRS)

The mean-square error of reduced-order linear state estimators for continuous-time linear systems is investigated. Lower and upper bounds on the minimal mean-square error are presented. The bounds are readily computable at each time-point and at steady state from the solutions to the Ricatti and the Lyapunov equations. The usefulness of the error bounds for the analysis and design of reduced-order estimators is illustrated by a practical numerical example.

Baram, Y.; Kalit, G.

1987-01-01

67

A comparison of some error estimates for neural network models

Error estimates are discussed for the predicted values ŷ(β̂; x_i) of a neural network model. A reference for these techniques is Efron and Tibshirani (1993), in particular the delta method estimate of standard error (see Efron and Tibshirani 1993, chapter 21).

Tibshirani, Robert

68

Fisher classifier and its probability of error estimation

NASA Technical Reports Server (NTRS)

Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
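
The leave-one-out idea can be illustrated with a brute-force sketch: refit the Fisher direction with each pattern held out and count misclassifications. This is not the paper's computationally efficient closed-form expression, and it uses a simple midpoint-of-means threshold rather than the optimal threshold derived in the paper; the data are synthetic.

```python
import numpy as np

def fisher_direction(X0, X1):
    """Fisher discriminant direction w = Sw^{-1} (m1 - m0),
    with Sw the pooled within-class scatter matrix."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0.T) * (len(X0) - 1)
          + np.cov(X1.T) * (len(X1) - 1))
    return np.linalg.solve(Sw, m1 - m0)

def loo_error(X0, X1):
    """Brute-force leave-one-out estimate of the probability of
    error (the paper derives expressions that avoid refitting)."""
    X = np.vstack([X0, X1])
    y = np.array([0] * len(X0) + [1] * len(X1))
    errors = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        A, b = X[mask], y[mask]
        w = fisher_direction(A[b == 0], A[b == 1])
        # Midpoint of the projected class means as threshold
        # (a simple choice, not the paper's optimal threshold).
        t = 0.5 * (A[b == 0] @ w).mean() + 0.5 * (A[b == 1] @ w).mean()
        pred = 1 if X[i] @ w > t else 0
        errors += pred != y[i]
    return errors / len(X)

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(40, 2))  # class 0
X1 = rng.normal(2.5, 1.0, size=(40, 2))  # class 1, well separated
err = loo_error(X0, X1)
```

With well-separated Gaussian classes the leave-one-out error lands near the small Bayes error; the closed-form expressions in the paper compute the same quantity without n refits.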

Chittineni, C. B.

1979-01-01

69

Error Propagation in two-sensor 3D position estimation

The accuracy of 3D position estimation using two angles-only sensors (such as passive optical imagers) is investigated. Beginning with the basic multi-sensor triangulation equations used to estimate a 3D target position, error propagation equations are derived by taking the appropriate partial derivatives with respect to various measurement errors. Next the concept of gaussian measurement error is introduced and used to
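
The chain described above (triangulate, differentiate with respect to the measured angles, propagate Gaussian angle noise) can be sketched in 2D; the paper treats the full 3D case with analytic partial derivatives, whereas this illustration forms the Jacobian numerically, and the sensor geometry and noise level are made up:

```python
import numpy as np

def triangulate(s1, s2, th1, th2):
    """2D target position from two angle-only sensors at s1 and s2
    reporting bearings th1, th2 (radians from the x-axis)."""
    # Solve s1 + r1*u1 = s2 + r2*u2 for the ranges r1, r2.
    u1 = np.array([np.cos(th1), np.sin(th1)])
    u2 = np.array([np.cos(th2), np.sin(th2)])
    r = np.linalg.solve(np.column_stack([u1, -u2]),
                        np.asarray(s2, float) - np.asarray(s1, float))
    return np.asarray(s1, float) + r[0] * u1

def position_covariance(s1, s2, th1, th2, sigma, h=1e-6):
    """Covariance of the position estimate under i.i.d. Gaussian
    bearing noise of std sigma: C = sigma^2 * J @ J.T, with the
    Jacobian J formed by central differences."""
    J = np.empty((2, 2))
    for j, (d1, d2) in enumerate([(h, 0.0), (0.0, h)]):
        J[:, j] = (triangulate(s1, s2, th1 + d1, th2 + d2)
                   - triangulate(s1, s2, th1 - d1, th2 - d2)) / (2 * h)
    return (sigma ** 2) * (J @ J.T)

s1, s2 = [0.0, 0.0], [10.0, 0.0]      # sensor positions (made up)
target = np.array([5.0, 8.0])
th1 = np.arctan2(target[1] - s1[1], target[0] - s1[0])
th2 = np.arctan2(target[1] - s2[1], target[0] - s2[0])
est = triangulate(s1, s2, th1, th2)
cov = position_covariance(s1, s2, th1, th2, sigma=1e-3)
```

The diagonal of `cov` gives the position variance along each axis; moving the target relative to the baseline shows how the error ellipse elongates as the triangulation geometry degrades.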

John N. Sanders-Reed

2001-01-01

70

An analysis of the least-squares problem for the DSN systematic pointing error model

NASA Technical Reports Server (NTRS)

A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on Voyager 2 measurement distribution.

Alvarez, L. S.

1991-01-01

71

Error concealment using multiresolution motion estimation

An error concealment scheme for MPEG video networking is presented. Cell loss occurs in the presence of network congestion and buffer overflow. This phenomenon of cell loss transforms into lost image blocks in the decoding process, which can severely degrade the viewing quality. The new method differs from the conventional concealment by its exploitation of spatial and temporal redundancies in

Augustine Tsai; Steven Wiener; Joseph Wilder

1995-01-01

72

Estimating errors in least-squares fitting

NASA Technical Reports Server (NTRS)

While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
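
For the ordinary-polynomial case mentioned above, the parameter covariance matrix is directly available in NumPy, and the standard error of the fitted function follows from it. A minimal sketch with synthetic data (the coefficients and noise level below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y = 2.0 + 0.3 * x + rng.normal(0.0, sigma, x.size)  # noisy line

# Linear fit; cov=True also returns the parameter covariance matrix.
coef, cov = np.polyfit(x, y, deg=1, cov=True)

# Standard error of the fitted function at each x:
# var[f(x)] = g(x)^T C g(x), with basis g(x) = (x, 1) for a line.
G = np.column_stack([x, np.ones_like(x)])
se_fit = np.sqrt(np.einsum('ij,jk,ik->i', G, cov, G))
```

As the closed-form expressions in the paper predict, the standard error of the fitted line is smallest near the mean of the abscissae and grows toward the ends of the data range.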

Richter, P. H.

1995-01-01

73

Parameter estimation and error analysis in environmental modeling and computation

NASA Technical Reports Server (NTRS)

A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.

Kalmaz, E. E.

1986-01-01

74

PDE-constrained optimization with error estimation and control

NASA Astrophysics Data System (ADS)

The paper describes an algorithm for PDE-constrained optimization that controls numerical errors using error estimates and grid adaptation during the optimization process. A key aspect of the algorithm is the use of adjoint variables to estimate errors in the first-order optimality conditions. Multilevel optimization is used to drive the optimality conditions and their estimated errors below a specified tolerance. The error estimate requires two additional adjoint solutions, but only at the beginning and end of each optimization cycle. Moreover, the adjoint systems can be formed and solved with limited additional infrastructure beyond that found in typical PDE-constrained optimization algorithms. The approach is general and can accommodate both reduced-space and full-space formulations of the optimization problem. The algorithm is illustrated using the inverse design of a nozzle constrained by the quasi-one-dimensional Euler equations.

Hicken, J. E.; Alonso, J. J.

2014-04-01

75

Systematic rotation and receiver location error effects on parabolic trough annual performance

NASA Astrophysics Data System (ADS)

The effects of systematic geometrical design errors and random optical errors on the accuracy and subsequent economic viability of solar parabolic trough concentrating collectors were studied to enable designers to choose and specify necessary design and material constraints. A three-dimensional numerical model of a parabolic trough was analyzed with the inclusion of errors of pointing and mechanical deformation, and data from a typical meteorological year. System errors determined as percentage standard deviations provided the range of a study for systematic rotation and receiver location errors. The two types of errors were found to produce compounded effects. It is concluded that the designer must choose performance levels which take into account existence of errors, must know to what level the errors can be eliminated and at what cost, and should make provisions for monitoring the day-to-day on-line focus of the troughs.

Treadwell, G. W.; Grandjean, N. R.

1981-11-01

76

First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

NASA Technical Reports Server (NTRS)

We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

2003-01-01

77

Component analysis of errors in satellite-based precipitation estimates

Satellite-based precipitation estimates have great potential for a wide range of critical applications, but their error characteristics need to be examined and understood. In this study, six (6) high-resolution, satellite-based precipitation data sets are evaluated over the contiguous United States against a gauge-based product. An error decomposition scheme is devised to separate the errors into three independent components, hit bias,

Yudong Tian; Christa D. Peters-Lidard; John B. Eylander; Robert J. Joyce; George J. Huffman; Robert F. Adler; Kuo-lin Hsu; F. Joseph Turk; Matthew Garcia; Jing Zeng

2009-01-01

78

Bootstrap Estimates of Standard Errors in Generalizability Theory

ERIC Educational Resources Information Center

Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

Tong, Ye; Brennan, Robert L.

2007-01-01

79

A Comparison of Some Error Estimates for Neural Network Models

We discuss a number of methods for estimating the standard error of predicted values from a multilayer perceptron. These methods include the delta method based on the Hessian, bootstrap estimators, and the sandwich estimator. The methods are described and compared in a number of examples. We find that the bootstrap methods perform best, partly because they capture variability due to
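
The bootstrap recipe compared above is model-agnostic: refit on resampled data and take the standard deviation of the predictions across refits. A minimal sketch using a ridge-regression stand-in for the learner (the paper uses a multilayer perceptron, but the bootstrap machinery is identical); the data and penalty are made up:

```python
import numpy as np

def fit_predict(Xtr, ytr, Xte, lam=1e-3):
    """Stand-in learner: ridge regression. Swap in any model with
    a fit/predict interface, e.g. a multilayer perceptron."""
    A = Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1])
    w = np.linalg.solve(A, Xtr.T @ ytr)
    return Xte @ w

def bootstrap_se(X, y, Xte, B=200, seed=0):
    """Bootstrap standard error of predictions: refit on B
    resampled training sets, take the std dev across refits."""
    rng = np.random.default_rng(seed)
    preds = np.empty((B, len(Xte)))
    for b in range(B):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        preds[b] = fit_predict(X[idx], y[idx], Xte)
    return preds.std(axis=0, ddof=1)

rng = np.random.default_rng(42)
X = np.column_stack([np.ones(60), rng.uniform(-1, 1, 60)])
y = 1.0 + 2.0 * X[:, 1] + rng.normal(0.0, 0.3, 60)
se = bootstrap_se(X, y, X[:5])   # standard errors at 5 query points
```

Because each refit sees a different resample of the training set, the spread of the B predictions captures variability due to the training data, which is what the delta method based on a single fit misses.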

Robert Tibshirani

1996-01-01

80

Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering

A quantity y that is difficult to measure directly (e.g., lifetime of a pavement, efficiency of an engine, etc.) must often be estimated indirectly, and fast error estimates for such indirect measurements reduce computation time. As an example of this methodology, we give pavement lifetime estimates. This work

Kreinovich, Vladik

81

Estimation in semiparametric transition measurement error models for longitudinal data

We consider semiparametric transition measurement error models for longitudinal data. In a motivating study conducted in U.S. cities with the highest AIDS rates, one main outcome was whether an interviewee had been hospitalized.

Lin, Xihong

82

Data for the 1990-1991 snow season (November-April) have been examined. Dense vegetation, especially in the taiga and tundra regions where errors are greatest, is the major source of systematic error. Assumptions about how

Walker, Jeff

83

General solution for linearized systematic error propagation in vehicle odometry

Vehicle odometry is a nonlinear dynamical system in echelon form. Accordingly, a general solution can be written by solving the nonlinear equations in the correct order. Another implication of this structure is that a completely general solution to the linearized (perturbative) dynamics exists. The associated vector convolution integral is the general relationship between the output error and both the input

Alonzo Kelly

2001-01-01

84

NASA Astrophysics Data System (ADS)

This paper suggested simulation approaches for quantifying and reducing the effects of National Forest Inventory (NFI) plot location error on aboveground forest biomass and carbon stock estimation using the k-Nearest Neighbor (kNN) algorithm. Additionally, the effects of plot location error in pre-GPS and GPS NFI plots were compared. Two South Korean cities, Sejong and Daejeon, were chosen to represent the study area, for which four Landsat TM images were collected together with two NFI datasets established in both the pre-GPS and GPS eras. The effects of plot location error were investigated in two ways: systematic error simulation, and random error simulation. Systematic error simulation was conducted to determine the effect of plot location error due to mis-registration. All of the NFI plots were successively moved against the satellite image in 360° directions, and the systematic error patterns were analyzed on the basis of the changes of the Root Mean Square Error (RMSE) of kNN estimation. In the random error simulation, the inherent random location errors in NFI plots were quantified by Monte Carlo simulation. After removal of both the estimated systematic and random location errors from the NFI plots, the RMSE% were reduced by 11.7% and 17.7% for the two pre-GPS-era datasets, and by 5.5% and 8.0% for the two GPS-era datasets. The experimental results showed that the pre-GPS NFI plots were more subject to plot location error than were the GPS NFI plots. This study's findings demonstrate a potential remedy for reducing NFI plot location errors which may improve the accuracy of carbon stock estimation in a practical manner, particularly in the case of pre-GPS NFI data.
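
The systematic-error part of the simulation, sliding the plot coordinates against the imagery and watching the kNN RMSE, can be illustrated on synthetic data. Everything below is made up (a smooth "biomass" surface standing in for both the field measurements and the spectral raster); the real study uses Landsat TM features and NFI plots:

```python
import numpy as np

def smooth(a, iterations=15):
    """Cheap neighbour-averaging so nearby cells are correlated."""
    for _ in range(iterations):
        a = (a + np.roll(a, 1, 0) + np.roll(a, -1, 0)
               + np.roll(a, 1, 1) + np.roll(a, -1, 1)) / 5.0
    return a

def knn_loo_rmse(raster, coords, plot_values, shift, k=5):
    """Leave-one-out kNN RMSE when the plot coordinates are shifted
    by (dy, dx) against the raster, i.e. mis-registered."""
    feat = raster[coords[:, 0] + shift[0], coords[:, 1] + shift[1]]
    est = np.empty(len(feat))
    for i in range(len(feat)):
        d = np.abs(feat - feat[i])
        d[i] = np.inf                      # leave plot i out
        est[i] = plot_values[np.argsort(d)[:k]].mean()
    return float(np.sqrt(np.mean((est - plot_values) ** 2)))

rng = np.random.default_rng(7)
biomass = smooth(rng.normal(size=(50, 50)))    # synthetic biomass field
raster = biomass.copy()                        # perfectly registered "imagery"
coords = rng.integers(5, 45, size=(100, 2))    # plot locations (row, col)
values = biomass[coords[:, 0], coords[:, 1]]   # field-measured biomass

# Sweep systematic shifts; RMSE should bottom out near zero shift.
rmse = {(dy, dx): knn_loo_rmse(raster, coords, values, (dy, dx))
        for dy in (-3, 0, 3) for dx in (-3, 0, 3)}
```

Minimizing this RMSE surface over shift directions is the essence of the systematic-error simulation; the random-error part of the study instead perturbs each plot independently in a Monte Carlo loop.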

Jung, Jaehoon; Kim, Sangpil; Hong, Sungchul; Kim, Kyoungmin; Kim, Eunsook; Im, Jungho; Heo, Joon

2013-07-01

85

Quantifications of error propagation in slope-based wavefront estimations

We discuss error propagation in the slope-based and the difference-based wavefront estimations. The error propagation coefficient can be expressed as a function of the eigenvalues of the wavefront-estimation-related matrices, and we establish such functions for each of the basic geometries with the serial numbering scheme with which a square sampling grid array is sequentially indexed row by row. We first

Weiyao Zou; Jannick P. Rolland

2006-01-01

86

Analysis of systematic errors of the photoelectric astrolabe catalog SIPA1

NASA Astrophysics Data System (ADS)

Systematic errors of the catalog obtained with the observation of Photoelectic Astrolabe Mark I (PHA I) of Shaanxi Astronomical Observatory, mounted at Irkutsk, Russia, are analyzed with reference to the Hipparcos catalog. It is shown that the systematic errors of the astrolabe catalog can be removed effectively using the Hipparcos Catalog as reference and that a star position system with high precision can be obtained.

Lu, Chunlin; Xu, Jiayan; Li, Dongming; Liu, Jinmei

1999-10-01

87

NASA Astrophysics Data System (ADS)

Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler Spacecraft. The main cause for the systematic errors are changes in the photometer focus due to thermal changes in the instrument, and also residual spacecraft pointing errors. It is the main purpose of the Presearch-Data-Conditioning (PDC) module of the Kepler Science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high frequency noise can be observed in some light curves. Although this high frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise-injection is that high frequency components of light curves sometimes get included into detrending basis vectors characterizing long term trends. Similarly, small scale features like edges can sometimes get included in basis vectors which otherwise describe low frequency trends. As a side effect to removing the trends, detrending with these basis vectors can then also mistakenly introduce these small scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small scale features and large scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small scale from large scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small and large scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.
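
The separation-of-scales idea, detrending low- and high-frequency bands independently so long-term basis vectors cannot absorb transit-scale features, can be sketched with a crude moving-average band split. The pipeline itself uses wavelet-based band splitting, and the synthetic light curve below is made up:

```python
import numpy as np

def band_split(flux, width=49):
    """Split a light curve into low- and high-frequency bands using
    a moving-average low-pass (a stand-in for the pipeline's
    wavelet-based band splitting)."""
    kernel = np.ones(width) / width
    pad = width // 2
    padded = np.pad(flux, pad, mode='edge')
    low = np.convolve(padded, kernel, mode='valid')
    return low, flux - low      # bands sum back to the original

# Synthetic light curve: slow "thermal" trend + short transit-like dips.
t = np.linspace(0.0, 10.0, 1000)
trend = 50.0 * np.sin(2 * np.pi * t / 10.0)
dips = np.where((t * 10.0) % 17.0 < 0.4, -5.0, 0.0)
flux = 1000.0 + trend + dips
low, high = band_split(flux)
```

The slow trend lands almost entirely in `low` while the narrow dips stay in `high`, so detrending `low` against long-term basis vectors cannot inject the dips (or high-frequency noise) back into the corrected light curve.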

Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

2012-05-01

88

Stress Recovery and Error Estimation for 3-D Shell Structures

NASA Technical Reports Server (NTRS)

The C(-1)-continuous (interelement-discontinuous) stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachments: Stress recovery and error estimation for shell structures (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

Riggs, H. R.

2000-01-01

89

An Empirical State Error Covariance Matrix for Batch State Estimation

NASA Technical Reports Server (NTRS)

State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. 
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
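
A minimal numerical sketch of the motivation (not Frisbee's exact construction): when the assumed measurement errors are wrong, the formal weighted-least-squares covariance (AᵀWA)⁻¹ maps only the assumed errors into the state space, while scaling it by the average weighted residual variance yields a covariance driven by the actual residuals. All numbers below are made up:

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear measurement model y = A x + noise, with misstated noise:
# we *assume* sigma = 0.1 but the actual errors are 3x larger.
m, n = 200, 3
A = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed, sigma_actual = 0.1, 0.3
y = A @ x_true + rng.normal(0.0, sigma_actual, m)

W = np.eye(m) / sigma_assumed ** 2
N = A.T @ W @ A                      # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)
P_formal = np.linalg.inv(N)          # reflects only the *assumed* errors

# Empirical variant: scale by the average weighted residual variance,
# so the *actual* residuals drive the reported uncertainty.
r = y - A @ x_hat
s2 = (r @ W @ r) / (m - n)
P_emp = s2 * P_formal
```

Here `s2` comes out near 9 (the square of the 3x noise misstatement), so the empirical covariance inflates the formal one by roughly that factor, which is the behavior the mismodeled-range-error case above is designed to expose.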

Frisbee, Joseph H., Jr.

2011-01-01
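The core idea above — scaling the theoretical weighted least squares covariance by the empirically observed residual variance, so that all error sources enter the state error covariance — can be sketched numerically. This is a minimal illustration under invented numbers and a simplified scaling, not Frisbee's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated linear problem y = A x + noise; the estimator "assumes" unit
# observation noise, but the actual noise is larger (a mismodeled case).
m, n = 50, 2
A = np.column_stack([np.ones(m), np.linspace(0, 1, m)])
x_true = np.array([1.0, 2.0])
sigma_true = 3.0                      # actual noise std (unknown to estimator)
W = np.eye(m)                         # assumed weights (sigma = 1)
y = A @ x_true + sigma_true * rng.standard_normal(m)

# Weighted least squares solution and its *theoretical* covariance
N = A.T @ W @ A
x_hat = np.linalg.solve(N, A.T @ W @ y)
P_theory = np.linalg.inv(N)           # reflects only the assumed weights

# Empirical covariance: scale by the average weighted residual variance,
# so the actual residuals (all error sources) enter the covariance.
r = y - A @ x_hat
s2 = (r @ W @ r) / (m - n)            # empirical unit-weight variance
P_emp = s2 * P_theory

print(np.sqrt(np.diag(P_theory)), np.sqrt(np.diag(P_emp)))
```

In the mismodeled case the empirical covariance is inflated relative to the theoretical one, reflecting the unmodeled error the residuals carry.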

90

This paper addresses an innovative method for the measurement and correction of systematic odometry errors caused by kinematic imperfections in differential-drive mobile robots. An occasional systematic calibration of the mobile robot increases odometric accuracy and reduces operational cost, as less frequent absolute positioning updates are required during operation. Conventionally, the tests used for this purpose

Tanveer Abbas; M. Arif; W. Ahmed

2006-01-01

91

FORWARD AND RETRANSMITTED SYSTEMATIC LOSSY ERROR PROTECTION FOR IPTV VIDEO MULTICAST

Zhi Li. Keywords: protection, error control. Advances in video and networking technologies have made … and lightning strikes. Depending on the duration, impulse noise can be put into three categories, namely

Girod, Bernd

92

Analysis and reduction of tropical systematic errors through a unified modelling strategy

Systematic errors in climate models are usually addressed in a number of ways, but current methods often make use of model climatological fields as a starting point for model modification. This approach has limitations due to non-linear feedback mechanisms which occur over longer timescales and make the source of the errors difficult to identify. In a unified modelling environment, short-range

D. Copsey; A. Marshall; G. Martin; S. Milton; C. Senior; A. Sellar; A. Shelly

2009-01-01

93

Systematic Errors in Primary Acoustic Thermometry in the Range 2-20 K

Following a brief review of the fundamental principles of acoustic thermometry in the range 2-20 K, its systematic errors are analysed in depth. It is argued that the ultrasonic technique suffers from certain sources of error which are virtually impossible to assess quantitatively except on the basis of certain conjectures about the excitation of the thermometer's resonant cavity. These are

A. R. Colclough

1973-01-01

94

Diagnostic difficulty and error in primary care—a systematic review

diagnostic error/delay in primary care. Papers on system errors, patient delay, case reports, reviews, opinion pieces, studies not based on actual cases and studies not using a systematic sample were excluded from the review. Twenty-one papers were included. All papers were assessed for quality using the GRADE system. Common features were identified across diseases and presentations that

Olga Kostopoulou; Brendan C Delaney; Craig W Munro

2008-01-01

95

NASA Astrophysics Data System (ADS)

This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between 2-year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors in the SCIAMACHY measurements are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.

Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.

2014-04-01
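The role of a bias correction derived from ground-based reference data can be illustrated with a toy example: synthetic satellite retrievals carry a latitude-dependent systematic error, and a low-order correction function is fitted against collocated reference measurements. The linear-in-latitude error model and all numbers are assumptions for illustration, not the actual SCIAMACHY/TCCON correction functions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical collocated pairs: satellite XCH4 vs. a ground-based
# (TCCON-like) reference, with a latitude-dependent systematic error
# plus random retrieval noise (all values synthetic, in ppb).
lat = rng.uniform(-60, 60, 200)
truth = 1780.0 + 0.1 * lat                   # synthetic "true" XCH4
bias = 5.0 + 0.08 * lat                      # assumed systematic error model
sat = truth + bias + 2.0 * rng.standard_normal(200)

# Fit a linear bias-correction function in latitude against the reference,
# then subtract it from the retrievals.
coef = np.polyfit(lat, sat - truth, deg=1)
sat_corr = sat - np.polyval(coef, lat)

rms_before = np.sqrt(np.mean((sat - truth) ** 2))
rms_after = np.sqrt(np.mean((sat_corr - truth) ** 2))
print(rms_before, rms_after)
```

After correction the residual RMS approaches the random noise floor; the systematic component no longer leaks into any downstream flux inversion.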

96

NASA Astrophysics Data System (ADS)

This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between two year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.

Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.

2013-10-01

97

Modeling Radar Rainfall Estimation Uncertainties: Random Error Model

others, are also sources of discrepancy between radar estimates and rain gauge measurements (Jordan et al.). Radar estimates, compared with rain gauge measurements, provide higher spatial and temporal resolutions. However, radar data

AghaKouchak, Amir

98

Estimated genotype error rates from bowhead whale microsatellite data

We calculate error rates using opportunistic replicate samples in the microsatellite data for bowhead whales. The estimated rate (1%/genotype) falls within normal ranges reviewed in this paper. The results of a jackknife analysis identified five individuals that were highly influential on estimates of Hardy-Weinberg equilibrium for four different markers. In each case, the influential individual was homozygous for a rare

Phillip A Morin; Richard G LeDuc; Eric Archer; Karen K Martien; Barbara L Taylor; Ryan Huebinger; John W. Bickham

99

Standard Errors for EM Estimates in Generalized Linear Models with

As an additional covariate, among others, the temperature was recorded. The bacteria content was determined … Standard errors of EM estimates in generalized linear models with random effects are considered; quadrature formulae are used

Friedl, Herwig

100

Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis

ERIC Educational Resources Information Center

Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

Sass, Daniel A.

2010-01-01

101

Adaptive Error Estimation in Linearized Ocean General Circulation Models

NASA Technical Reports Server (NTRS)

Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. 
This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

Chechelnitsky, Michael Y.

1999-01-01
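The covariance matching idea — fitting covariance matrices of model-data residuals to their theoretical expectations by least squares — can be sketched with a toy parameterization. Here the residual covariance is assumed to have the form q·M + r·I with a known structure matrix M and unknown scalars q and r; this is an illustrative stand-in for the error models used in the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Model-data residuals with covariance  C = q * M + r * I, where M is a
# known model-error structure matrix; q and r are the unknown parameters.
n, nsamp = 4, 5000
L = rng.standard_normal((n, n))
M = L @ L.T / n                               # known structure matrix
q_true, r_true = 2.0, 0.5
C = q_true * M + r_true * np.eye(n)
resid = rng.multivariate_normal(np.zeros(n), C, size=nsamp)

# "Match" the sample covariance to its theoretical expectation by least
# squares on the vectorized matrices.
S = np.cov(resid, rowvar=False)
G = np.column_stack([M.ravel(), np.eye(n).ravel()])
theta, *_ = np.linalg.lstsq(G, S.ravel(), rcond=None)
q_hat, r_hat = theta
print(q_hat, r_hat)
```

With enough residual samples the recovered (q, r) approach the true values; with few samples, or with M nearly proportional to I, the problem becomes underdetermined, which echoes the caveat in the abstract.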

102

Estimating model error covariance matrix parameters in extended Kalman filtering

NASA Astrophysics Data System (ADS)

The extended Kalman filter (EKF) is a popular state estimation method for nonlinear dynamical models. The model error covariance matrix is often seen as a tuning parameter in EKF, which is often simply postulated by the user. In this paper, we study the filter likelihood technique for estimating the parameters of the model error covariance matrix. The approach is based on computing the likelihood of the covariance matrix parameters using the filtering output. We show that (a) the importance of the model error covariance matrix calibration depends on the quality of the observations, and that (b) the estimation approach yields a well-tuned EKF in terms of the accuracy of the state estimates and model predictions. For our numerical experiments, we use the two-layer quasi-geostrophic model that is often used as a benchmark model for numerical weather prediction.

Solonen, A.; Hakkarainen, J.; Ilin, A.; Abbas, M.; Bibov, A.

2014-09-01
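The filter likelihood technique can be illustrated on a scalar random-walk model with an ordinary (linear) Kalman filter: each candidate model error variance q is scored by the Gaussian likelihood of the filter innovations, and the maximizing value is selected. The candidate grid and all variances are invented for this sketch; the paper applies the same principle to an EKF with a quasi-geostrophic model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Scalar random-walk truth with model error variance q_true, observed with
# noise variance r_obs; candidate q values are scored by filter likelihood.
q_true, r_obs, T = 1.0, 0.25, 400
x = np.cumsum(np.sqrt(q_true) * rng.standard_normal(T))
y = x + np.sqrt(r_obs) * rng.standard_normal(T)

def filter_loglik(q):
    m, P, ll = 0.0, 10.0, 0.0
    for yk in y:
        P = P + q                       # predict (random-walk model)
        S = P + r_obs                   # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (yk - m) ** 2 / S)
        K = P / S                       # Kalman gain and update
        m = m + K * (yk - m)
        P = (1 - K) * P
    return ll

qs = np.array([0.01, 0.1, 0.5, 1.0, 2.0, 10.0])
lls = np.array([filter_loglik(q) for q in qs])
q_best = qs[np.argmax(lls)]
print(q_best)
```

The innovation likelihood penalizes both over-confident (q too small) and over-inflated (q too large) model error settings, which is what makes it usable as a tuning criterion.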

103

MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS

Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.

R. ESTEP; ET AL

2000-06-01
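The replicate randomization idea can be sketched for a toy assay: observed counts are Poisson-randomized into N replicate data sets, each replicate is pushed through the analysis function, and the spread of the results estimates the assay standard deviation. The assay function below is a made-up nonlinear stand-in for a TGS or CTEN reconstruction:

```python
import numpy as np

rng = np.random.default_rng(4)

# A toy "assay": result is a nonlinear function of two measured counts.
def assay(counts):
    return np.sqrt(counts[0]) + 0.01 * counts[1]

measured = np.array([10000.0, 5000.0])   # observed counts (Poisson statistics)

# Randomize the measured counts into N Poisson replicate data sets and
# propagate each through the analysis to estimate the assay's std. dev.
N = 2000
replicates = rng.poisson(measured, size=(N, 2)).astype(float)
results = np.array([assay(c) for c in replicates])
mc_std = results.std(ddof=1)

# First-order analytic propagation for comparison:
#   var = (d/dc1 sqrt(c1))^2 * c1 + 0.01^2 * c2
analytic_std = np.sqrt((0.5 / np.sqrt(measured[0])) ** 2 * measured[0]
                       + 0.01 ** 2 * measured[1])
print(mc_std, analytic_std)
```

For this simple analytic function the two estimates agree; the value of the Monte Carlo route is that it works unchanged when the analysis is a neural network or a tomographic reconstruction with no tractable derivative.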

104

Minimax Mean-Squared Error Location Estimation Using TOA Measurements

NASA Astrophysics Data System (ADS)

This letter deals with mobile location estimation based on a minimax mean-squared error (MSE) algorithm using time-of-arrival (TOA) measurements for mitigating the non-line-of-sight (NLOS) effects in cellular systems. Simulation results are provided illustrating that the minimax MSE estimator yields better performance than the least squares and weighted least squares estimators under relatively low signal-to-noise ratio and moderate NLOS conditions.

Shen, Chih-Chang; Chang, Ann-Chen

105

Estimation of rod scale errors in geodetic leveling

Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

1995-01-01

106

Using Chebyshev norms for estimating machine tool error parameters

Numerically controlled machine tools use their structure as a reference frame for monitoring and controlling the position of the cutting tool. The tool positioning accuracy and, eventually, the accuracy of parts machined (if other sources of errors are not overwhelming) are thus dependent on the accuracy of this reference. Manufacturing and assembly are known to leave behind substantial errors in this reference frame and quasistatic effects (thermal and flexural) can cause significant changes in these errors. The problem of characterizing and compensating these errors is thus one of immediate importance. A good deal of research has been reported on the modeling of these errors and, given an adequately parameterized model, the production of compensated tool motion. Scant attention has been paid to the identification of model parameters, and it would not be far from the truth to state that least squares has been the universally used criterion for identifying model parameters. That is, parameters are identified for an error model so that the sum of squares of the difference between the actual machine errors and model predictions are minimized. Thus, for a machine using a model developed with such parameters and visiting a number of points in its workspace, nothing can be said of its accuracy at any one point. All that can be said is that if the sample of points visited represents an unbiased sample of the workspace, the mean squared errors of the machine at these points is minimized. This research develops a new method for the estimation of the error model parameters of a machine tool using a Chebyshev norm. This type of analysis forces strict bounds on the errors produced by the machine and can be used as a performance measure for the effectiveness of software compensation schemes on machine tools. As with any scheme, such a parameter estimation technique is not a universally superior technique. Its appropriateness for a particular situation is a matter of judgement.

Tajbakhsh, H.; Abadin, A.; Ferreira, P.M. [Univ. of Illinois, Urbana, IL (United States)

1996-12-31
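A Chebyshev (minimax) fit can be posed as a small linear program: minimize a bound t subject to every residual lying within ±t. The sketch below fits a line to synthetic "machine error" data this way, assuming SciPy's linprog is available, and compares the worst-case residual with that of an ordinary least squares fit:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(5)

# Synthetic axis error data with bounded noise (invented for illustration).
x = np.linspace(0, 1, 40)
y = 0.3 + 1.5 * x + rng.uniform(-0.05, 0.05, x.size)

# Chebyshev line fit: minimize t subject to |y - (a + b x)| <= t.
# LP variables are [a, b, t]; each data point gives two inequalities.
A_ub = np.vstack([np.column_stack([ np.ones_like(x),  x, -np.ones_like(x)]),
                  np.column_stack([-np.ones_like(x), -x, -np.ones_like(x)])])
b_ub = np.concatenate([y, -y])
res = linprog(c=[0, 0, 1], A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None), (None, None), (0, None)])
a_cheb, b_cheb, t = res.x

# Least squares for comparison: Chebyshev guarantees the smallest
# worst-case residual over the sampled points.
b_ls, a_ls = np.polyfit(x, y, 1)
max_ls = np.max(np.abs(y - (a_ls + b_ls * x)))
max_cheb = np.max(np.abs(y - (a_cheb + b_cheb * x)))
print(max_cheb, max_ls)
```

This is exactly the distinction the abstract draws: the least squares fit minimizes the mean squared error over visited points, while the Chebyshev fit bounds the error at every point, which is the natural performance measure for a compensation scheme.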

107

Drug treatment of inborn errors of metabolism: a systematic review

Background The treatment of inborn errors of metabolism (IEM) has seen significant advances over the last decade. Many medicines have been developed and the survival rates of some patients with IEM have improved. Dosages of drugs used for the treatment of various IEM can be obtained from a range of sources but tend to vary among these sources. Moreover, the published dosages are not usually supported by the level of existing evidence, and they are commonly based on personal experience. Methods A literature search was conducted to identify key material published in English in relation to the dosages of medicines used for specific IEM. Textbooks, peer reviewed articles, papers and other journal items were identified. The PubMed and Embase databases were searched for material published since 1947 and 1974, respectively. The medications found and their respective dosages were graded according to their level of evidence, using the grading system of the Oxford Centre for Evidence-Based Medicine. Results 83 medicines used in various IEM were identified. The dosages of 17 medications (21%) had grade 1 level of evidence, 61 (74%) had grade 4, two medications were in level 2 and 3 respectively, and three had grade 5. Conclusions To the best of our knowledge, this is the first review to address this matter and the authors hope that it will serve as a quickly accessible reference for medications used in this important clinical field. PMID:23532493

Alfadhel, Majid; Al-Thihli, Khalid; Moubayed, Hiba; Eyaid, Wafaa; Al-Jeraisy, Majed

2013-01-01

108

Analysis of systematic error in “bead method” measurements of meteorite bulk volume and density

NASA Astrophysics Data System (ADS)

The Archimedean glass bead method for determining meteorite bulk density has become widely applied. We used well characterized, zero-porosity quartz and topaz samples to determine the systematic error in the glass bead method to support bulk density measurements of meteorites for our ongoing meteorite survey. Systematic error varies according to bead size, container size and settling method, but in all cases is less than 3%, and generally less than 2%. While measurements using larger containers (above 150 cm³) exhibit no discernible systematic error but much reduced precision, higher precision measurements with smaller containers do exhibit systematic error. For a 77 cm³ container using 40-80 μm diameter beads, the systematic error is effectively eliminated within measurement uncertainties when a "secured shake" settling method is employed in which the container is held securely to the shake platform during a 5 s period of vigorous shaking. For larger 700-800 μm diameter beads using the same method, bulk volumes are uniformly overestimated by 2%. Other settling methods exhibit sample-volume-dependent biases. For all methods, reliability of measurement is severely reduced for samples below ~5 cm³ (10-15 g for typical meteorites), providing a lower-limit selection criterion for measurement of meteoritical samples.

Macke S. J., Robert J.; Britt, Daniel T.; Consolmagno S. J., Guy J.

2010-02-01

109

SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.

Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

2007-08-25

110

Estimating the Standard Error of Robust Regression Estimates.

National Technical Information Service (NTIS)

Over the last two decades there has been much interest in the statistical literature in alternative methods to least squares for fitting equations to data. During this time a large number of estimates of regression coefficients have been proposed that are...

S. J. Sheather, T. P. Hettmansperger

1987-01-01

111

The impact of orbital errors on the estimation of satellite clock errors and PPP

NASA Astrophysics Data System (ADS)

Precise satellite orbits and clocks are essential for providing high-accuracy real-time PPP (Precise Point Positioning) service. However, by treating the predicted orbits as fixed, the orbital errors may be partially assimilated by the estimated satellite clocks and hence impact the positioning solutions. This paper presents an analysis of the impact of errors in the radial and tangential orbital components on the estimation of satellite clocks and PPP through theoretical study and experimental evaluation. The relationship between the compensation of the orbital errors by the satellite clocks and the satellite-station geometry is discussed in detail. Based on the satellite clocks estimated with regional station networks of different sizes (~100, ~300, ~500 and ~700 km in radius), results indicated that the orbital errors compensated by the satellite clock estimates decrease as the size of the network increases. An interesting regional PPP mode based on the broadcast ephemeris and the corresponding estimated satellite clocks is proposed and evaluated through a numerical study. The impact of orbital errors in the broadcast ephemeris has been shown to be negligible for PPP users in a regional network of a radius of ~300 km, with positioning RMS of about 1.4, 1.4 and 3.7 cm for the east, north and up components in the post-mission kinematic mode, comparable with 1.3, 1.3 and 3.6 cm using the precise orbits and the corresponding estimated clocks. Compared with DGPS and RTK positioning, only the estimated satellite clocks need to be disseminated to PPP users in this approach. It can significantly alleviate the communication burden and can therefore be beneficial to real-time applications.

Lou, Yidong; Zhang, Weixing; Wang, Charles; Yao, Xiuguang; Shi, Chuang; Liu, Jingnan

2014-10-01
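The geometric mechanism at work — a satellite clock estimate absorbing whatever part of an orbit error projects identically onto every station's line of sight — can be sketched numerically. The flat-network geometry, station count and 20200 km altitude are simplifying assumptions; the spread that a clock offset cannot absorb grows with network radius, consistent with the trend described above:

```python
import numpy as np

rng = np.random.default_rng(6)

# Stations scattered in a disk of given radius (km); a GNSS-like satellite
# 20200 km above the network centre. A common clock offset absorbs the part
# of an orbit error that is identical in every station's line of sight.
def residual_after_clock(radius_km, err_vec, nsta=50):
    r = radius_km * np.sqrt(rng.uniform(0, 1, nsta))
    ang = rng.uniform(0, 2 * np.pi, nsta)
    stations = np.column_stack([r * np.cos(ang), r * np.sin(ang),
                                np.zeros(nsta)])
    sat = np.array([0.0, 0.0, 20200.0])
    los = sat - stations
    los /= np.linalg.norm(los, axis=1, keepdims=True)
    range_err = los @ err_vec          # orbit error projected on each LOS
    return np.std(range_err)           # spread a clock estimate cannot absorb

radial = np.array([0.0, 0.0, 1.0])     # 1 m radial orbit error
resids = {r: residual_after_clock(r, radial) for r in (100.0, 300.0, 700.0)}
print(resids)
```

For a compact network the lines of sight are nearly parallel, so almost the entire radial error is indistinguishable from a clock offset; widening the network exposes more of it.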

112

Error propagation and scaling for tropical forest biomass estimates.

The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10⁴ m²) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093

Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando

2004-01-01
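When the four error sources are independent, their relative standard errors combine in quadrature. The magnitudes below are invented placeholders, not the values estimated in the study; the sketch only shows the bookkeeping, and that one dominant term (here the allometric model, as the study found) controls the total:

```python
import numpy as np

# Illustrative (made-up) relative standard errors for a plot-level AGB
# estimate, combined in quadrature under an independence assumption.
errors = {
    "tree measurement": 0.02,
    "allometric model": 0.10,
    "plot sampling":    0.05,
    "landscape representativeness": 0.04,
}
total = np.sqrt(sum(e ** 2 for e in errors.values()))
largest = max(errors, key=errors.get)
print(total, largest)
```

Because the terms add in squares, halving the largest term reduces the total far more than eliminating all the smaller ones, which is why the abstract singles out allometric model improvement.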

113

Error Estimation for the Linearized Auto-Localization Algorithm

The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter ? is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

Guevara, Jorge; Jimenez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

2012-01-01
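First-order error propagation of the kind used in the LAL analysis can be sketched generically: linearize the mapping from measured distances to a derived quantity with a numerical Jacobian, propagate the measurement covariance through it, and check against Monte Carlo. The function f below is an illustrative stand-in, not the linearized trilateration equations themselves:

```python
import numpy as np

rng = np.random.default_rng(7)

# First-order (Taylor) propagation of distance errors through a nonlinear
# function, checked against Monte Carlo.
def f(d):
    return np.sqrt(d[0] ** 2 + d[1] ** 2)

d0 = np.array([3.0, 4.0])            # nominal measured distances
sigma = np.array([0.05, 0.05])       # their standard deviations

# Numerical Jacobian of f at the nominal measurements
eps = 1e-6
J = np.array([(f(d0 + eps * np.eye(2)[i]) - f(d0)) / eps for i in range(2)])

var_taylor = J @ np.diag(sigma ** 2) @ J
std_taylor = np.sqrt(var_taylor)

# Monte Carlo reference
samples = d0 + sigma * rng.standard_normal((20000, 2))
std_mc = np.array([f(s) for s in samples]).std()
print(std_taylor, std_mc)
```

The agreement between the two estimates is itself the kind of reliability check the paper formalizes with its confidence parameter: where the function is strongly nonlinear over the error scale, the first-order value and the Monte Carlo value diverge.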

114

Error estimation for the linearized auto-localization algorithm.

The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter ? is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

Guevara, Jorge; Jiménez, Antonio R; Prieto, Jose Carlos; Seco, Fernando

2012-01-01

115

Estimating the 4DVAR analysis error of GODAE products

We explore the ocean circulation estimates obtained by assimilating observational products made available by the Global Ocean Data Assimilation Experiment (GODAE) and other sources in an incremental, four-dimensional variational data assimilation system for the Intra-Americas Sea. Estimates of the analysis error (formally, the inverse Hessian matrix) are computed during the assimilation procedure. Comparing the impact of differing sea surface height

Brian S. Powell; Andrew M. Moore

2009-01-01

116

ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

NASA Technical Reports Server (NTRS)

The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. 
Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and range rate. The observation errors considered are bias, timing, transit time, tracking station location, polar motion, solid earth tidal displacement, ocean loading displacement, tropospheric and ionospheric refraction, and space plasma. The force model elements considered are the earth's potential, the gravitational constant, solid earth tides, solar radiation pressure, earth reflected radiation, atmospheric drag, and thrust errors. The errors are propagated along the satellite orbital path. The ORAN program is written in FORTRAN IV and ASSEMBLER for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 570K of 8-bit bytes. The ORAN program was developed in 1973 and was last updated in 1980.
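The two-component error decomposition described in this record can be sketched for a linear least-squares problem (an illustrative sketch with made-up partials and noise levels, not the ORAN code): measurement noise maps into adjusted-parameter error through the estimator directly, while errors in the unadjusted ("consider") parameters propagate through the estimator's sensitivity to them.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n_adj, n_con = 50, 3, 2           # measurements, adjusted, unadjusted params
A = rng.normal(size=(m, n_adj))      # partials w.r.t. adjusted parameters (assumed)
C = rng.normal(size=(m, n_con))      # partials w.r.t. unadjusted (consider) parameters
sigma = 0.1                          # measurement noise standard deviation (assumed)
P_con = np.diag([0.05**2, 0.02**2])  # assumed covariance of the consider parameters

# Unweighted least-squares estimator: x_hat = G y
G = np.linalg.inv(A.T @ A) @ A.T

# Component 1: adjusted-parameter error covariance due to measurement noise
P_noise = sigma**2 * G @ G.T

# Component 2: error covariance due to errors in the unadjusted parameters,
# propagated through the sensitivity S = G C
S = G @ C
P_consider = S @ P_con @ S.T

# Total formal error of the adjusted parameters
P_total = P_noise + P_consider
print(np.sqrt(np.diag(P_total)))
```

In the terminology of the abstract, the first component is what an orbit determination program evaluates itself; the second is what the simulation computes.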

Putney, B.

1994-01-01

117

Condition and Error Estimates in Numerical Matrix Computations

This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.
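A minimal illustration of the kind of estimate discussed here (a rule-of-thumb sketch, not code from the paper): for a backward-stable linear solver, the relative forward error in the computed solution of Ax = b is bounded roughly by the condition number of A times the unit roundoff.

```python
import numpy as np

A = np.array([[1.0, 0.99], [0.99, 0.98]])   # nearly singular, ill-conditioned
b = np.array([1.99, 1.97])
x = np.linalg.solve(A, b)

kappa = np.linalg.cond(A)                    # condition number in the 2-norm
eps = np.finfo(float).eps                    # unit roundoff, ~2.2e-16

# Rule-of-thumb forward error bound for a backward-stable solver:
# relative error in x is at most on the order of kappa * eps
bound = kappa * eps
print(f"cond(A) = {kappa:.3e}, heuristic relative-error bound = {bound:.3e}")
```

Even though the residual A·x − b is tiny, the large condition number warns that several digits of x may be wrong.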

Konstantinov, M. M. [University of Architecture, Civil Engineering and Geodesy, 1046 Sofia (Bulgaria)]; Petkov, P. H. [Technical University of Sofia, 1000 Sofia (Bulgaria)]

2008-10-30

118

Influence of errors in natural mortality estimates in cohort analysis

the Taylor series type approximations or simulations employed in earlier studies. The case in which a cohort-shaped time trend, rather than the monotonic adjustment one might anticipate. In the analysis that follows we… (Influence of errors in natural mortality estimates in cohort analysis, G. Mertz and R.A. Myers)

Myers, Ransom A.

119

Bootstrap Standard Error Estimates in Dynamic Factor Analysis

ERIC Educational Resources Information Center

Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

Zhang, Guangjian; Browne, Michael W.

2010-01-01

120

DEB: definite error bounded tangent estimator for digital curves.

We propose a simple and fast method for tangent estimation of digital curves. This geometric-based method uses a small local region for tangent estimation and has a definite upper bound error for continuous as well as digital conics, i.e., circles, ellipses, parabolas, and hyperbolas. Explicit expressions of the upper bounds for continuous and digitized curves are derived, which can also be applied to nonconic curves. Our approach is benchmarked against 72 contemporary tangent estimation methods and demonstrates good performance for conic, nonconic, and noisy curves. In addition, we demonstrate a good multigrid and isotropic performance and low computational complexity of O(1) and better performance than most methods in terms of maximum and average errors in tangent computation for a large variety of digital curves. PMID:25122569

Prasad, Dilip K; Leung, Maylor K H; Quek, Chai; Brown, Michael S

2014-10-01

121

Error estimates and specification parameters for functional renormalization

NASA Astrophysics Data System (ADS)

We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by means of truncations depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS-BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.

Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof

2013-07-01

122

Systematic errors in ground heat flux estimation and their correction

Incoming radiation forcing at the land surface is partitioned among the components of the surface energy balance in varying proportions depending on the time scale of the forcing. Based on a land-atmosphere analytic continuum ...

Gentine, P.

123

Systematic error aspects of gauge-measured solid precipitation in the Arctic, Barrow, Alaska

This report provides insight into systematic errors of gauge-measured precipitation in the Arctic by the precipitation gauge intercomparison experiment at Barrow, Alaska. Reference gauges and various national standard gauges used in the Arctic regions were installed. The bias of trace precipitation was recorded with high frequency and varied widely from 6 to 130% increase of the gauge-measured amounts due to

Konosuke Sugiura; Daqing Yang; Tetsuo Ohata

2003-01-01

124

Systematic errors in optical-flow velocimetry for turbulent flows and flames

Joseph Fielding, Marshall B. Long, Gabriel Fielding, and Masaharu Komiyama. Optical-flow (OF) velocimetry is compared to particle-image velocimetry in turbulent flows. The performance of the technique is examined by direct

Long, Marshall B.

125

Coherent laser radar performance in the presence of random and systematic errors

The performance of a coherent range-Doppler imaging laser radar depends on the quality of the waveform generation and processing techniques. Here, the performance of wideband waveforms in the presence of systematic and random amplitude and phase errors which occur during the generation and processing of these waveforms is studied. Performance results are presented for three types of complex pulse train

A. L. Kachelmyer

1989-01-01

126

The objective of this systematic review is to analyse the relative risk reduction on medication error and adverse drug events (ADE) by computerized physician order entry systems (CPOE). We included controlled field studies and pretest-posttest studies, evaluating all types of CPOE systems, drugs and clinical settings. We present the results in evidence tables, calculate the risk ratio with 95% confidence

ELSKE AMMENWERTH; PETRA SCHNELL-INDERST; CHRISTOF MACHAN; UWE SIEBERT

2008-01-01

127

This paper investigates the effect of simple systematic error, or bias (i.e., in the magnitude of data or an associated model), on physical parameters retrieved by least squares algorithms from observations that are indexed by an independent variable. This factor is now of critical interest with the advent of global, space-based ultraviolet remote sensing of thermospheric and ionospheric composition by

J. M. Picone

2008-01-01

128

Barcode Medication Administration System (BCMA) Errors: A Systematic Review

Rupa Mitra, IU School of Informatics. Implementation of Barcode Medication Administration (BCMA) improves the accuracy of administration of medication. These systems can improve medication safety by ensuring that correct medication

Zhou, Yaoqi

129

Background: Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children.Objective: To synthesise peer reviewed knowledge on children’s medication errors and on recommendations to improve paediatric medication safety by a systematic

Marlene R Miller; Karen A Robinson; Lisa H Lubomski; Michael L Rinke; Peter J Pronovost

2007-01-01

130

Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

NASA Technical Reports Server (NTRS)

Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

2002-01-01

131

NASA Technical Reports Server (NTRS)

Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

2013-01-01

132

Discretization error estimation and exact solution generation using the method of nearby problems.

The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
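For contrast with MNP's single-grid estimate, the Richardson extrapolation estimator mentioned above uses two solutions on systematically refined grids; a generic sketch (the refinement factor r and formal order of accuracy p are inputs, not values from the paper):

```python
def richardson_error_estimate(f_fine, f_coarse, r=2.0, p=2.0):
    """Estimate the discretization error of the fine-grid value f_fine.

    For f(h) = f_exact + C*h**p, the error of the fine solution is
    f_fine - f_exact ~= (f_coarse - f_fine) / (r**p - 1).
    """
    return (f_coarse - f_fine) / (r**p - 1.0)

# Toy check: a quantity converging as f(h) = 1 + h^2
f_exact = 1.0
f_coarse = 1.0 + 0.2**2      # h = 0.2
f_fine = 1.0 + 0.1**2        # h = 0.1, so r = 2
est = richardson_error_estimate(f_fine, f_coarse)
print(est, f_fine - f_exact)  # for a purely p-th order error, these agree
```

The point made in the abstract is the cost difference: this estimator requires a second, coarser (or finer) grid solution, whereas MNP/defect correction needs only one additional solve on the same grid.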

Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)

2011-10-01

133

Ideal Bootstrap Estimation of Expected Prediction Error for k-Nearest Neighbor Classifiers

Bootstrap methods, widely used for estimating the expected prediction error of classification rules, are motivated by the objective of calculating the ideal bootstrap estimate of expected

Steele, Brian

134

Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Stations (RSs) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
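A minimal sketch of an LMMSE estimate of the true DC at a user location from noisy reference-station measurements, under the exponential (Gauss-Markov) correlation model described above (the station geometry, correlation distance, and noise level are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
d_c = 50.0                       # correlation distance (km), assumed
sigma2 = 1.0                     # variance of the true DC field, assumed
noise2 = 0.2                     # measurement noise variance at the RSs, assumed

x_rs = np.array([0.0, 30.0, 80.0, 120.0])   # reference-station positions (km)
x_u = 60.0                                   # user position (km)

# Gauss-Markov spatial correlation: C(d) = sigma2 * exp(-|d| / d_c)
def corr(d):
    return sigma2 * np.exp(-np.abs(d) / d_c)

C_yy = corr(x_rs[:, None] - x_rs[None, :]) + noise2 * np.eye(len(x_rs))
C_xy = corr(x_u - x_rs)          # cross-covariance between user point and RSs

# Simulate noisy DC measurements (zero-mean field) and form the LMMSE estimate
y = rng.multivariate_normal(np.zeros(len(x_rs)), C_yy)
w = np.linalg.solve(C_yy, C_xy)  # LMMSE weights
dc_hat = w @ y

# Posterior (estimation-error) variance at the user location
var_post = sigma2 - C_xy @ np.linalg.solve(C_yy, C_xy)
print(dc_hat, var_post)
```

When the assumed correlation distance in `corr` differs from the true one, the weights `w` are mismatched — which is exactly the sensitivity the paper studies.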

Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

2014-01-01

135

NASA Technical Reports Server (NTRS)

The method proposed by Liu (1984) is used to estimate monthly averaged evaporation over the global oceans from 1 yr of Special Sensor Microwave/Imager (SSM/I) data. Intercomparisons involving SSM/I and in situ data are made over a wide range of oceanic conditions during August 1987 and February 1988 to determine the source of errors in the evaporation estimates. The most significant spatially coherent evaporation errors are found to come from estimates of near-surface specific humidity, q. Systematic discrepancies of over 2 g/kg are found in the tropics, as well as in the middle and high latitudes. The q errors are partitioned into contributions from the parameterization of q in terms of the columnar water vapor, i.e., the Liu q/W relationship, and from the retrieval algorithm for W. The effects of W retrieval errors are found to be smaller over most of the global oceans and due primarily to the implicitly assumed vertical structures of temperature and specific humidity on which the physically based SSM/I retrievals of W are based.

Esbensen, S. K.; Chelton, D. B.; Vickers, D.; Sun, J.

1993-01-01

136

A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

NASA Technical Reports Server (NTRS)

A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

Simon, Donald L.; Garg, Sanjay

2010-01-01

137

Improved Soundings and Error Estimates using AIRS/AMSU Data

NASA Technical Reports Server (NTRS)

AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in the minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

Susskind, Joel

2006-01-01

138

Effects of measurement error on estimating biological half-life.

Direct computation of the observed biological half-life of a toxic compound in a person can lead to an undefined estimate when subsequent concentration measurements are greater than or equal to previous measurements. The likelihood of such an occurrence depends upon the length of time between measurements and the variance (intra-subject biological and inter-sample analytical) associated with the measurements. If the compound is lipophilic the subject's percentage of body fat at the times of measurement can also affect this likelihood. We present formulas for computing a model-predicted half-life estimate and its variance; and we derive expressions for the effect of sample size, measurement error, time between measurements, and any relevant covariates on the variability in model-predicted half-life estimates. We also use statistical modeling to estimate the probability of obtaining an undefined half-life estimate and to compute the expected number of undefined half-life estimates for a sample from a study population. Finally, we illustrate our methods using data from a study of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure among 36 members of Operation Ranch Hand, the Air Force unit responsible for the aerial spraying of Agent Orange in Vietnam. PMID:1483030
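The undefined-estimate situation this abstract describes falls directly out of the two-point half-life formula under first-order decay (a generic sketch of the arithmetic, not the authors' statistical model):

```python
import math

def half_life(t1, c1, t2, c2):
    """Two-point biological half-life estimate assuming first-order decay.

    t_half = (t2 - t1) * ln(2) / ln(c1 / c2).
    Returns None when c2 >= c1: the log ratio is non-positive and the
    estimate is undefined (no measured decline between the samples).
    """
    if c2 >= c1:
        return None
    return (t2 - t1) * math.log(2.0) / math.log(c1 / c2)

print(half_life(0.0, 10.0, 5.0, 5.0))    # concentration halved over 5 units -> 5.0
print(half_life(0.0, 10.0, 5.0, 11.0))   # later value higher -> None
```

Measurement error makes c2 >= c1 more likely when the interval between samples is short relative to the measurement variance, which is why the paper models the probability of an undefined estimate.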

Caudill, S P; Pirkle, J L; Michalek, J E

1992-01-01

140

NASA Technical Reports Server (NTRS)

The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

Larson, T. J.; Ehernberger, L. J.

1985-01-01

141

This paper presents the complex dielectric permittivity and loss tangent measurements for a selection of advanced polymer-based thermoplastics in the Q-band, V-band and W-band frequencies and discusses in detail the random and systematic errors that arise in the experimental setup. These plastics are reported to have exceptional mechanical, thermal and electrical properties and are extensively used as electrical insulating materials,

Nahid Rahman; Ana I. Medina Ayala; Konstantin A. Korolev; Mohammed N. Afsar; Rudy Cheung; Maurice Aghion

2009-01-01

142

Analysis of the Systematic Errors Found in the Kipp & Zonen Large-Aperture Scintillometer

Studies have shown a systematic error in the Kipp & Zonen large-aperture scintillometer (K&ZLAS) measurements of the sensible heat flux, H. We improved on these studies and compared four K&ZLASs with a Wageningen large-aperture scintillometer at the Chilbolton Observatory. The scintillometers were installed such that their footprints were the same and independent flux measurements were made along the measurement path.

B. van Kesteren; O. K. Hartogensis

2011-01-01

145

Estimating Uncertainty in Global Climate Projections using Gaussian Error Propagation

NASA Astrophysics Data System (ADS)

Current techniques to estimate uncertainty in projections of future climate emphasize multi-model or single model ensembles, which require huge computational resources and are impractical except for large integrated efforts like the IPCC Assessment Reports. We developed techniques to estimate uncertainty for a single climate projection by comparison with observations and Gaussian error propagation. We demonstrate the technique for a climate projection from 1850 to 2300 for the A1B scenario using Community Earth System Model (CESM). We estimated overall model bias and uncertainty by comparing simulated and observed global climate sensitivity (change in temperature per change in atmospheric CO2 concentration). We estimated the relative contribution of uncertainty in fossil fuel emissions and the Permafrost Carbon Feedback to total uncertainty in simulated global surface air temperature. We verified our results by comparing to a more traditional ensemble of CESM perturbation simulations. Our results indicate that uncertainty in the reported fossil fuel emissions dominate total uncertainty in simulated surface air temperature. These results demonstrate the feasibility of estimating uncertainty for a single climate projection without multi-model or single model ensembles.
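First-order Gaussian error propagation of the kind described combines independent input uncertainties through the model's partial derivatives; a toy sketch with made-up numbers (not the CESM values):

```python
import numpy as np

# Toy model: simulated warming dT = S * dC, with climate sensitivity S
# (K per ppm CO2) and cumulative concentration increase dC (ppm).
S, sigma_S = 0.01, 0.002        # sensitivity and its 1-sigma uncertainty (assumed)
dC, sigma_dC = 300.0, 60.0      # concentration change and its uncertainty (assumed)

dT = S * dC

# Gaussian propagation for independent inputs:
# sigma_dT^2 = (d dT/dS)^2 * sigma_S^2 + (d dT/d dC)^2 * sigma_dC^2
sigma_dT = np.sqrt((dC * sigma_S) ** 2 + (S * sigma_dC) ** 2)
print(f"dT = {dT:.2f} K +/- {sigma_dT:.2f} K")
```

The appeal, as the abstract notes, is that this needs only one model run plus input uncertainties, rather than a perturbation ensemble.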

Schaefer, K. M.; Williams, C. A.; Schwalm, C.; Li, Z.

2012-12-01

146

Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models

NASA Astrophysics Data System (ADS)

Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentations of physical processes, which can be amplified by feedbacks among climate components especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention on the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and location of strong SST gradients. Next the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. 
The results suggest that improving the simulation of regional processes may not suffice for a more successful CGCM performance, as the effects of remote biases may override them. Therefore, efforts to reduce CGCM errors cannot be narrowly focused on particular regions.

Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin

2014-05-01

147

Are Low-order Covariance Estimates Useful in Error Analyses?

NASA Astrophysics Data System (ADS)

Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner, et al (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? 
Here we compare uncertainties and "information content" derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher-Goldfarb-Shanno algorithm). Two cases are examined: a toy problem in which CO2 fluxes for 3 latitude bands are estimated for only 2 time steps per year, and for the monthly fluxes for 22 regions across 1988-2003 solved for in the TransCom3 interannual flux inversion of Baker, et al (2005). The usefulness of the uncertainty estimates will be assessed as a function of the number of minimization steps used in the variational approach; this will help determine whether they will also be useful in the high-resolution cases that we would most like to apply the non-exact methods to. Baker, D.F., et al., TransCom3 inversion intercomparison: Impact of transport model errors on the interannual variability of regional CO2 fluxes, 1988-2003, Glob. Biogeochem. Cycles, doi:10.1029/2004GB002439, 2005, in press. Rayner, P.J., R.M. Law, D.M. O'Brien, T.M. Butler, and A.C. Dilley, Global observations of the carbon budget, 3, Initial assessment of the impact of satellite orbit, scan geometry, and cloud on measuring CO2 from space, J. Geophys. Res., 107(D21), 4557, doi:10.1029/2001JD000618, 2002.

Baker, D. F.; Schimel, D.

2005-12-01

148

Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

NASA Technical Reports Server (NTRS)

Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
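The core of the equation-error approach, regressing a numerically differentiated state on the states and inputs, can be sketched on a toy first-order system (hypothetical dynamics, not the F-16 model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a first-order system xdot = a*x + b*u (hypothetical values).
a_true, b_true = -2.0, 3.0
dt, N = 0.01, 2000
t = np.arange(N) * dt
u = np.sin(2 * np.pi * 0.5 * t)
x = np.zeros(N)
for k in range(N - 1):
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

x_meas = x + 0.001 * rng.normal(size=N)   # measurement noise

# Equation error: differentiate the noisy time series, then regress
# the derivative on the measured state and input by least squares.
xdot = np.gradient(x_meas, dt)            # central differences
A = np.column_stack([x_meas, u])
theta, *_ = np.linalg.lstsq(A, xdot, rcond=None)
a_hat, b_hat = theta
print(a_hat, b_hat)
```

Because the regressors are themselves noisy, the estimates carry a small bias, one of the issues the paper examines; smoothed differentiation reduces it.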

Morelli, Eugene A.

2006-01-01

149

Estimation and sample size calculations for correlated binary error rates of biometric … in FARs and FRRs is the need to determine the sample size necessary to estimate a given error rate to within a specified margin of error, e.g. Snedecor and Cochran (1995). Sample size calculations exist

Schuckers, Michael E.

150

Precision calibration and systematic error reduction in the long trace profiler

The long trace profiler (LTP) has become the instrument of choice for surface figure testing and slope error measurement of mirrors used for synchrotron radiation and x-ray astronomy optics. In order to achieve highly accurate measurements with the LTP, systematic errors need to be reduced by precise angle calibration and accurate focal plane position adjustment. A self-scanning method is presented to adjust the focal plane position of the detector with high precision by use of a pentaprism scanning technique. The focal plane position can be set to better than 0.25 mm for a 1250-mm-focal-length Fourier-transform lens using this technique. A 0.03-arcsec-resolution theodolite combined with the sensitivity of the LTP detector system can be used to calibrate the angular linearity error very precisely. Some suggestions are introduced for reducing the systematic error. With these precision calibration techniques, accuracy in the measurement of figure and slope error on meter-long mirrors is now at a level of about 1 μrad rms over the whole testing range of the LTP. (c) 2000 Society of Photo-Optical Instrumentation Engineers.

Qian, Shinan; Sostero, Giovanni [Sincrotrone Trieste, 34012 Basovizza, Trieste (Italy)]; Takacs, Peter Z. [Brookhaven National Laboratory, Building 535B, Upton, New York 11973 (United States)]

2000-01-01

151

The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this paper, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimal allocation of the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354
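The coupled WHAM equations and their traditional fixed-point solution can be sketched on a synthetic one-dimensional umbrella-sampling problem (all parameters hypothetical; the paper's superlinear-optimization alternative replaces the iteration loop below):

```python
import numpy as np

rng = np.random.default_rng(2)
beta = 1.0

# Discretized reaction coordinate and the (to-be-recovered) unbiased density.
x = np.linspace(-3, 3, 61)
p_true = np.exp(-0.5 * x**2)
p_true /= p_true.sum()

# Harmonic umbrella windows (hypothetical spring constant and centers).
centers = np.linspace(-2, 2, 9)
k_spring = 4.0
c = np.exp(-beta * 0.5 * k_spring * (x[None, :] - centers[:, None])**2)

# Draw synthetic histogram counts from each biased distribution.
N = 5000
n = np.zeros_like(c)
for i in range(len(centers)):
    p_bias = p_true * c[i]
    p_bias /= p_bias.sum()
    n[i] = np.bincount(rng.choice(len(x), size=N, p=p_bias), minlength=len(x))

# Traditional fixed-point iteration of the coupled WHAM equations.
Ni = n.sum(axis=1)
f = np.ones(len(centers))                 # window normalization constants
for _ in range(2000):
    p = n.sum(axis=0) / ((Ni / f) @ c)    # WHAM estimate of the unbiased density
    p /= p.sum()
    f_new = c @ p
    if np.max(np.abs(np.log(f_new / f))) < 1e-10:
        f = f_new
        break
    f = f_new

# Compare recovered and true free energy profiles in the well-sampled region.
mask = (x > -2) & (x < 2)
dF = -np.log(p[mask] / p[mask].max()) + np.log(p_true[mask] / p_true[mask].max())
print(np.max(np.abs(dF)))
```

The residual profile error here is purely statistical; the paper's window-consistency diagnostics target cases where adjacent histograms disagree systematically.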

Zhu, Fangqiang; Hummer, Gerhard

2012-01-01

152

Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicate any analysis of the test results. If the fewer than one-third of the sections that failed to meet second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction-contaminated long-sight-length survey and the less contaminated short-sight-length survey. -Authors

Castle, R.O.; Brown, B.W., Jr.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.

1983-01-01

153

The accuracy of the diagnosis obtained from a nuclear power plant fault-diagnostic advisor using neural networks is addressed in this paper in order to ensure the credibility of the diagnosis. A new error estimation scheme called error estimation by series association provides a measure of the accuracy associated with the advisor's diagnoses. This error estimation is performed by a secondary

Keehoon Kim; Eric B. Bartlett

1996-01-01

154

Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ~10 m/s precision over the entire optical wavelength range on scales of both echelle orders (~50-100 Å) and entire spectrograph arms (~1000-3000 Å). Using archival spectra from the past 20 years we have probed the supercalibration history of the VLT-UVES and Keck-HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions typically varying by ±200 m/s per 1000 Å. We apply a simple model of these distortions to simulated spectra which characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the ...

Whitmore, J B

2014-01-01

155

Effects of measurement error on horizontal hydraulic gradient estimates.

During the design of a natural gradient tracer experiment, it was noticed that the hydraulic gradient was too small to measure reliably on an approximately 500-m^2 site. Additional wells were installed to increase the monitored area to 26,500 m^2, and wells were instrumented with pressure transducers. The resulting monitoring system was capable of measuring heads with a precision of ±1.3 × 10^-2 m. This measurement error was incorporated into Monte Carlo calculations, in which only hydraulic head values were varied between realizations. The standard deviation in the estimated gradient and the flow direction angle from the x-axis (east direction) were calculated. The data yielded an average hydraulic gradient of 4.5 × 10^-4 ± 25% with a flow direction of 56 degrees southeast ± 18 degrees, with the variations representing 1 standard deviation. Further Monte Carlo calculations investigated the effects of the number of wells, the aspect ratio of the monitored area, and the size of the monitored area on the previously mentioned uncertainties. The exercise showed that monitored areas must exceed a size determined by the magnitude of the measurement error if meaningful gradient estimates and flow directions are to be obtained. The aspect ratio of the monitored zone should be as close to 1 as possible, although departures as great as 0.5 to 2 did not degrade the quality of the data unduly. Numbers of wells beyond three to five provided little advantage. These conclusions were supported for the general case with a preliminary theoretical analysis. PMID:17257340
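The Monte Carlo procedure described, perturbing only the heads and refitting a plane to recover gradient magnitude and direction statistics, can be sketched as follows (well layout and gradient are hypothetical, not the site data):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical well layout (m) and a true gradient of ~4.5e-4.
wells = np.array([[0, 0], [160, 10], [20, 150], [150, 140], [80, 75]], float)
grad_true = np.array([4.2e-4, -1.6e-4])        # (dh/dx, dh/dy)
h_true = 10.0 + wells @ grad_true

sigma_h = 1.3e-2                               # transducer precision (m)
A = np.column_stack([np.ones(len(wells)), wells])

# Monte Carlo: perturb only the heads, refit a plane, collect gradient stats.
mags, angles = [], []
for _ in range(5000):
    h = h_true + sigma_h * rng.normal(size=len(wells))
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    gx, gy = coef[1], coef[2]
    mags.append(np.hypot(gx, gy))
    angles.append(np.degrees(np.arctan2(gy, gx)))

print(np.mean(mags), np.std(mags), np.std(angles))
```

Shrinking the layout in the sketch inflates both spreads, reproducing the paper's point that the monitored area must be large relative to the measurement error.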

Devlin, J F; McElwee, C D

2007-01-01

156

Estimating the coverage of mental health programmes: a systematic review

Background The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results We identified 15 735 unique records of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys. PMID:24760874

De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram

2014-01-01

157

Anisotropic discretization and model-error estimation in solid mechanics by local Neumann problems

First, a survey of existing residuum-based error estimators and error indicators is given. Generally, residual error estimators (which, in contrast to indicators, provide at least an upper bound) can be computed locally from the residuals of equilibrium and the stress jumps at element interfaces, using Dirichlet or Neumann conditions for element patches or individual elements (REM). Another equivalent method for error estimation can be derived from

E. Stein; S. Ohnimus

1999-01-01

158

Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

NASA Technical Reports Server (NTRS)

The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.

Kolodziejczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

2013-01-01

159

Hemispheric asymmetry of hippocampal volume is a common finding that has biological relevance, including associations with dementia and cognitive performance. However, a recent study has reported the possibility of systematic error in measurements of hippocampal asymmetry by magnetic resonance volumetry. We manually traced the volumes of the anterior and posterior hippocampus in 40 healthy people to measure systematic error related to image orientation. We found a bias due to the side of the screen on which the hippocampus was viewed, such that hippocampal volume was larger when traced on the left side of the screen than when traced on the right (p = 0.05). However, this bias was smaller than the anatomical right > left asymmetry of the anterior hippocampus. We found right > left asymmetry of hippocampal volume regardless of image presentation (radiological versus neurological). We conclude that manual segmentation protocols can minimize the effect of image orientation in the study of hippocampal volume asymmetry, but our confirmation that such bias exists suggests strategies to avoid it in future studies. PMID:23248580

Rogers, Baxter P.; Sheffield, Julia M.; Luksik, Andrew S.; Heckers, Stephan

2012-01-01

160

Interventions to reduce medication errors in adult intensive care: a systematic review.

Critically ill patients need life saving treatments and are often exposed to medications requiring careful titration. The aim of this paper was to review systematically the research literature on the efficacy of interventions in reducing medication errors in intensive care. A search was conducted of PubMed, CINAHL, EMBASE, Journals@Ovid, International Pharmaceutical Abstract Series via Ovid, ScienceDirect, Scopus, Web of Science, PsycInfo and The Cochrane Collaboration from inception to October 2011. Research studies involving delivery of an intervention in intensive care for adult patients with the aim of reducing medication errors were examined. Eight types of interventions were identified: computerized physician order entry (CPOE), changes in work schedules (CWS), intravenous systems (IS), modes of education (ME), medication reconciliation (MR), pharmacist involvement (PI), protocols and guidelines (PG) and support systems for clinical decision making (SSCD). Sixteen out of the 24 studies showed reduced medication error rates. Four intervention types demonstrated reduced medication errors post-intervention: CWS, ME, MR and PG. It is not possible to promote any interventions as positive models for reducing medication errors. Insufficient research was undertaken with any particular type of intervention, and there were concerns regarding the level of evidence and quality of research. Most studies involved single arm, before and after designs without a comparative control group. Future researchers should address gaps identified in single faceted interventions and gather data on multi-faceted interventions using high quality research designs. The findings demonstrate implications for policy makers and clinicians in adopting resource intensive processes and technologies, which offer little evidence to support their efficacy. PMID:22348303

Manias, Elizabeth; Williams, Allison; Liew, Danny

2012-09-01

161

Effects of errors-in-variables on weighted least squares estimation

NASA Astrophysics Data System (ADS)

Although total least squares (TLS) is more rigorous than the weighted least squares (LS) method to estimate the parameters in an errors-in-variables (EIV) model, it is computationally much more complicated than the weighted LS method. For some EIV problems, the TLS and weighted LS methods have been shown to produce practically negligible differences in the estimated parameters. To understand under what conditions we can safely use the usual weighted LS method, we systematically investigate the effects of the random errors of the design matrix on weighted LS adjustment. We derive the effects of EIV on the estimated quantities of geodetic interest, in particular, the model parameters, the variance-covariance matrix of the estimated parameters and the variance of unit weight. By simplifying our bias formulae, we can readily show that the corresponding statistical results obtained by Hodges and Moore (Appl Stat 21:185-195, 1972) and Davies and Hutton (Biometrika 62:383-391, 1975) are actually the special cases of our study. The theoretical analysis of bias has shown that the effect of the random matrix on adjustment depends on the design matrix itself, the variance-covariance matrix of its elements and the model parameters. Using the derived formulae of bias, we can remove the effect of the random matrix from the weighted LS estimate and accordingly obtain the bias-corrected weighted LS estimate for the EIV model. We derive the bias of the weighted LS estimate of the variance of unit weight. The random errors of the design matrix can significantly affect the weighted LS estimate of the variance of unit weight. The theoretical analysis successfully explains all the anomalously large estimates of the variance of unit weight reported in the geodetic literature. We propose bias-corrected estimates for the variance of unit weight.
Finally, we analyze two examples of coordinate transformation and climate change, which have shown that the bias-corrected weighted LS method can perform numerically as well as the weighted TLS method.
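A scalar toy analogue of the effect analyzed above (not the paper's general formulae): when the regressor itself carries random error, ordinary LS is attenuated toward zero, and knowing the error variance lets one remove the bias.

```python
import numpy as np

rng = np.random.default_rng(4)

# Straight-line fit where the regressor is measured with error (EIV setting).
n = 20000
beta = 2.0
x = rng.normal(0, 1.0, n)            # true regressor
sigma_u = 0.5                        # known design-matrix error s.d. (assumed)
x_obs = x + rng.normal(0, sigma_u, n)
y = beta * x + rng.normal(0, 0.2, n)

# Ordinary LS on the noisy regressor is biased (attenuated) toward zero.
b_ls = (x_obs @ y) / (x_obs @ x_obs)

# Bias-corrected estimate using the known error variance, a toy analogue
# of removing the random-matrix effect from the weighted LS estimate.
s2 = x_obs @ x_obs / n
b_corr = b_ls * s2 / (s2 - sigma_u**2)

print(b_ls, b_corr)
```

Here the uncorrected slope lands near beta/(1 + sigma_u^2) while the corrected one recovers beta, mirroring the bias-correction idea in the abstract.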

Xu, Peiliang; Liu, Jingnan; Zeng, Wenxian; Shen, Yunzhong

2014-07-01

162

Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

ERIC Educational Resources Information Center

When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

2014-01-01

163

Error propagation in two-sensor three-dimensional position estimation

The accuracy of 3D position estimation using two angles-only sensors (such as passive optical imagers) is investigated. Beginning with the basic multisensor triangulation equations used to estimate a 3D target position, error propagation equations are derived by taking the appropriate partial derivatives with respect to various measurement errors. Next the concept of Gaussian measurement error is introduced and used to
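A minimal two-dimensional sketch of this error propagation (hypothetical sensor and target geometry): intersect the two bearing rays, form the Jacobian of position with respect to the measured angles, and check the linearized covariance against a Gaussian-noise Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(5)

s1 = np.array([0.0, 0.0])
s2 = np.array([100.0, 0.0])
target = np.array([40.0, 300.0])

def bearings(p):
    return np.array([np.arctan2(p[1] - s[1], p[0] - s[0]) for s in (s1, s2)])

def triangulate(th):
    # Intersect the two bearing rays: p = s_i + t_i * (cos th_i, sin th_i).
    d1 = np.array([np.cos(th[0]), np.sin(th[0])])
    d2 = np.array([np.cos(th[1]), np.sin(th[1])])
    t = np.linalg.solve(np.column_stack([d1, -d2]), s2 - s1)
    return s1 + t[0] * d1

th0 = bearings(target)
sigma = np.radians(0.05)              # per-sensor bearing noise (assumed)

# Partial derivatives of position w.r.t. each angle, by central differences.
J = np.zeros((2, 2))
for j in range(2):
    dth = np.zeros(2); dth[j] = 1e-7
    J[:, j] = (triangulate(th0 + dth) - triangulate(th0 - dth)) / 2e-7
cov_lin = sigma**2 * J @ J.T          # linearized position covariance

# Monte Carlo check with Gaussian measurement error.
pts = np.array([triangulate(th0 + sigma * rng.normal(size=2))
                for _ in range(20000)])
cov_mc = np.cov(pts.T)

print(np.sqrt(np.diag(cov_lin)), np.sqrt(np.diag(cov_mc)))
```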

John N. Sanders-Reed

2001-01-01

164

Cross-Validation and the Bootstrap: Estimating the Error Rate of a Prediction Rule

A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased, but can be highly variable. This article discusses bootstrap estimates of prediction error, which can be thought
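Both estimators can be sketched with a simple nearest-centroid rule on synthetic two-class data (the rule and data are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-class Gaussian data and a nearest-centroid prediction rule.
n = 200
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.repeat([0, 1], n // 2)

def fit(X, y):
    return np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# 10-fold cross-validation estimate of the prediction error.
idx = rng.permutation(n)
cv_errs = []
for f in np.array_split(idx, 10):
    train = np.setdiff1d(idx, f)
    c = fit(X[train], y[train])
    cv_errs.append(np.mean(predict(c, X[f]) != y[f]))
cv_err = np.mean(cv_errs)

# Leave-one-out bootstrap: test each resampled rule on the points it left out.
boot_errs = []
for _ in range(200):
    b = rng.integers(0, n, n)
    oob = np.setdiff1d(np.arange(n), b)
    c = fit(X[b], y[b])
    boot_errs.append(np.mean(predict(c, X[oob]) != y[oob]))
boot_err = np.mean(boot_errs)

print(cv_err, boot_err)
```

Repeating this over many data sets exposes the trade-off the article discusses: the CV estimate is nearly unbiased but more variable, while the bootstrap estimate is smoother but somewhat pessimistic.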

Bradley Efron; Robert Tibshirani

1995-01-01

165

NASA Astrophysics Data System (ADS)

Numerical weather prediction (NWP) models have deficiencies in surface and boundary layer parameterizations, which may be particularly acute over complex terrain. Structural and physical model deficiencies are often poorly understood, and can be difficult to identify. Uncertain model parameters can lead to one class of model deficiencies when they are mis-specified. Augmenting the model state variables with parameters, data assimilation can be used to estimate the parameter distributions as long as the forecasts of observed variables are linearly dependent on the parameters. Reduced forecast (background) error shows that the parameter is accounting for some component of model error. Ensemble data assimilation has the favorable characteristic of providing ensemble-mean parameter estimates, eliminating some noise in the estimates when additional constraints on the error dynamics are unknown. This study focuses on coupling the Weather Research and Forecasting (WRF) NWP model with the Data Assimilation Research Testbed (DART) to estimate the Zilitinkevich parameter (CZIL). CZIL controls the thermal 'roughness length' for a given momentum roughness, thereby controlling heat and moisture fluxes through the surface layer by specifying the (unobservable) aerodynamic surface temperature. Month-long data assimilation experiments with 96 ensemble members, and grid spacing down to 3.3 km, provide a data set for interpreting parametric model errors in complex terrain. Experiments are during fall 2012 over the western U.S., and radiosonde, aircraft, satellite wind, surface, and mesonet observations are assimilated every 3 hours. One ensemble has a globally constant value of CZIL=0.1 (the WRF default value), while a second ensemble allows CZIL to vary over the range [0.01, 0.99], with distributions updated via the assimilation. Results show that the CZIL estimates do vary in time and space.
Most often, forecasts are more skillful with the updated parameter values, compared to the fixed default values, suggesting that the parameters account for some systematic errors. Because the parameters can account for multiple sources of errors, the importance of terrain in determining surface-layer errors can be deduced from parameter estimates in complex terrain; parameter estimates with spatial scales similar to the terrain indicate that terrain is responsible for surface-layer model errors. We will also comment on whether residual errors in the state estimates and predictions appear to suggest further parametric model error, or some other source of error that may arise from incorrect similarity functions in the surface-layer schemes.

Hacker, Joshua; Lee, Jared; Lei, Lili

2014-05-01

166

TWO A POSTERIORI ERROR ESTIMATES FOR ONE-DIMENSIONAL SCALAR CONSERVATION LAWS

with Charalambos Makridakis; SIAM J. Numer. Anal., Vol. 38, (c) 2000 Society for Industrial and Applied Mathematics. … to an additional error term in the error estimate of Theorem 4. It is interesting to note the nice behavior

Tadmor, Eitan

167

Sensitivity Analysis of k-Fold Cross Validation in Prediction Error Estimation

In the machine learning field, the performance of a classifier is usually measured in terms of prediction error. In most real-world problems, the error cannot be exactly calculated and it must be estimated. Therefore, it is important to choose an appropriate estimator of the error. This paper analyzes the statistical properties, bias and variance, of the k-fold cross-validation classification error

Juan Diego Rodríguez; Aritz Pérez Martínez; José Antonio Lozano

2010-01-01

168

RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION

The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

Gardner, C.J.; Lee, Y.Y.; Weng, W.T.

1998-06-22

169

NASA Astrophysics Data System (ADS)

Redshift-space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, fσ8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realizations of 3.4 × 10^8 comoving h^-3 Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z = 0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 × 10^11-2.0 × 10^13 h^-1 M⊙. We find that the systematic error of fσ8 is greatly reduced to ~5 per cent level, when a recently proposed analytical formula of RSD that takes into account the higher order coupling between the density and velocity fields is adopted, with a scale-dependent parametric bias model. Dependence of the systematic error on the halo mass, the redshift and the maximum wavenumber used in the analysis is discussed. We also find that the Wilson-Hilferty transformation is useful to improve the accuracy of likelihood analysis when only a small number of modes are available in power spectrum measurements.

Ishikawa, Takashi; Totani, Tomonori; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari

2014-10-01

170

Field evaluation of distance-estimation error during wetland-dependent bird surveys

Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error.
Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.

Nadeau, Christopher P.; Conway, Courtney J.

2012-01-01

171

Systematic errors for matched filtering of gravitational waves from inspiraling compact binaries

NASA Astrophysics Data System (ADS)

We computed a thorough set of ambiguity functions with templates corresponding to various post-Newtonian approximations of the real signal, in order to study the detection of gravitational waves emitted by the inspiraling phase of compact binaries and to quantify systematically the induced bias in the parameters. The noise spectrum is taken from the VIRGO interferometer, which has an effective frequency range larger than the one predicted for LIGO. We first confirm the results of previous authors that the Newtonian filter has a very low capability of detection and that the 2 PN restricted waveform is good enough for detection in the case of neutron star binaries. Moreover, we also show that constant spins aligned with the orbital momentum have no significant effect on on-line selection. We point out that the maximization may lead to unphysical values of parameters to compensate for the systematic errors due to imperfectly modeled templates, so that one should use a wider range of variation of the mass ratio parameter η = reduced mass over total mass (not restricted to 0 ≤ η ≤ 1/4). We also demonstrate that the higher harmonics at one and three times the orbital frequency cannot always be neglected for detection. The loss of signal-to-noise ratio amounts to 6% with a 1.4 and 10 solar mass binary in certain cases.

Canitrot, Philippe

2001-04-01

172

Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several ''high quality'' systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra.
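The way a conservative prior breaks a parameter degeneracy in an MCMC exploration can be sketched on a toy posterior (purely illustrative; not the nebular model or data of the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy posterior with a strong degeneracy: the likelihood constrains only the
# sum a + T (mimicking coupled nebular parameters), while a conservative
# Gaussian prior on the temperature-like parameter T breaks the degeneracy.
def log_post(a, T):
    loglike = -0.5 * ((a + T - 2.0) / 0.05) ** 2   # tight degenerate ridge
    logprior = -0.5 * ((a - 1.0) / 1.0) ** 2       # weak prior on abundance-like a
    logprior += -0.5 * ((T - 1.0) / 0.2) ** 2      # conservative prior on T
    return loglike + logprior

# Random-walk Metropolis sampler.
n_steps, step = 30000, 0.05
chain = np.empty((n_steps, 2))
p = np.array([0.5, 1.5])
lp = log_post(*p)
for i in range(n_steps):
    prop = p + step * rng.normal(size=2)
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        p, lp = prop, lp_prop
    chain[i] = p
chain = chain[n_steps // 3:]                       # discard burn-in

a_mean, T_mean = chain.mean(axis=0)
print(a_mean, T_mean, np.corrcoef(chain.T)[0, 1])
```

The chain traces out the anticorrelated ridge, and the marginal on the abundance-like parameter tightens only because of the temperature prior, the mechanism the abstract describes.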

Aver, Erik [School of Physics and Astronomy, University of Minnesota, 116 Church St. SE, Minneapolis, MN 55455 (United States); Olive, Keith A. [William I. Fine Theoretical Physics Institute, University of Minnesota, 116 Church St. SE, Minneapolis, MN 55455 (United States); Skillman, Evan D., E-mail: aver@physics.umn.edu, E-mail: olive@umn.edu, E-mail: skillman@astro.umn.edu [Astronomy Department, University of Minnesota, 116 Church St. SE, Minneapolis, MN 55455 (United States)

2011-03-01

173

Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

We present a new "supercalibration" technique for measuring systematic distortions in the wavelength scales of high resolution spectrographs. By comparing spectra of "solar twin" stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium–argon calibration can be tracked with ~10 m/s precision over the entire optical wavelength range, on scales of both echelle orders (~50–100 Å) and entire spectrograph arms (~1000–3000 Å). Using archival spectra from the past 20 years, we have probed the supercalibration history of the VLT–UVES and Keck–HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions typically varying by ±200 m/s per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT–UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.

J. B. Whitmore; M. T. Murphy

2014-09-15

174

Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska

NASA Astrophysics Data System (ADS)

Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
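The weighted least squares projection at the heart of such forward modeling can be sketched numerically. This is a toy illustration, not the authors' GRACE setup: the basin "kernel" matrix, basin count, and noise level below are all hypothetical, chosen only to show how mass-change parameters for a fixed set of basins are recovered from mixed (leakage-affected) observations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 3 local basins observed at 50 points. Each column
# of A is a basin "kernel"; overlap between columns mimics spatial leakage.
n_obs, n_basins = 50, 3
A = rng.random((n_obs, n_basins))
true_mass = np.array([5.0, -2.0, 0.5])       # mass-change slope per basin
noise_sd = 0.3
y = A @ true_mass + rng.normal(0, noise_sd, n_obs)

# Weighted least squares: project the observations onto the basin set.
W = np.eye(n_obs) / noise_sd**2              # inverse data covariance
est = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
cov = np.linalg.inv(A.T @ W @ A)             # formal parameter covariance
```

The formal covariance quantifies only the propagated data noise; as the abstract stresses, systematic errors from the basin layout itself must be assessed separately, e.g. with a "truth" model.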

Bonin, J. A.; Chambers, D. P.

2012-12-01

175

NASA Astrophysics Data System (ADS)

The problem of estimating parameters and their uncertainty from experimental measurements in marine ecosystems is a common task and often necessitates solving nonlinear equations. If the measurements are subject to individually varying errors (i.e., heteroscedastic data), the parameters are often estimated using a Weighted Least Squares (WLS) method. For estimating the parameter uncertainties, a linearized expression for the covariance matrix exists. Yet, both methods assume that the errors on the independent variable, also called the "input", are negligible, which is often not true. For instance, in order to determine uptake and regeneration rates of silicic acid by phytoplankton, concentration and isotopic abundance measurements are performed at the beginning (input) and at the end (output) of an incubation experiment. Here, the so-called input and output are measurements of the same quantities, i.e., determined in exactly the same way, differing only in the time at which the measurements were performed. Clearly, there is no reason to assume that the input measurements are subject to less error than the output measurements. We propose a refinement of the two above-mentioned estimation methods which enlarges their applicability to cases where input noise is not negligible. The refined methods are evaluated on the uptake and regeneration processes of silicic acid and compared to the original procedures using Monte Carlo simulations. The results reveal a smaller bias for the refined WLS estimator compared with the original one. An additional advantage of using the refined WLS cost function is that its residual value can be interpreted as a sample from a χ² distribution. This property is especially useful because it enables an internal quality control of the results. In addition, the parameter uncertainty estimation is significantly improved. By neglecting the effect of the input noise, a (potentially) important origin of the parameter variation is simply ignored. Therefore, without the refinement, the parameter uncertainties are systematically underestimated. Using the refined method, this systematic error disappears and, on the whole, the parameter standard deviations are accurately estimated.
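The underlying effect can be demonstrated with a much simpler stand-in than the authors' nonlinear incubation model: a one-parameter linear fit with a classical errors-in-variables correction. All numbers are hypothetical; the point is only that ignoring input noise attenuates the estimate, while an estimator that accounts for the known input-noise power removes the bias.

```python
import random

rng = random.Random(1)
n, a_true = 2000, 0.8
sigma_in, sigma_out = 3.0, 0.5   # input noise is deliberately NOT negligible

x_true = [rng.uniform(0.0, 10.0) for _ in range(n)]
x_meas = [x + rng.gauss(0.0, sigma_in) for x in x_true]   # noisy "input"
y_meas = [a_true * x + rng.gauss(0.0, sigma_out) for x in x_true]  # "output"

sxx = sum(x * x for x in x_meas)
sxy = sum(x * y for x, y in zip(x_meas, y_meas))

# Naive LS ignores the input noise, so the slope is attenuated (biased low).
a_naive = sxy / sxx
# Corrected estimator: subtract the expected input-noise power from sxx.
a_refined = sxy / (sxx - n * sigma_in**2)
```

Here the naive slope is biased toward zero by roughly the ratio of signal power to total input power, while the corrected estimator recovers the true slope on average; a full treatment would also propagate the input noise into the parameter covariance, as the refined WLS of the abstract does.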

de Brauwere, Anouk; De Ridder, Fjo; Elskens, Marc; Schoukens, Johan; Pintelon, Rik; Baeyens, Willy

2005-04-01

176

The purpose of this work was the development of a probabilistic planning method with biological cost functions that does not require the definition of margins. Geometrical uncertainties were integrated in tumor control probability (TCP) and normal tissue complication probability (NTCP) objective functions for inverse planning. For efficiency reasons random errors were included by blurring the dose distribution and systematic errors by shifting structures with respect to the dose. Treatment plans were made for 19 prostate patients following four inverse strategies: Conformal with homogeneous dose to the planning target volume (PTV), a simultaneous integrated boost using a second PTV, optimization using TCP and NTCP functions together with a PTV, and probabilistic TCP and NTCP optimization for the clinical target volume without PTV. The resulting plans were evaluated by independent Monte Carlo simulation of many possible treatment histories including geometrical uncertainties. The results showed that the probabilistic optimization technique reduced the rectal wall volume receiving high dose, while at the same time increasing the dose to the clinical target volume. Without sacrificing the expected local control rate, the expected rectum toxicity could be reduced by 50% relative to the boost technique. The improvement over the conformal technique was larger still. The margin-based biological technique led to toxicity in between the boost and probabilistic techniques, but its control rates were very variable and relatively low. During evaluations, the sensitivity of the local control probability to variations in biological parameters appeared similar for all four strategies. The sensitivity to variations of the geometrical error distributions was strongest for the probabilistic technique. It is concluded that probabilistic optimization based on tumor control probability and normal tissue complication probability is feasible. 
It results in robust prostate treatment plans with an improved balance between local control and rectum toxicity, compared to conventional techniques.

Witte, Marnix G.; Geer, Joris van der; Schneider, Christoph; Lebesque, Joos V.; Alber, Markus; Herk, Marcel van [Department of Radiation Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek Hospital, Amsterdam (Netherlands); Sektion fuer Biomedizinische Physik, Universitaetsklinik fuer Radioonkologie, Universitaet Tuebingen (Germany); Department of Radiation Oncology, Netherlands Cancer Institute, Antoni van Leeuwenhoek Hospital, Amsterdam (Netherlands)

2007-09-15

177

NASA Technical Reports Server (NTRS)

This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

2002-01-01

178

Autonomous error bounding of position estimates from GPS and Galileo

In safety-of-life applications of satellite-based navigation, such as the guided approach and landing of an aircraft, the most important question is whether the navigation error is tolerable. Although differentially corrected ...

Temple, Thomas J. (Thomas John)

2006-01-01

179

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were subject to additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
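The practical consequence of multiplicative errors can be shown with a small Monte Carlo sketch. This is a hypothetical toy terrain, not the authors' landslide simulation: it only demonstrates that when each height error is proportional to the height itself, the error budget of a summed-height "volume" estimate follows the terrain rather than being a constant additive floor.

```python
import random

rng = random.Random(0)

def simulate_volume_error(n_trials=500, n_points=400, rel_sd=0.05):
    """Monte Carlo: effect of multiplicative (proportional) measurement
    errors on a summed-height 'volume' over a toy sloping surface."""
    heights = [10.0 + 0.02 * i for i in range(n_points)]  # true heights
    true_volume = sum(heights)
    errs = []
    for _ in range(n_trials):
        # Each point's error standard deviation scales with its height.
        meas = [h * (1.0 + rng.gauss(0.0, rel_sd)) for h in heights]
        errs.append(sum(meas) - true_volume)
    mean_err = sum(errs) / n_trials
    rms_err = (sum(e * e for e in errs) / n_trials) ** 0.5
    return true_volume, mean_err, rms_err

true_volume, mean_err, rms_err = simulate_volume_error()
```

For independent proportional errors, the volume error RMS is rel_sd times the root of the sum of squared heights, so taller (or steeper) terrain contributes disproportionately, which an additive-error treatment would miss.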

Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

2013-01-01

180

Estimation of frequencies in presence of heavy tail errors

In this paper, we consider the problem of estimating the sinusoidal frequencies in presence of additive white noise. The additive white noise has mean zero but it may not have finite variance. We propose to use the least-squares estimators or the approximate least-squares estimators to estimate the unknown parameters. It is observed that the least-squares estimators and the approximate least-squares

Swagata Nandi; Srikanth K. Iyer; Debasis Kundu

2002-01-01

181

NASA Technical Reports Server (NTRS)

One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. 
While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when multivariate correction is used, as evident from the analyses of the rms differences of these fields and independent observations. The MvOI assimilation is found to improve upon the control run in generating the water masses with properties close to the observed, while the UOI failed to maintain the temperature and salinity structure.
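The key ingredient of the MvOI, cross-covariances estimated from an ensemble, can be sketched at a single grid point. This is a toy illustration with a hypothetical linear T–S relation, not the assimilation system described above: it only shows how an ensemble-estimated temperature-salinity covariance lets a temperature observation update the salinity field in an OI-style analysis step.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy ensemble at one grid point: temperature and salinity, correlated
# through a hypothetical linear T-S relation plus noise.
n_ens = 500
T = rng.normal(20.0, 1.0, n_ens)
S = 35.0 + 0.4 * (T - 20.0) + rng.normal(0.0, 0.1, n_ens)

# Ensemble-estimated (co)variances: the multivariate ingredient.
var_T = np.var(T, ddof=1)
cov_TS = np.cov(T, S, ddof=1)[0, 1]

# Assimilate one temperature observation; salinity is updated through
# the cross-covariance, as in a multivariate OI analysis step.
T_obs, r_obs = 21.0, 0.25**2           # obs value and obs error variance
K_T = var_T / (var_T + r_obs)          # gain for temperature
K_S = cov_TS / (var_T + r_obs)         # cross gain for salinity
T_mean, S_mean = T.mean(), S.mean()
innov = T_obs - T_mean                 # innovation
T_anal = T_mean + K_T * innov
S_anal = S_mean + K_S * innov
```

A univariate OI would set K_S to zero and leave salinity untouched, which is exactly the behavior the abstract identifies as detrimental to the model's water-mass properties.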

Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

2004-01-01

182

Multivariate Error Covariance Estimates by Monte Carlo Simulation for Assimilation Studies (Assimilation Office, NASA Goddard Space Flight Center, Greenbelt, Maryland; Christian L. Keppenne, SAIC)

Johnson, Gregory C.

183

Sensitivity of LIDAR Canopy Height Estimate to Geolocation Error

NASA Astrophysics Data System (ADS)

Many factors affect the quality of canopy height structure data derived from space-based lidar such as DESDynI. Among these is geolocation accuracy. Inadequate geolocation information hinders subsequent analyses because a different portion of the canopy is observed relative to what is assumed. This is especially true in mountainous terrain where the effects of slope magnify geolocation errors. Mission engineering design must trade the expense of providing more accurate geolocation with the potential improvement in measurement accuracy. The objective of our work is to assess the effects of small errors in geolocation on subsequent retrievals of maximum canopy height for a varying set of canopy structures and terrains. Dense discrete lidar data from different forest sites (from La Selva Biological Station, Costa Rica, Sierra National Forest, California, and Hubbard Brook and Bartlett Experimental Forests in New Hampshire) are used to simulate DESDynI height retrievals using various geolocation accuracies. Results show that canopy height measurement errors generally increase as the geolocation error increases. Interestingly, most of the height errors are caused by variation of canopy height rather than topography (slope and aspect).
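The sensitivity experiment can be mimicked on synthetic data. The 1-D canopy profile below is hypothetical, not the discrete-return lidar data used above: it only reproduces the logic of the study, i.e., retrieve maximum canopy height in a footprint, perturb the footprint location by a Gaussian geolocation error, and measure how the height error grows with the geolocation error.

```python
import math
import random

rng = random.Random(5)

# Synthetic canopy-height profile on a 1 m grid (hypothetical stand).
size = 400
canopy = [20.0 + 8.0 * math.sin(i / 15.0) + rng.uniform(-2.0, 2.0)
          for i in range(size)]

def max_height(center, half_width=12):
    """Maximum canopy height inside a ~25 m footprint centred at `center`."""
    lo, hi = max(0, center - half_width), min(size, center + half_width + 1)
    return max(canopy[lo:hi])

def rms_height_error(geo_sd, n_shots=3000):
    """RMS error in retrieved maximum height under Gaussian geolocation error."""
    sq = 0.0
    for _ in range(n_shots):
        c = rng.randrange(50, size - 50)                # intended footprint
        shift = int(round(rng.gauss(0.0, geo_sd)))      # geolocation error
        c_shifted = min(size - 13, max(12, c + shift))  # actual footprint
        sq += (max_height(c_shifted) - max_height(c)) ** 2
    return (sq / n_shots) ** 0.5

rms_small = rms_height_error(2.0)
rms_large = rms_height_error(10.0)
```

As in the study, the retrieved-height error grows with the geolocation error because a different portion of the canopy is observed than assumed; extending the profile with a terrain slope would add the slope-magnification effect the abstract mentions.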

Tang, H.; Dubayah, R.

2010-12-01

184

Mesoscale predictability and background error covariance estimation through ensemble forecasting

Contents (excerpt): The "Fake-Dry" Experiment; 3.4 Summary; IV Background Error Covariance: 4.1 Introduction; 4.2 Cross Covariance; 4.3 Correlation; 4.4 Spatial Covariance; 4.5 Cross-Spatial Covariance; 4.6 Summary.

Ham, Joy L

2012-06-07

185

Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications

normal estimator with significant efficiency gains over current methods. Lastly, we apply semiparametric theory to mixture data models common in kin-cohort designs of Huntington's disease where interest lies in comparing the estimated age...

Garcia, Tanya

2012-10-19

186

We review the general nature of human error(s) in complex systems and then focus on issues raised by Institute of Medicine report in 1999. From this background we classify and categorize error(s) in medical practice, including medication, procedures, diagnosis, and clerical error(s). We also review the potential role of software and technology applications in reducing the rate and nature of

D. Kopec; M. H. Kabir; D. Reinharth; O. Rothschild; J. A. Castiglione

2003-01-01

187

Local and global error estimators and an associated h-based adaptive mesh refinement scheme are proposed for coupled thermal-stress problems. The error estimators are based on the “flux smoothing” technique of Zienkiewicz and Zhu, with important modifications to improve convergence performance and computational efficiency. Adaptive mesh refinement is based on the concept of adaptive accuracy criteria, previously presented by the authors

P. Katragadda; I. R. Grosset

1996-01-01

188

Macroscale water fluxes 1. Quantifying errors in the estimation of basin mean precipitation

… about the precipitation process. Adjustments of precipitation for gauge bias and estimates … and testing of methods for quantifying several errors in basin mean precipitation, both in the long-term mean …

189

Given a prediction rule based on a set of patients, what is the probability of incorrectly predicting the outcome of a new patient? Call this probability the true error. An optimistic estimate is the apparent error, or the proportion of incorrect predictions on the original set of patients, and it is the goal of this article to study estimates of

Gail Gong

1986-01-01

190

A posteriori error estimation and three-dimensional automatic mesh generation

The paper considers two important aspects of finite element adaptive analysis: (i) a posteriori error estimation; (ii) automatic mesh generation. We first present an introduction of the subject of accurate and robust a posteriori error estimation. The problem of 3-D automatic mesh generation is then discussed in some detail.

J. Z. Zhu; O. C. Zienkiewicz

1997-01-01

191

Goal-oriented error estimation based on equilibrated-flux reconstruction for finite element

We propose an approach for goal-oriented error estimation in finite elements based on equilibrated-flux reconstruction, in the cases where a conforming finite element method, a dG method, or a mixed Raviart–Thomas method is used.

Université Paris-Sud XI

192

Mean-square-error bounds for reduced-order linear state estimators

NASA Technical Reports Server (NTRS)

The mean-square error of reduced-order linear state estimators for continuous-time linear systems is investigated. Lower and upper bounds on the minimal mean-square error are presented. The bounds are readily computable at each time-point and at steady state from the solutions to the Ricatti and the Liapunov equations. The usefulness of the error bounds for the analysis and design of reduced-order estimators is illustrated by a practical numerical example.

Baram, Y.; Kalit, G.

1987-01-01

193

An estimate of asthma prevalence in Africa: a systematic analysis

Aim To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to provide information that will help inform the planning of the public health response to the disease. Methods We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year. Results Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers. Conclusion Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be under-estimated. 
There is a need for national governments in Africa to consider the implications of this increasing disease burden and to investigate the relative importance of underlying risk factors such as rising urbanization and population aging in their policy and health planning responses to this challenge. PMID:24382846

Adeloye, Davies; Chan, Kit Yee; Rudan, Igor; Campbell, Harry

2013-01-01

194

Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches

In corporate finance and asset pricing empirical work, researchers are often confronted with panel data. In these data sets, the residuals may be correlated across firms or across time, and OLS standard errors can be biased. Historically, researchers in the two literatures have used different solutions to this problem. This paper examines the different methods used in the literature and

Mitchell A. Petersen

2009-01-01

195

Error Estimating Codes for Insertion and Deletion

… probability than bit flipping errors. Our idEEC design can build upon any existing EEC scheme. The basic idea … the fraction of bits in the packet flipped during the transmission (i.e., the BER). Such codes are in general stronger …

Huang, Jiwei; Lall, Ashwin

196

Identifying the source parameters from a gravitational-wave measurement alone is limited by our ability to discriminate signals from different sources and the accuracy of the waveform family employed in the search. Here we address both issues in the framework of an adapted coordinate system that allows for linear Fisher-matrix type calculations of waveform differences that are both accurate and computationally very efficient. We investigate statistical errors by using principal component analysis of the post-Newtonian (PN) expansion coefficients, which is well conditioned despite the Fisher matrix becoming ill conditioned for larger numbers of parameters. We identify which combinations of physical parameters are most effectively measured by gravitational-wave detectors for systems of neutron stars and black holes with aligned spin. We confirm the expectation that the dominant parameter of the inspiral waveform is the chirp mass. The next dominant parameter depends on a combination of the spin and the symmetric mass ratio. In addition, we can study the systematic effect of various spin contributions to the PN phasing within the same parametrization, showing that the inclusion of spin-orbit corrections up to next-to-leading order, but not necessarily of spin-spin contributions, is crucial for an accurate inspiral waveform model. This understanding of the waveform structure throughout the parameter space is important to set up an efficient search strategy and correctly interpret future gravitational-wave observations.

Frank Ohme; Alex B. Nielsen; Drew Keppel; Andrew Lundgren

2013-04-25

197

The development of reliable multivariate calibration models for spectroscopic instruments in on-line/in-line monitoring of chemical and bio-chemical processes is generally difficult, time-consuming and costly. Therefore, it is preferable if calibration models can be used for an extended period, without the need to replace them. However, in many process applications, changes in the instrumental response (e.g. owing to a change of spectrometer) or variations in the measurement conditions (e.g. a change in temperature) can cause a multivariate calibration model to become invalid. In this contribution, a new method, systematic prediction error correction (SPEC), has been developed to maintain the predictive abilities of multivariate calibration models when e.g. the spectrometer or measurement conditions are altered. The performance of the method has been tested on two NIR data sets (one with changes in instrumental responses, the other with variations in experimental conditions) and the outcomes compared with those of some popular methods, i.e. global PLS, univariate slope and bias correction (SBC) and piecewise direct standardization (PDS). The results show that SPEC achieves satisfactory analyte predictions with significantly lower RMSEP values than global PLS and SBC for both data sets, even when only a few standardization samples are used. Furthermore, SPEC is simple to implement and requires less information than PDS, which offers advantages for applications with limited data. PMID:20944851
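As a point of reference for the comparison above, the univariate slope and bias correction (SBC) baseline can be sketched as follows. The SPEC algorithm itself is not reproduced here, and all instrument numbers are hypothetical: the sketch only shows the standardization idea of fitting a slope and offset on a few transfer samples and applying them to subsequent predictions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical scenario: a calibration model built on instrument A is
# applied on instrument B, whose predictions acquire a systematic slope
# and offset. A few transfer samples with reference values are available.
y_ref = rng.uniform(0.0, 10.0, 6)                        # reference values
y_pred_B = 1.15 * y_ref + 0.8 + rng.normal(0, 0.05, 6)   # biased predictions

# Fit reference vs. predicted on the transfer samples (slope/bias) ...
slope, bias = np.polyfit(y_pred_B, y_ref, 1)

# ... then correct a new prediction from instrument B.
def correct(y_pred):
    return slope * y_pred + bias

new_true = 5.0
new_pred = 1.15 * new_true + 0.8          # what instrument B would predict
rmse_before = abs(new_pred - new_true)
rmse_after = abs(correct(new_pred) - new_true)
```

SBC of this univariate kind can only remove a global slope/offset in the predictions; methods such as PDS or SPEC operate on the spectral responses themselves, which is why they can handle more complex instrument or condition changes.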

Chen, Zeng-Ping; Li, Li-Mei; Yu, Ru-Qin; Littlejohn, David; Nordon, Alison; Morris, Julian; Dann, Alison S; Jeffkins, Paul A; Richardson, Mark D; Stimpson, Sarah L

2011-01-01

198

The objective of this systematic review is to analyse the relative risk reduction on medication error and adverse drug events (ADE) by computerized physician order entry systems (CPOE). We included controlled field studies and pretest-posttest studies, evaluating all types of CPOE systems, drugs and clinical settings. We present the results in evidence tables, calculate the risk ratio with 95% confidence

Elske Ammenwerth; Petra Schnell-Inderst; Christof Machan; Uwe Siebert

2008-01-01

199

Goal-oriented explicit residual-type error estimates in XFEM

NASA Astrophysics Data System (ADS)

A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

2013-08-01

200

HDOS ERROR MESSAGES Part 1. ... Not Capable of This Operation 006 Illegal Format for Device Name 007 Illegal Format for File Name 008 ... 133 Illegal Usage 134 Data Lock Engaged 135 Cant Find Variable Mentioned in NEXT Statement ...

201

Genetic algorithms based robust frequency estimation of sinusoidal signals with stationary errors

Keywords: genetic algorithms; L1-norm estimator; least median estimator; least squares estimator; least trimmed estimator; multiple sinusoidal model; outlier-insensitive criterion. In this paper, we consider the fundamental problem of frequency estimation of multiple sinusoidal signals with stationary errors. We propose a genetic algorithm and outlier-insensitive criterion function based technique for the frequency estimation problem. In the simulation studies and real life data

Amit Mitra; Debasis Kundu

2010-01-01

202

Estimating satellite salinity errors for assimilation of Aquarius and SMOS data into climate models

NASA Astrophysics Data System (ADS)

dynamical systems with new information from ocean measurements, including observations of sea surface salinity (SSS) from Aquarius and SMOS, requires careful consideration of data errors that are used to determine the importance of constraints in the optimization. Here such errors are derived by comparing satellite SSS observations from Aquarius and SMOS with ocean model output and in situ data. The associated data error variance maps have a complex spatial pattern, ranging from less than 0.05 in the open ocean to 1-2 (units of salinity variance) along the coasts and high latitude regions. Comparing the data-model misfits to the data errors indicates that the Aquarius and SMOS constraints could potentially affect estimated SSS values in several ocean regions, including most tropical latitudes. In reference to the Aquarius error budget, derived errors are less than the total allocation errors for the Aquarius mission accuracy requirements in low and midlatitudes, but exceed allocation errors in high latitudes.

Vinogradova, Nadya T.; Ponte, Rui M.; Fukumori, Ichiro; Wang, Ou

2014-08-01

203

Space-Time Error Representation and Estimation in Navier-Stokes Calculations

NASA Technical Reports Server (NTRS)

The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

Barth, Timothy J.

2006-01-01

204

Multiclass Bayes error estimation by a feature space sampling technique

NASA Technical Reports Server (NTRS)

A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported and 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
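The quantity being computed above can be checked by brute force. The sketch below is a Monte Carlo stand-in, not the paper's analytical/numerical integration algorithm: for a 2-class, equal-prior, unit-variance Gaussian problem in one dimension, the Bayes rule picks the nearer mean and the minimum probability of error is Φ(−1) ≈ 0.1587, which the simulated error rate should approach.

```python
import random

rng = random.Random(3)

# Two 1-D Gaussian classes, equal priors, means -1 and +1, unit variance.
mu0, mu1, n = -1.0, 1.0, 200000
errors = 0
for _ in range(n):
    if rng.random() < 0.5:
        x, label = rng.gauss(mu0, 1.0), 0
    else:
        x, label = rng.gauss(mu1, 1.0), 1
    # Bayes rule for this symmetric case: assign to the nearer mean.
    decision = 0 if abs(x - mu0) < abs(x - mu1) else 1
    errors += (decision != label)

bayes_err = errors / n
```

Monte Carlo converges slowly (error ~ n^(-1/2)) and scales poorly with the number of classes and features, which is precisely the motivation for the combined analytical/numerical integration approach the abstract describes.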

Mobasseri, B. G.; Mcgillem, C. D.

1979-01-01

205

A posteriori error estimates for the Johnson–Nédélec FEM-BEM coupling

Only very recently, Sayas [The validity of Johnson–Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM J Numer Anal 2009;47:3451–63] proved that the Johnson–Nédélec one-equation approach from [On the coupling of boundary integral and finite element methods. Math Comput 1980;35:1063–79] provides a stable coupling of finite element method (FEM) and boundary element method (BEM). In our work, we now adapt the analytical results for different a posteriori error estimates developed for the symmetric FEM–BEM coupling to the Johnson–Nédélec coupling. More precisely, we analyze the weighted-residual error estimator, the two-level error estimator, and different versions of (h−h/2)-based error estimators. In numerical experiments, we use these estimators to steer h-adaptive algorithms, and compare the effectivity of the different approaches. PMID:22347772

Aurada, M.; Feischl, M.; Karkulik, M.; Praetorius, D.

2012-01-01

206

We develop a general method to "self-calibrate" observations of galaxy clustering with respect to systematics associated with photometric calibration errors. We first point out the danger posed by the multiplicative effect of calibration errors, where large-angle error propagates to small scales and may be significant even if the large-scale information is cleaned or not used in the cosmological analysis. We then propose a method to measure the arbitrary large-scale calibration errors and use these measurements to correct the small-scale (high-multipole) power which is most useful for constraining the majority of cosmological parameters. We demonstrate the effectiveness of our approach on synthetic examples and briefly discuss how it may be applied to real data.

Shafer, Daniel L

2014-01-01

207

Error Estimates in Horocycle Averages Asymptotics: Challenges from String Theory

There is an intriguing connection between the dynamics of the horocycle flow in the modular surface $SL_{2}(\pmb{Z}) \backslash SL_{2}(\pmb{R})$ and the Riemann hypothesis. It appears in the error term for the asymptotic of the horocycle average of a modular function of rapid decay. We study whether similar results occur for a broader class of modular functions, including functions of polynomial growth, and of exponential growth at the cusp. Hints on their long horocycle average are derived by translating the horocycle flow dynamical problem into string theory language. Results are then proved by designing an unfolding trick involving a Theta series, related to the spectral Eisenstein series by Mellin integral transform. We discuss how the string theory point of view leads to an interesting open question, regarding the behavior of long horocycle averages of a certain class of automorphic forms of exponential growth at the cusp.

Matteo A. Cardella

2010-12-13

208

On-line estimation of error covariance parameters for atmospheric data assimilation

NASA Technical Reports Server (NTRS)

A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters.
These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
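The single-batch maximum-likelihood idea above can be sketched in its simplest scalar form: if innovations are modeled as zero-mean Gaussian with variance sigma_b^2 + sigma_o^2 and the background variance is known, the ML estimate of the observation-error variance has a closed form. All names and values below are illustrative, not taken from the assimilation system the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: innovations d = y - H x_f from one batch of m observations,
# assumed ~ N(0, sigma_b^2 + sigma_o^2) with background variance sigma_b^2 known.
sigma_b2_true, sigma_o2_true, m = 0.5, 1.2, 5000
d = rng.normal(0.0, np.sqrt(sigma_b2_true + sigma_o2_true), size=m)

# Single-sample maximum-likelihood estimate of the observation-error variance:
# for this scalar model the ML solution is available in closed form.
sigma_o2_hat = max(np.mean(d**2) - sigma_b2_true, 0.0)
```

In an adaptive system this estimate would be recomputed from each new batch of innovations, which is what allows time-dependent parameters to track the current state.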

Dee, Dick P.

1995-01-01

209

NASA Technical Reports Server (NTRS)

We consider a posteriori error estimates for finite volume and finite element methods on arbitrary meshes subject to prescribed error functionals. Error estimates of this type are useful in a number of computational settings: (1) quantitative prediction of the numerical solution error, (2) adaptive meshing, and (3) load balancing of work on parallel computing architectures. Our analysis recasts the class of Godunov finite volume schemes as a particular form of discontinuous Galerkin method utilizing broken space approximation obtained via reconstruction of cell-averaged data. In this general framework, weighted residual error bounds are readily obtained using duality arguments and Galerkin orthogonality. Additional consideration is given to issues such as nonlinearity, efficiency, and the relationship to other existing methods. Numerical examples are given throughout the talk to demonstrate the sharpness of the estimates and efficiency of the techniques. Additional information is contained in the original.

Barth, Timothy J.; Larson, Mats G.

2000-01-01

210

Error propagation and scaling for tropical forest biomass estimates

The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is

Jerome Chave; Richard Condit; Salomon Aguilar; Andres Hernandez; Suzanne Lao; Rolando Perez

2004-01-01

211

Model Selection and Error Estimation

VC dimension, empirical VC entropy, and margin-based quantities; regularization, structural risk minimization, and data-dependent penalties. We also consider the maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo

Bartlett, Peter L.

212

The Laser Atmospheric Wind Sounder (LAWS) Preliminary Error Budget and Performance Estimate

NASA Technical Reports Server (NTRS)

The Laser Atmospheric Wind Sounder (LAWS) study phase has resulted in a preliminary error budget and an estimate of the instrument performance. This paper will present the line-of-sight (LOS) Velocity Measurement Error Budget, the instrument Boresight Error Budget, and the predicted signal-to-noise ratio (SNR) performance. The measurement requirements and a preliminary design for the LAWS instrument are presented in a companion paper.

Kenyon, David L.; Anderson, Kent

1992-01-01

213

A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

NASA Technical Reports Server (NTRS)

This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

Larson, Mats G.; Barth, Timothy J.

1999-01-01

214

about software review and estimation of a cottontail rabbit population. Some key words: Capture–recapture; Closed population estimation; Gibbs sampling; Software review; Variable capture probability. BAYESIAN CAPTURE–RECAPTURE METHODS FOR ERROR DETECTION AND ESTIMATION OF POPULATION SIZE

Basu, Sanjib

215

Real-time bounded-error pose estimation for road vehicles using vision

This paper is about online, constant-time pose estimation for road vehicles. We exploit both the state of the art in vision based SLAM and the wide availability of overhead imagery of road networks. We show that by formulating the pose estimation problem in a relative sense, we can estimate the vehicle pose in real-time and bound its absolute error by

Ashley Napier; Gabe Sibley; Paul Newman

2010-01-01

216

A method, based on the bootstrap procedure, is proposed for the estimation of branch-length errors and confidence intervals in a phylogenetic tree for which equal rates of substitution among lineages do not necessarily hold. The method can be used to test whether an estimated internodal distance is significantly greater than zero. In the application of the method, any estimator of

Joaquin Dopazo

1994-01-01

217

We describe a bootstrap method for estimating mean squared error and smoothing parameter in nonparametric problems. The method involves using a resample of smaller size than the original sample. There are many applications, which are illustrated using the special cases of nonparametric density estimation, nonparametric regression, and tail parameter estimation.
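The smaller-resample idea can be illustrated with a simple hypothetical case: estimating the mean squared error of a sample median by resampling m < n points and rescaling under an assumed root-n convergence rate. This is an illustrative sketch, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=500)   # original sample, true median = 0

n, m, B = len(x), 100, 2000          # resample size m < n, B bootstrap draws
theta_hat = np.median(x)

# m-out-of-n bootstrap: resample m < n points with replacement and use the
# spread of the resampled medians around theta_hat as an MSE estimate at scale m.
boot = np.array([np.median(rng.choice(x, size=m, replace=True)) for _ in range(B)])
mse_m = np.mean((boot - theta_hat) ** 2)

# Rescale to the original sample size n, assuming root-n convergence.
mse_n = mse_m * (m / n)
```

The rescaling step is where the assumed convergence rate enters; for other estimators (e.g. tail parameters) a different rate, and hence a different rescaling, would apply.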

Peter Hall

1990-01-01

218

Round-Robin Analysis of Social Interaction: Exact and Estimated Standard Errors.

ERIC Educational Resources Information Center

The Social Relations model of D. A. Kenny estimates variances and covariances from a round-robin of two-person interactions. This paper presents a matrix formulation of the Social Relations model, using the formulation to derive exact and estimated standard errors for round-robin estimates of Social Relations parameters. (SLD)

Bond, Charles F., Jr.; Lashley, Brian R.

1996-01-01

219

SUMMARY In this paper, we first present a consistent procedure to establish influence functions for the finite element analysis of shell structures, where the influence function can be for any linear quantity of engineering interest. We then design some goal-oriented error measures that take into account the cancellation effect of errors over the domain to overcome the issue of over-estimation.

Thomas Grätsch; Klaus-Jürgen Bathe

2005-01-01

220

ERROR ESTIMATES FOR FINITE DIFFERENCE METHODS FOR A WIDE-ANGLE `PARABOLIC' EQUATION

Keywords: wide-angle `parabolic' equation, underwater acoustics, finite difference error estimates, interface … propagation that may interact strongly with the bottom layers. The most widely known such equation

Akrivis, Georgios

221

Errors and parameter estimation in precipitation-runoff modeling 2. Case study.

A case study is presented which illustrates some of the error analysis, sensitivity analysis, and parameter estimation procedures reviewed in the first part of this paper. It is shown that those procedures, most of which come from statistical nonlinear regression theory, are invaluable in interpreting errors in precipitation-runoff modeling and in identifying appropriate calibration strategies. -Author

Troutman, B. M.

1985-01-01

222

Fast motion estimation based on adaptive search range adjustment and matching error prediction

This work presents fast motion estimation (ME) by using both an adaptive search range adjustment and a matching error prediction. The basic idea of the proposed scheme is based on adjusting a given search range adaptively and predicting block matching errors effectively. The adaptive search range adjustment is first performed by analyzing the contents of a scene. Next, the

Sangkeun Lee

2009-01-01

223

Sensitivity of freezing time estimation methods to heat transfer coefficient error

Numerous semi-analytical\\/empirical methods for predicting food freezing times have been proposed, all of which require knowledge of the surface heat transfer coefficient. The empirical nature of the surface heat transfer coefficient can introduce significant error in the freezing time calculation. Therefore, a sensitivity analysis was performed on various freezing time estimation methods to determine the impact of errors in the

Brian A. Fricke; Bryan R. Becker

2006-01-01

224

Quantifying the error in estimated transfer functions with application to model order selection

Previous results on estimating errors or error bounds on identified transfer functions have relied on prior assumptions about the noise and the unmodeled dynamics. This prior information took the form of parameterized bounding functions or parameterized probability density functions, in the time or frequency domain with known parameters. It is shown that the parameters that quantify this prior information can

Graham C. Goodwin; Michel Gevers; Brett Ninness

1992-01-01

225

Estimation of the Mutation Rate during Error-prone Polymerase Chain Reaction

Error-prone polymerase chain reaction (PCR) is widely used to introduce point mutations during in vitro evolution. A key step of in vitro evolution is mutagenesis. Error-prone polymerase chain reaction (PCR) (Leung et al

Sun, Fengzhu

226

NASA Technical Reports Server (NTRS)

Sources of noise and error correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed and techniques for reducing or eliminating this distortion are described.

Harwit, M.

1977-01-01

227

Refined error estimates for matrix-valued radial basis functions

Refined Error Estimates for Matrix-valued Radial Basis Functions. (May 2006) Edward J. Fuselier, Jr., B.S., Southeastern Louisiana University. Co-Chairs of Advisory Committee: Dr. Francis Narcowich, Dr. Joe Ward. The journal model is Advances in Computational Mathematics. Radial basis functions (RBFs) are probably best known... from a very large class of continuous linear functionals. In particular, they can interpolate derivative and integral data at any point, and therefore can be used to solve partial differential...

Fuselier, Edward J., Jr.

2007-09-17

228

Estimation of measuring error in digital dc fluxmeters

In high-permeability materials, the voltage waveforms induced in a search coil are pulse-like. To determine the sampling conditions for a digital dc fluxmeter, the accuracy of digital integration in such cases has been examined. The accuracy was estimated by means of computer simulations using the following pulse-like waveforms: rectangular, triangular, and half-sinusoidal ones, which idealized actual voltage

Takaaki Yamamoto; Shunji Takada; Tadashi Sasaki

1994-01-01

229

Fading MIMO Relay Channels with Channel Estimation Error

Bengi Aygün, Alkan Soysal. Department …, aygun@bahcesehir.edu.tr, alkan.soysal@bahcesehir.edu.tr. Abstract: In this paper, we consider a full-duplex, decode-and

Soysal, Alkan

230

Lucy, D.; Pollard, A.M. Title: Further comments on the estimation of error with the Gustafson dental age estimation method

Journal: Journal of Forensic Sciences. Date: 1995. Volume: 40(2). Abstract: Many researchers in the field of forensic odontology have questioned the error

Lucy, David

231

A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...

Locatelli, R.

232

Robust Standard Errors in Transformed Likelihood Estimation of Dynamic Panel Models

This paper extends the transformed maximum likelihood approach for estimation of dynamic panel data models by Hsiao, Pesaran, and Tahmiscioglu (2002) to the case where the errors are cross-sectionally heteroskedastic. This extension is not trivial...

Hayakawa, Kazuhiko; Pesaran, M. Hashem

2012-05-09

233

Spatio-temporal Error on the Discharge Estimates for the SWOT Mission

NASA Astrophysics Data System (ADS)

The Surface Water and Ocean Topography (SWOT) mission measures two key quantities over rivers: water surface elevation and slope. Water surface elevation from SWOT will have a vertical accuracy, when averaged over approximately one square kilometer, on the order of centimeters. Over reaches from 1-10 km long, SWOT slope measurements will be accurate to microradians. Elevation (depth) and slope offer the potential to produce discharge as a derived quantity. Estimates of instantaneous and temporally integrated discharge from SWOT data will also contain a certain degree of error. Two primary sources of measurement error exist. The first is the temporal sub-sampling of water elevations. For example, SWOT will sample some locations twice in the 21-day repeat cycle. If these two overpasses occurred during flood stage, an estimate of monthly discharge based on these observations would be much higher than the true value. Likewise, if estimating maximum or minimum monthly discharge, in some cases, SWOT may miss those events completely. The second source of measurement error results from the instrument's capability to accurately measure the magnitude of the water surface elevation. How this error affects discharge estimates depends on errors in the model used to derive discharge from water surface elevation. We present a global distribution of estimated relative errors in mean annual discharge based on a power law relationship between stage and discharge. Additionally, relative errors in integrated and average instantaneous monthly discharge associated with temporal sub-sampling over the proposed orbital tracks are presented for several river basins.
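The power-law relationship mentioned above can be made concrete with first-order error propagation: for a rating curve Q = a·h^b, a relative depth error of σ_h/h produces a relative discharge error of roughly b·σ_h/h. The coefficients below are hypothetical, not SWOT mission values.

```python
# First-order error propagation through a power-law rating curve Q = a * h**b:
# a relative depth error sigma_h/h maps to a relative discharge error of
# b * sigma_h/h. All numbers are illustrative.
a, b = 5.0, 1.7          # hypothetical rating-curve coefficients
h, sigma_h = 4.0, 0.10   # depth (m) and depth measurement error (m)

Q = a * h**b
rel_err_Q = b * sigma_h / h   # ~= sigma_Q / Q to first order
```

This first-order figure captures only the elevation-accuracy error source; the temporal sub-sampling error discussed in the abstract must be assessed separately against the orbit's revisit pattern.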

Biancamaria, S.; Alsdorf, D. E.; Andreadis, K. M.; Clark, E.; Durand, M.; Lettenmaier, D. P.; Mognard, N. M.; Oudin, Y.; Rodriguez, E.

2008-12-01

234

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

2006-10-01

235

The best reconstructions of the history of life will use both molecular time estimates and fossil data. Errors in molecular rate estimation typically are unaccounted for and no attempts have been made to quantify this uncertainty comprehensively. Here, focus is primarily on fossil calibration error because this error is least well understood and nearly universally disregarded. Our quantification of errors

Marcel van Tuinen; Elizabeth A. Hadly

2004-01-01

236

Estimated generalized least squares in spatially misaligned regression models with Berkson error.

In environmental studies, relationships among variables that are misaligned in space are routinely assessed. Because the data are misaligned, kriging is often used to predict the covariate at the locations where the response is observed. Using kriging predictions to estimate regression parameters in linear regression models introduces a Berkson error, which induces a covariance structure that is challenging to estimate. In addition, if the parameters associated with kriging (e.g. trend surface parameters and spatial covariance parameters) are estimated, then an additional uncertainty is introduced. We characterize the total measurement error as part of a broader class of Berkson error models and develop an estimated generalized least squares estimator using estimated covariance parameters. In working with the induced model, we fully account for the error structure and estimate the covariance parameters using likelihood-based methods. We provide insight into when it is important to fully account for the covariance structure induced from the different error sources. We assess the performance of the estimators using simulation and illustrate the methodology using publicly available data from the US Environmental Protection Agency. PMID:23568241

Lopiano, Kenneth K; Young, Linda J; Gotway, Carol A

2013-09-01

237

We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of error is proportional to N_h^{-1/2}, which are optimal asymptotics. The methodology is verified with numerical experiments.

Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [UNIV DE LYON]; Vassilevski, Yuri [Los Alamos National Laboratory]

2009-01-01

238

We have constructed a multiple-bounce reflectometer similar to the one designed by Kelsall. Systematic errors present in the reflectivities measured on our multiple-bounce reflectometer have been significantly reduced for high-reflectivity mirrors. This reduction came about through careful study of all components in our system. The diffraction effects of the apertures in the reflectometer were studied with the aid of a computer program that calculated the radial intensity distribution due to the Fresnel diffraction from each aperture. The systematic errors arising from the attempt to measure a concave spherical mirror were studied and minimized. A second computer program calculated the systematic error introduced by pathlength changes as the number of bounces in the reflectometer increased. The replacement of the sample thermopile detector with a thin-film bolometer along with other equipment changes has improved the absolute accuracy to 0.001 for the measured reflectivities with a demonstrated precision of 0.0003. Previous measurements had been about 0.008 low compared to measurements on the V-W pass reflectometer at China Lake, California. PMID:20125563

Wetzel, M G; Saito, T T; Patterson, S R

1973-07-01

239

For targeted radionuclide therapy, the level of activity to be administered is often determined from whole-body dosimetry performed on a pre-therapy tracer study. The largest potential source of error in this method is due to inconsistent or inaccurate activity retention measurements. The main aim of this study was to develop a simple method to quantify the uncertainty in the absorbed dose due to these inaccuracies. A secondary aim was to assess the effect of error propagation from the results of the tracer study to predictive absorbed dose estimates for the therapy as a result of using different radionuclides for each. Standard error analysis was applied to the MIRD schema for absorbed dose calculations. An equation was derived to describe the uncertainty in the absorbed dose estimate due solely to random errors in activity-time data, requiring only these data as input. Two illustrative examples are given. It is also shown that any errors present in the dosimetry calculations following the tracer study will propagate to errors in predictions made for the therapy study according to the ratio of the respective effective half-lives. If the therapy isotope has a much longer physical half-life than the tracer isotope (as is the case, for example, when using 123I as a tracer for 131I therapy) the propagation of errors can be significant. The equations derived provide a simple means to estimate two potentially large sources of error in whole-body absorbed dose calculations. PMID:12361219
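The half-life ratio mentioned at the end of the abstract can be sketched numerically. With 1/T_eff = 1/T_phys + 1/T_bio, the relative sensitivity of T_eff to the measured biological half-life is T_eff/T_bio, so an error measured with a short-lived tracer is amplified for a longer-lived therapy isotope by roughly the ratio of effective half-lives. The physical half-lives below are standard values for 123I and 131I; the biological half-life is a hypothetical patient value.

```python
# Sketch of the half-life error-propagation point in the abstract: an error in
# the biological half-life measured with a tracer isotope propagates to the
# therapy-isotope dose estimate scaled by the ratio of effective half-lives.
Tp_tracer, Tp_therapy = 13.2, 192.5   # 123I and 131I physical half-lives (hours)
Tb = 80.0                             # hypothetical biological half-life (hours)

def t_eff(tp, tb):
    # Effective half-life: 1/T_eff = 1/T_phys + 1/T_bio
    return 1.0 / (1.0 / tp + 1.0 / tb)

# Relative sensitivity of T_eff to Tb is d ln(T_eff) / d ln(Tb) = T_eff / Tb,
# so the tracer-to-therapy amplification is the ratio of effective half-lives.
amplification = t_eff(Tp_therapy, Tb) / t_eff(Tp_tracer, Tb)
```

For these values the amplification is roughly five, illustrating why the abstract flags the 123I/131I pairing as a case where error propagation can be significant.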

Flux, Glenn D; Guy, Matthew J; Beddows, Ruth; Pryor, Matthew; Flower, Maggie A

2002-09-01

240

NASA Astrophysics Data System (ADS)

For targeted radionuclide therapy, the level of activity to be administered is often determined from whole-body dosimetry performed on a pre-therapy tracer study. The largest potential source of error in this method is due to inconsistent or inaccurate activity retention measurements. The main aim of this study was to develop a simple method to quantify the uncertainty in the absorbed dose due to these inaccuracies. A secondary aim was to assess the effect of error propagation from the results of the tracer study to predictive absorbed dose estimates for the therapy as a result of using different radionuclides for each. Standard error analysis was applied to the MIRD schema for absorbed dose calculations. An equation was derived to describe the uncertainty in the absorbed dose estimate due solely to random errors in activity-time data, requiring only these data as input. Two illustrative examples are given. It is also shown that any errors present in the dosimetry calculations following the tracer study will propagate to errors in predictions made for the therapy study according to the ratio of the respective effective half-lives. If the therapy isotope has a much longer physical half-life than the tracer isotope (as is the case, for example, when using 123I as a tracer for 131I therapy) the propagation of errors can be significant. The equations derived provide a simple means to estimate two potentially large sources of error in whole-body absorbed dose calculations.

Flux, Glenn D.; Guy, Matthew J.; Beddows, Ruth; Pryor, Matthew; Flower, Maggie A.

2002-09-01

241

We treat decision-directed channel tracking (DDCT) in mobile orthogonal frequency-division multiplexing (OFDM) systems as an outlier-contaminated Gaussian regression problem, where the sources of outliers are the incorrect symbol decisions. Existing decision-directed estimators such as the expectation-maximization (EM)-based estimators and the 2-D-MMSE estimator do not appropriately downweight incorrect/poor decisions while defining the channel estimator, and hence suffer from error propagation
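The downweighting the abstract calls for can be illustrated with a generic robust alternative: iteratively reweighted least squares with Huber weights, which assigns small weight to residuals from wrong decisions. This is a standard robust-regression sketch, not the estimator proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy scalar channel y = h * x + noise, with a few wrong symbol decisions
# acting as outliers in the regression onto the decided symbols x_dec.
n, h_true = 200, 2.0
x = rng.choice([-1.0, 1.0], size=n)             # transmitted symbols
y = h_true * x + rng.normal(0.0, 0.1, size=n)
x_dec = x.copy()
x_dec[:10] *= -1.0                              # 10 incorrect decisions (outliers)

h = np.sum(y * x_dec) / np.sum(x_dec**2)        # plain LS estimate (biased by outliers)
c = 0.3                                         # Huber tuning constant
for _ in range(20):
    r = y - h * x_dec
    w = np.where(np.abs(r) <= c, 1.0, c / np.maximum(np.abs(r), 1e-12))  # Huber weights
    h = np.sum(w * y * x_dec) / np.sum(w * x_dec**2)
```

Residuals at wrong decisions are large (about 2·h), so they receive weights well below one and barely perturb the final channel estimate.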

Sheetal Kalyani; Krishnamurthy Giridhar

2007-01-01

242

A function space approach to state and model error estimation for elliptic systems

NASA Technical Reports Server (NTRS)

An approach is advanced for the concurrent estimation of the state and of the model errors of a system described by elliptic equations. The estimates are obtained by a deterministic least-squares approach that seeks to minimize a quadratic functional of the model errors, or equivalently, to find the vector of smallest norm subject to linear constraints in a suitably defined function space. The minimum norm solution can be obtained by solving either a Fredholm integral equation of the second kind for the case with continuously distributed data or a related matrix equation for the problem with discretely located measurements. Solution of either one of these equations is obtained in a batch-processing mode in which all of the data is processed simultaneously or, in certain restricted geometries, in a spatially scanning mode in which the data is processed recursively. After the methods for computation of the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the corresponding estimation error is conducted. Based on this analysis, explicit expressions for the mean-square estimation error associated with both the state and model error estimates are then developed.

Rodriguez, G.

1983-01-01

243

We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical relativity based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criteria based on minimizing these errors; we find that the dominant source of errors are those in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational-wave searches.

Santamaria, L.; Ohme, F.; Dorband, N.; Moesta, P.; Robinson, E. L.; Krishnan, B. [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Am Muehlenberg 1, D-14476 Golm (Germany)]; Ajith, P. [LIGO Laboratory, California Institute of Technology, Pasadena, California 91125 (United States); Theoretical Astrophysics, California Institute of Technology, Pasadena, California 91125 (United States)]; Bruegmann, B. [Theoretisch-Physikalisches Institut, Friedrich Schiller Universitaet Jena, Max-Wien-Platz 1, 07743 Jena (Germany)]; Hannam, M. [Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Vienna (Austria)]; Husa, S.; Pollney, D. [Departament de Fisica, Universitat de les Illes Balears, Carretera Valldemossa km 7.5, E-07122 Palma (Spain)]; Reisswig, C. [Theoretical Astrophysics, California Institute of Technology, Pasadena, California 91125 (United States)]; Seiler, J. [NASA Goddard Space Flight Center, Greenbelt, Maryland 20771 (United States)]

2010-09-15

244

Solving large tomographic linear systems: size reduction and error estimation

NASA Astrophysics Data System (ADS)

We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
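The projection-and-threshold step described above can be sketched on a small dense stand-in for one clustered subset of the system; the matrix sizes, noise level, and SNR threshold below are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in for one cluster of a tomographic system: project the
# data onto the singular vectors of the subset and keep only components whose
# signal-to-noise ratio exceeds a threshold.
A = rng.normal(size=(200, 50))        # one clustered subset of matrix rows
x_true = rng.normal(size=50)
sigma = 0.1                           # assumed data noise level
d = A @ x_true + rng.normal(0.0, sigma, size=200)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
proj = U.T @ d                        # data projected onto the singular subspace

# Each projected component carries noise of standard deviation sigma,
# so its SNR is |proj| / sigma; reject components below the threshold.
snr = np.abs(proj) / sigma
keep = snr > 1.0

A_reduced = np.diag(s[keep]) @ Vt[keep]   # reduced system
d_reduced = proj[keep]
```

The reduced system has at most as many rows as the subset has columns, which is where the memory savings reported in the abstract come from.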

Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust

2014-10-01

245

A function space approach to state and model error estimation for elliptic systems

NASA Technical Reports Server (NTRS)

An approach is advanced for the concurrent estimation of the state and of the model errors of a system described by elliptic equations. The estimates are obtained by a deterministic least-squares approach that seeks to minimize a quadratic functional of the model errors, or equivalently, to find the vector of smallest norm subject to linear constraints in a suitably defined function space. The minimum norm solution can be obtained by solving either a Fredholm integral equation of the second kind for the case with continuously distributed data or a related matrix equation for the problem with discretely located measurements. Solution of either one of these equations is obtained in a batch-processing mode in which all of the data is processed simultaneously or, in certain restricted geometries, in a spatially scanning mode in which the data is processed recursively. After the methods for computation of the optimal estimates are developed, an analysis of the second-order statistics of the estimates and of the corresponding estimation error is conducted. Based on this analysis, explicit expressions for the mean-square estimation error associated with both the state and model error estimates are then developed. While this paper focuses on theoretical developments, applications arising in the area of large structure static shape determination are contained in a closely related paper (Rodriguez and Scheid, 1982).
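In a finite-dimensional setting, the minimum-norm vector subject to linear constraints can be sketched with the Moore-Penrose pseudoinverse (a toy analogue of the function-space problem; the function name is hypothetical):

```python
import numpy as np

def min_norm_model_error(A, b):
    # Minimum-norm vector v satisfying A @ v = b, via the pseudoinverse.
    # Among all solutions of the (underdetermined) constraint, pinv picks
    # the one of smallest Euclidean norm.
    return np.linalg.pinv(A) @ b
```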

Rodriguez, G.

1983-01-01

246

NASA Astrophysics Data System (ADS)

A visually servoed paired structured light system (ViSP) has been found to be useful in estimating 6-DOF relative displacement. The system is composed of two screens facing each other, each with one or two lasers, a 2-DOF manipulator and a camera. The displacement between the two sides is estimated by observing the positions of the projected laser beams and the rotation angles of the manipulators. To apply the system to massive structures, the whole area should be partitioned and a ViSP module placed in each partition in a cascaded manner. The estimated displacement between adjoining ViSPs is combined with the next partition so that the entire movement of the structure can be estimated. The multiple ViSPs, however, suffer from a major problem: the error is propagated through the partitions. Therefore, a displacement estimation error back-propagation (DEEP) method is proposed, which uses a Newton-Raphson or gradient descent formulation inspired by the error back-propagation algorithm. In this method, the estimated displacement from the ViSP is updated using the error back-propagated from a fixed position. To validate the performance of the proposed method, various simulations and experiments have been performed. The results show that the proposed method significantly reduces the propagation error throughout the multiple modules.
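The gradient-descent flavor of the back-propagation idea can be sketched in a hypothetical one-dimensional simplification (scalar displacement per module, known displacement at the fixed end; this is not the authors' 6-DOF formulation):

```python
import numpy as np

def deep_correct(d_est, d_end_true, lr=0.1, iters=500):
    # Adjust per-module displacement estimates by gradient descent so that
    # their sum matches the known displacement at the fixed reference position.
    d = np.asarray(d_est, dtype=float).copy()
    for _ in range(iters):
        err = d.sum() - d_end_true    # residual at the fixed end
        d -= lr * err / len(d)        # back-propagate an equal share to each module
    return d
```

Because each module receives an equal correction share, the end-point residual decays geometrically while the relative pattern of the estimates is preserved.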

Jeon, H.; Shin, J. U.; Myung, H.

2013-04-01

247

natural DSDs, study precipitation microphysics, and verify radar rain estimation. However, the effects of disdrometer measurement and model errors on the estimates of DSD moments, and of other radar or rain integral parameters, need to be quantified when retrieving DSD parameters from multi-parameter radar measurements. In this paper, simulated and observed rain DSDs are used

Zhang, Guifu

248

Efficient Small Area Estimation in the Presence of Measurement Error in Covariates

for this purpose. In this dissertation, each project describes a model used for small area estimation in which the covariates are measured with error. We applied different methods of bias correction to improve the estimates of the parameter of interest in the small...

Singh, Trijya

2012-10-19

249

Refined Error Estimates for the Riccati Equation with Applications to the Angular Teukolsky Equation

We derive refined rigorous error estimates for approximate solutions of Sturm-Liouville and Riccati equations with real or complex potentials. The approximate solutions include WKB approximations, Airy and parabolic cylinder functions, and certain Bessel functions. Our estimates are applied to solutions of the angular Teukolsky equation with a complex aspherical parameter in a rotating black hole Kerr geometry.

Felix Finster; Joel Smoller

2013-07-24

250

Error estimation of a neuro-fuzzy predictor for prognostic purpose

is necessary: it starts from monitoring data and goes through provisional reliability and remaining useful life of the evolving eXtended Takagi-Sugeno system as a neuro-fuzzy predictor. A method to estimate the probability

Paris-Sud XI, UniversitÃ© de

251

State-space framework for estimating measurement error from double-tagging telemetry experiments

telemetry technologies used with free-ranging animals. 2. We developed a state-space modelling framework for estimating the precision of telemetry location data based on double-tagging experiments. The model

Costa, Daniel P.

252

Haplotypes are an important concept for genetic association studies, but involve uncertainty due to statistical reconstruction from single nucleotide polymorphism (SNP) genotypes and genotype error. We developed a re-sampling approach to quantify haplotype misclassification probabilities and implemented the MC-SIMEX approach to tackle this as a 3 x 3 misclassification problem. Using a previously published approach as a benchmark for comparison, we evaluated the performance of our approach by simulations and exemplified it on real data from 15 SNPs of the APM1 gene. Misclassification due to reconstruction error was small for most, but notable for some, especially rarer haplotypes. Genotype error added misclassification to all haplotypes resulting in a non-negligible drop in sensitivity. In our real data example, the bias of association estimates due to reconstruction error alone reached -48.2% for a 1% genotype error, indicating that haplotype misclassification should not be ignored if high genotype error can be expected. Our 3 x 3 misclassification view of haplotype error adds a novel perspective to currently used methods based on genotype intensities and expected number of haplotype copies. Our findings give a sense of the impact of haplotype error under realistic scenarios and underscore the importance of high-quality genotyping, in which case the bias in haplotype association estimates is negligible. PMID:20649529

Lamina, Claudia; Küchenhoff, Helmut; Chang-Claude, Jenny; Paulweber, Bernhard; Wichmann, H-Erich; Illig, Thomas; Hoehe, Margret R; Kronenberg, Florian; Heid, Iris M

2010-09-01

253

Estimating event rates in the presence of dating error with an application to lunar impacts

NASA Astrophysics Data System (ADS)

Radiometric ages of objects are often used to reconstruct historical variations in the rate function of geological events. Measurement error in such ages can lead to a bias in the estimated rate function. This paper describes a method for estimating the historical rate function that accounts for measurement error. The method is applied to the estimation of the rate of lunar impacts over the past 3.5 billion years from the argon-argon ages of 155 impact spherules. A simulation study of the performance of the method is also presented.

Solow, Andrew R.

2002-05-01

254

NASA Astrophysics Data System (ADS)

Field estimates of plume degradation rates λ [T-1] in aquifers provide a basis for assessing the possible impact of (toxic) organic pollutants on downstream environments; however, difficulties with measurement and methodology mean that estimated site-specific rates potentially involve considerable uncertainty. Here, we specifically show that if mass flow or average concentration measurements are associated with errors of ~20% (or even less), the errors may in many cases propagate, magnify, and cause order-of-magnitude errors in estimates of λ. We also investigate uncertainties in the integral pumping test method, in which average concentrations are determined based on concentrations measured from pumping wells. In this method, the small-scale variability that may bias the results of point measurements can be favorably averaged out when pumping; however, the novelty of the approach means that questions remain regarding its application. For example, the magnitude of methodological errors, such as those associated with the assumption of constant concentration along the flow direction within the extent of the well capture zone, remain poorly understood. This assumption can be violated by the biodegradation of contaminants, thereby leading to a bias in subsequent interpretations. We provide an analytical expression from which the prediction error due to attenuating concentrations can be evaluated and show its dependence on the degradation rate, the degradation function, and the extent of the well capture zone. Even for considerable first-order degradation, the mass flow error generally remains small and is not magnified in estimates of λ.

Jarsjö, Jerker; Bayer-Raich, Martí

2008-02-01

255

Detecting Identity by Descent and Estimating Genotype Error Rates in Sequence Data

Existing methods for identity by descent (IBD) segment detection were designed for SNP array data, not sequence data. Sequence data have a much higher density of genetic variants and a different allele frequency distribution, and can have higher genotype error rates. Consequently, best practices for IBD detection in SNP array data do not necessarily carry over to sequence data. We present a method, IBDseq, for detecting IBD segments in sequence data and a method, SEQERR, for estimating genotype error rates at low-frequency variants by using detected IBD. The IBDseq method estimates probabilities of genotypes observed with error for each pair of individuals under IBD and non-IBD models. The ratio of estimated probabilities under the two models gives a LOD score for IBD. We evaluate several IBD detection methods that are fast enough for application to sequence data (IBDseq, Beagle Refined IBD, PLINK, and GERMLINE) under multiple parameter settings, and we show that IBDseq achieves high power and accuracy for IBD detection in sequence data. The SEQERR method estimates genotype error rates by comparing observed and expected rates of pairs of homozygote and heterozygote genotypes at low-frequency variants in IBD segments. We demonstrate the accuracy of SEQERR in simulated data, and we apply the method to estimate genotype error rates in sequence data from the UK10K and 1000 Genomes projects. PMID:24207118
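The LOD score described above, a ratio of estimated probabilities under the IBD and non-IBD models, can be sketched as a bare log10 likelihood-ratio sum (the actual IBDseq genotype-error models are not reproduced here; inputs are assumed per-marker probabilities):

```python
import math

def ibd_lod(p_ibd, p_non_ibd):
    # Sum of per-marker log10 likelihood ratios: positive values favor IBD.
    return sum(math.log10(a / b) for a, b in zip(p_ibd, p_non_ibd))
```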

Browning, Brian L.; Browning, Sharon R.

2013-01-01

256

This systematic review identifies the factors that both support and deter patients from being willing and able to participate actively in reducing clinical errors. Specifically, we add to our understanding of the safety culture in healthcare by engaging with the call for more focus on the relational and subjective factors which enable patients' participation (Iedema, Jorm, & Lum, 2009; Ovretveit, 2009). A systematic search of six databases, ten journals and seven healthcare organisations' web sites resulted in the identification of 2714 studies of which 68 were included in the review. These studies investigated initiatives involving patients in safety or studies of patients' perspectives of being actively involved in the safety of their care. The factors explored varied considerably depending on the scope, setting and context of the study. Using thematic analysis we synthesized the data to build an explanation of why, when and how patients are likely to engage actively in helping to reduce clinical errors. The findings show that the main factors for engaging patients in their own safety can be summarised in four categories: illness; individual cognitive characteristics; the clinician-patient relationship; and organisational factors. We conclude that illness and patients' perceptions of their role and status as subordinate to that of clinicians are the most important barriers to their involvement in error reduction. In sum, patients' fear of being labelled "difficult" and a consequent desire for clinicians' approbation may cause them to assume a passive role as a means of actively protecting their personal safety. PMID:22541799

Doherty, Carole; Stavropoulou, Charitini

2012-07-01

257

Contents excerpt: 4. Leave-one-out Error Estimator; B. Second-order Double Asymptotic Approximation; 1. Second-order Approximations; 2. Actual Classification Error.

Zollanvari, Amin

2012-02-14

258

Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries

NASA Technical Reports Server (NTRS)

This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three dimensional examples using meshes with 10(exp 6) to 10(exp 7) cells.
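The tau-extrapolation idea, evaluating the coarse-grid residual of the restricted fine-grid solution, can be sketched in one dimension (a minimal illustration assuming a uniform grid and the model operator L u = -u''; the function names are hypothetical):

```python
import numpy as np

def residual_1d(u, f, h):
    # Residual f - L_h u at interior points for L_h u = -u'' (central differences).
    return f[1:-1] + (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

def tau_estimate(u_fine, f_fine, h):
    # Restrict the fine-grid solution and right-hand side by injection, then
    # evaluate the coarse-grid residual there; in smooth regions this
    # approximates the local truncation error.
    return residual_1d(u_fine[::2], f_fine[::2], 2.0 * h)
```

For a quadratic solution the central difference is exact, so the estimated truncation error vanishes, as expected.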

Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

2002-01-01

259

Systematic study of error sources in supersonic skin-friction balance measurements

NASA Technical Reports Server (NTRS)

An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.

Allen, J. M.

1976-01-01

260

Background Medication administration errors are frequent and lead to patient harm. Interruptions during medication administration have been implicated as a potential contributory factor. Objective To assess evidence of the effectiveness of interventions aimed at reducing interruptions during medication administration on interruption and medication administration error rates. Methods In September 2012 we searched MEDLINE, EMBASE, CINAHL, PsycINFO, Cochrane Effective Practice and Organisation of Care Group reviews, Google and Google Scholar, and hand searched references of included articles. Intervention studies reporting quantitative data based on direct observations of at least one outcome (interruptions, or medication administration errors) were included. Results Ten studies, eight from North America and two from Europe, met the inclusion criteria. Five measured significant changes in interruption rates pre and post interventions. Four found a significant reduction and one an increase. Three studies measured changes in medication administration error rates and showed reductions, but all implemented multiple interventions beyond those targeted at reducing interruptions. No study used a controlled design pre and post. Definitions for key outcome indicators were reported in only four studies. Only one study reported κ scores for inter-rater reliability and none of the multi-ward studies accounted for clustering in their analyses. Conclusions There is weak evidence of the effectiveness of interventions to significantly reduce interruption rates and very limited evidence of their effectiveness to reduce medication administration errors. Policy makers should proceed with great caution in implementing such interventions until controlled trials confirm their value. Research is also required to better understand the complex relationship between interruptions and error to support intervention design. PMID:23980188

Raban, Magdalena Z; Westbrook, Johanna I

2014-01-01

261

Computation of the factorized error covariance of the difference between correlated estimators

NASA Technical Reports Server (NTRS)

A state estimation problem where some of the measurements may be common to two or more data sets is considered. Two approaches for computing the error covariance of the difference between filtered estimates (for each data set) are discussed. The first algorithm is based on postprocessing of the Kalman gain profiles of two correlated estimators. It uses UD factors of the covariance of the relative error. The second algorithm uses a square root information filter applied to relative error analysis. In the absence of process noise, the square root information filter is computationally more efficient and more flexible than the Kalman gain (covariance update) method. Both the algorithms (covariance and information matrix based) are applied to a Venus orbiter simulation, and their performances are compared.
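The quantity being factorized, the error covariance of the difference between two correlated estimates, follows a standard identity that can be sketched directly (the paper's contribution, UD factors and the square root information filter formulation, is not reproduced here):

```python
import numpy as np

def relative_error_cov(P1, P2, P12):
    # Cov(e1 - e2) = P1 + P2 - P12 - P12.T, where P1 and P2 are the two
    # estimators' error covariances and P12 is their cross-covariance.
    return P1 + P2 - P12 - P12.T
```

Ignoring the cross term P12 (i.e., treating correlated estimators as independent) overstates the relative error, which is why the correlation must be tracked.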

Wolff, Peter J.; Mohan, Srinivas N.; Stienon, Francis M.; Bierman, Gerald J.

1990-01-01

262

Errors in ultrasonic scatterer size estimates due to phase and amplitude aberration

NASA Astrophysics Data System (ADS)

Current ultrasonic scatterer size estimation methods assume that acoustic propagation is free of distortion due to large-scale variations in medium attenuation and sound speed. However, it has been demonstrated that under certain conditions in medical applications, medium inhomogeneities can cause significant field aberrations that lead to B-mode image artifacts. These same aberrations may be responsible for errors in size estimates and parametric images of scatterer size. This work derives theoretical expressions for the error in backscatter coefficient and size estimates as a function of statistical parameters that quantify phase and amplitude aberration, assuming a Gaussian spatial autocorrelation function. Results exhibit agreement with simulations for the limited region of parameter space considered. For large values of aberration decorrelation lengths relative to aberration standard deviations, phase aberration errors appear to be minimal, while amplitude aberration errors remain significant. Implications of the results for accurate backscatter and size estimation are discussed. In particular, backscatter filters are suggested as a method for error correction. Limitations of the theory are also addressed. The approach, approximations, and assumptions used in the derivation are most appropriate when the aberrating structures are relatively large, and the region containing the inhomogeneities is offset from the insonifying transducer.

Gerig, Anthony; Zagzebski, James

2004-06-01

263

NASA Astrophysics Data System (ADS)

In MRI-guided diffuse optical tomography of human brain function, a three-dimensional anatomical head model consisting of up to five segmented tissue types can be specified. Setting aside misclassification between different tissues, uncertainty in the optical properties of each tissue type becomes the dominant cause of systematic error in image reconstruction. In this study we present a quantitative evaluation of image resolution dependence on such uncertainty. Our results show that given a head model which provides a realistic description of its tissue optical property distribution, high-density diffuse optical tomography with cortically constrained image reconstruction is capable of detecting focal activation up to 21.81 mm below the human scalp at an imaging quality better than or equal to 1.0 cm in localization error and 1.0 cm3 in FVHM with a tolerance of uncertainty in tissue optical properties between +15% and -20%.

Zhan, Yuxuan; Eggebrecht, Adam; Dehghani, Hamid; Culver, Joseph

2011-02-01

264

Estimation of bias errors in measured airplane responses using maximum likelihood method

NASA Technical Reports Server (NTRS)

A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of an airplane is represented by six-degrees-of-freedom kinematic equations. In these equations the input variables are replaced by their measured values, which are assumed to be free of random errors. The resulting algorithm is verified with a simulation and flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained by using a nonlinear fixed-interval smoother and an extended Kalman filter.
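For the special case of a constant additive bias and Gaussian noise, the maximum likelihood estimate reduces to the mean residual, which can be sketched as follows (a toy illustration of the principle, not the six-degrees-of-freedom formulation of the paper):

```python
import numpy as np

def ml_constant_bias(measured, predicted):
    # Under i.i.d. Gaussian noise, minimizing sum((measured - predicted - b)^2)
    # over a constant bias b gives the mean residual as the ML estimate.
    return float(np.mean(np.asarray(measured) - np.asarray(predicted)))
```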

Klein, Vladiaslav; Morgan, Dan R.

1987-01-01

265

NASA Astrophysics Data System (ADS)

Inexpensive devices to measure solar UV irradiance are available to monitor atmospheric ozone, for example, total ozone portable spectroradiometers (TOPS instruments). A procedure to convert these measurements into ozone estimates is examined. For well-characterized filters with 7-nm FWHM bandpasses, the method provides ozone values (from 304- and 310-nm channels) with less than 0.4% error attributable to inversion of the theoretical model. Analysis of sensitivity to model assumptions and parameters yields estimates of 3% bias in total ozone results with dependence on total ozone and path length. Unmodeled effects of atmospheric constituents and instrument components can result in additional 2% errors.

Flynn, Lawrence E.; Labow, Gordon J.; Beach, Robert A.; Rawlins, Michael A.; Flittner, David E.

1996-10-01

266

ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
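The gradient-recovery principle behind ZZ-type estimators can be sketched for 1-D P1 finite elements (an illustrative finite element analogue, not the boundary element estimators analyzed in the paper; function names are hypothetical):

```python
import numpy as np

def zz_indicators(x, u):
    # Recover a nodal gradient by averaging neighbouring element gradients,
    # then use the mismatch between recovered and raw element gradients
    # as a per-element error indicator.
    du = np.diff(u) / np.diff(x)            # piecewise-constant P1 gradients
    g = np.empty_like(u, dtype=float)       # recovered nodal gradient
    g[1:-1] = 0.5 * (du[:-1] + du[1:])
    g[0], g[-1] = du[0], du[-1]
    g_mid = 0.5 * (g[:-1] + g[1:])          # recovered gradient at element midpoints
    return np.abs(g_mid - du) * np.sqrt(np.diff(x))
```

For a globally linear function the raw and recovered gradients coincide, so all indicators vanish; curvature produces positive indicators that can drive mesh refinement.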

Feischl, Michael; Fuhrer, Thomas; Karkulik, Michael; Praetorius, Dirk

2014-01-01

267

Systematic Errors in Stereo PIV When Imaging through a Glass Window

NASA Technical Reports Server (NTRS)

This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window and where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent performing a lengthier volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.

Green, Richard; McAlister, Kenneth W.

2004-01-01

268

NASA Astrophysics Data System (ADS)

Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises of a neural network and four statistical techniques: the Nonlinear Autoregressive with eXogenous input (NARX) network, Gamma test, cross-validation, Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purpose. The results demonstrate that the NARX network (Root mean square error (RMSE): 95.11 µg l-1 for training; 106.13 µg l-1 for validation) outperforms the BPNN (RMSE: 121.54 µg l-1 for training; 143.37 µg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 µg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. 
The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the estimation of missing, hazardous or costly data to facilitate water resources management.

Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min

2013-08-01

269

NASA Technical Reports Server (NTRS)

Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors under increasing traffic density on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kts at current day air traffic density with no additional separation distance buffer and at eight times the current day density with no more than a 60% increase in separation distance buffer.

Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

2009-01-01

270

BACKGROUND: Estimates of divergence dates between species improve our understanding of processes ranging from nucleotide substitution to speciation. Such estimates are frequently based on molecular genetic differences between species; therefore, they rely on accurate estimates of the number of such differences (i.e. substitutions per site, measured as branch length on phylogenies). We used simulations to determine the effects of dataset

Rachel S Schwartz; Rachel L Mueller

2010-01-01

271

Estimation of aboveground biomass, and associated carbon storage, of forested areas has been one of the key applications for airborne lidar remote sensing. While numerous regression-based analyses have shown the capability of lidar data for these purposes, estimates of total aboveground biomass for entire landscapes have been less frequent. To create useful estimates for policy purposes, confidence intervals for

K. Sherrill; M. Lefsky; J. Battles; K. Waring; P. Gonzalez

2006-01-01

272

NASA Astrophysics Data System (ADS)

Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that the identification and correction of short-term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal to decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to the source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of bias advent, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears in a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru. 
The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of an equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows little improvement. The strategy proposed in this study is a further step toward moving from the current ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.

Vannière, Benoît; Guilyardi, Eric; Toniazzo, Thomas; Madec, Gurvan; Woolnough, Steve

2014-10-01

273

NASA Astrophysics Data System (ADS)

In this paper we derive a posteriori error estimates for the compositional model of multiphase Darcy flow in porous media, consisting of a system of strongly coupled nonlinear unsteady partial differential and algebraic equations. We show how to control the dual norm of the residual augmented by a nonconformity evaluation term by fully computable estimators. We then decompose the estimators into space, time, linearization, and algebraic error components. This allows us to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver when the corresponding error components no longer affect the overall error significantly. Moreover, the spatial and temporal error components can be balanced by time-step and space-mesh adaptation. Our analysis applies to a broad class of standard numerical methods and is independent of the linearization and of the iterative algebraic solvers employed. We exemplify it for the two-point finite volume method with fully implicit Euler time stepping, Newton linearization, and the GMRES algebraic solver. Numerical results on two real-life reservoir engineering examples confirm that significant computational gains can be achieved thanks to our adaptive stopping criteria, already on fixed meshes, without any noticeable loss of precision.
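The stopping-criterion idea can be illustrated with a minimal sketch (this is an illustration of the balancing principle, not the paper's estimators: the algebraic component is stood in for by the residual norm, and the spatial estimator is simply a given number):

```python
import numpy as np

def solve_with_adaptive_stopping(A, b, eta_space, gamma=0.1, max_iter=500):
    """Iterate a simple (Jacobi) linear solver, stopping once the computable
    algebraic error estimator (here the residual norm) falls below a fraction
    gamma of the spatial discretization estimator eta_space, i.e. once the
    algebraic component no longer contributes significantly to overall error."""
    x = np.zeros_like(b, dtype=float)
    D = np.diag(np.diag(A))
    for k in range(max_iter):
        r = b - A @ x
        eta_alg = np.linalg.norm(r)
        if eta_alg <= gamma * eta_space:
            return x, k, eta_alg
        x = x + np.linalg.solve(D, r)  # Jacobi update
    return x, max_iter, np.linalg.norm(b - A @ x)

# Toy diagonally dominant system standing in for a discretized flow problem.
n = 10
A = 2.0 * np.eye(n) - 0.5 * np.eye(n, k=1) - 0.5 * np.eye(n, k=-1)
b = np.ones(n)
x, iters, eta_alg = solve_with_adaptive_stopping(A, b, eta_space=1e-3)
```

The solver exits as soon as further iterations can no longer improve the total error estimate, which is the source of the computational gains the authors report.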

Di Pietro, Daniele A.; Flauraud, Eric; Vohralík, Martin; Yousef, Soleiman

2014-11-01

274

On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)

NASA Astrophysics Data System (ADS)

Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an ongoing problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°×2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°×2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate.
Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
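The qualitative point that averaging pulls scatter toward the 1:1 line can be reproduced with a small synthetic sketch (entirely illustrative, not from the presentation): rain is made spatially coherent while retrieval error is independent from cell to cell, so aggregation damps the noise faster than the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic fine-scale field: rain systems coherent over ~50 cells, mostly dry;
# multiplicative retrieval error is independent from cell to cell.
n_sys, scale = 800, 50
sys_rate = rng.lognormal(0.0, 1.0, n_sys) * (rng.random(n_sys) < 0.3)
truth = np.repeat(sys_rate, scale)
sat = truth * rng.lognormal(0.0, 0.7, truth.size)  # error-corrupted estimate

def corr_at_aggregation(box):
    """Correlation of the two fields after block-averaging over `box` cells."""
    m = truth.size - truth.size % box
    t = truth[:m].reshape(-1, box).mean(axis=1)
    s = sat[:m].reshape(-1, box).mean(axis=1)
    return float(np.corrcoef(t, s)[0, 1])

r_fine, r_coarse = corr_at_aggregation(1), corr_at_aggregation(200)
```

Because the error here is independent cell to cell while the rain is not, `r_coarse` exceeds `r_fine`; correlated retrieval error, as the abstract notes, would break exactly this behavior.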

Huffman, G. J.

2013-12-01

275

Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

NASA Astrophysics Data System (ADS)

Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, so efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g.
ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
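The effect of temporally correlated random error on a multi-year mean can be sketched with the standard AR(1) variance-of-the-mean formula (a generic illustration of why correlation inflates uncertainty, not the authors' exact error model):

```python
import numpy as np

def mean_sigma(sigma, n=10, rho=0.0):
    """1-sigma uncertainty of an n-year mean when annual errors of size
    sigma follow an AR(1) correlation structure with lag-1 coefficient rho.
    rho = 0 recovers sigma / sqrt(n); rho = 1 gives sigma (no averaging gain)."""
    k = np.arange(1, n)
    var = sigma**2 / n * (1.0 + 2.0 * np.sum((1.0 - k / n) * rho**k))
    return float(np.sqrt(var))

uncorrelated = mean_sigma(1.0, n=10, rho=0.0)  # sigma / sqrt(10)
correlated = mean_sigma(1.0, n=10, rho=0.9)    # substantially larger
```

Treating correlated reporting errors as independent would therefore understate decadal-mean uncertainty, which is why the correlation structure matters for the fossil fuel term.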

Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

2014-10-01

276

Estimating genotype error rates from high-coverage next-generation sequence data.

Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods. PMID:25304867
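The "lower bound" logic behind replicate sequencing can be sketched as follows (a schematic illustration with made-up genotype lists, not the authors' pipeline): each discordant genotype between two replicates implies at least one wrong call, so half the discordance rate bounds the per-replicate error rate from below.

```python
def discordance_lower_bound(calls_a, calls_b):
    """Lower bound on the per-replicate genotype error rate from two
    replicate call sets; sites missing in either replicate are skipped."""
    shared = [(a, b) for a, b in zip(calls_a, calls_b)
              if a is not None and b is not None]
    discordant = sum(1 for a, b in shared if a != b)
    return discordant / (2 * len(shared))

# Toy replicate call sets over eight sites (None = no call made)
rep1 = ["AA", "AG", "GG", "AG", None, "CC", "CT", "TT"]
rep2 = ["AA", "GG", "GG", "AG", "AA", "CC", "CC", "TT"]
rate = discordance_lower_bound(rep1, rep2)  # 2 discordant of 7 shared sites
```

This is a lower bound because errors that hit both replicates identically, or that fall at no-call sites, are invisible to the comparison.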

Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

2014-11-01

277

Error-entropy based channel state estimation of spatially correlated MIMO-OFDM

This paper deals with optimized training sequences to estimate multiple-input multiple-output orthogonal frequency-division multiplexing (MIMO-OFDM) channel states in the presence of spatial fading correlations. The optimization criterion is the entropy minimization of the error between the high-dimensional, correlated channel state and its estimator. The globally optimized training sequences are exactly solved by a semi-definite programming (SDP)…

H. D. Tuan; H. H. Kha; H. H. Nguyen

2011-01-01

278

NASA Astrophysics Data System (ADS)

The authors have proposed a basic framework and a method to evaluate the uncertainty involved in the spatial variability of geotechnical parameters and the resulting statistical estimation error in geotechnical reliability analysis. The soil profile is modeled by a stationary random field, which is a simplification and idealization of the real ground. The local average at an arbitrary point in the random field is estimated from limited samples obtained from the same field. Two estimation methods are proposed, namely general estimation and local estimation: the relative positions of the samplings and of the structures are not considered in the former, whereas they are considered in the latter. The theory is, of course, an idealization and simplification of the real problem. It is verified based on intensive cone penetration tests at three sites. The verification is essentially a cross-check which compares the estimated estimation variance with true values calculated from the intensive CPT measurements.

Honjo, Yusuke; Otake, Yu

279

We develop a maximum-likelihood (ML) algorithm for estimation and correction (autofocus) of phase errors induced in synthetic-aperture-radar (SAR) imagery. Here, M pulse vectors in the range-compressed domain are used as input for simultaneously estimating M-1 phase values across the aperture. The solution involves an eigenvector of the sample covariance matrix of the range-compressed data. The estimator is then used within the basic structure of the phase gradient autofocus (PGA) algorithm, replacing the original phase-estimation kernel. We show that, in practice, the new algorithm provides excellent restorations to defocused SAR imagery, typically in only one or two iterations. The performance of the new phase estimator is demonstrated essentially to achieve the Cramér-Rao lower bound on estimation-error variance for all but very small values of target-to-clutter ratio. We also show that for the case in which M is equal to 2, the ML estimator is similar to that of the original PGA method but achieves better results in practice, owing to a bias inherent in the original PGA phase-estimation kernel. Finally, we discuss the relationship of these algorithms to the shear-averaging and spatial-correlation methods, two other phase-correction techniques that utilize the same phase-estimation kernel but that produce substantially poorer performance because they do not employ several fundamental signal-processing steps that are critical to the algorithms of the PGA class.

Jakowatz, C.V. Jr.; Wahl, D.E. (Sandia National Laboratories, Albuquerque, New Mexico 87815 (United States))

1993-12-01

280

In this paper we describe a method for the estimation of global errors. A heuristic condition of validity of the method is given, and several applications are described in detail for problems of ordinary differential equations with either initial or two-point boundary conditions solved by finite difference formulas. The main idea of the method can be extended to…

Pedro E. Zadunaisky

1976-01-01

281

This paper studies error propagation and parameter sensitivity based on a torque estimation model for induction machines. The model is based on the equation describing the interaction of rotor flux linkage and rotor currents. Contrary to classical schemes for induction motor control, this is an open-loop scheme; however, the model still requires various machine parameters. Therefore the parameter sensitivity…

C. Bastiaensen; W. Deprez; W. Symens; J. Driesen

2006-01-01

282

Estimation Error Guarantees for Poisson Denoising with Sparse and Structured Dictionary Models

…detector in an array may be modeled using Poisson-distributed random variables, with unknown rates… (Akshay Soni and Jarvis Haupt, Department of Electrical and Computer Engineering, University of Minnesota)

Haupt, Jarvis

283

Bayesian-based hypothesis testing for topology error identification in generalized state estimation

This paper develops a Bayesian-based hypothesis testing procedure to be applied in conjunction with topology error processing via normalized Lagrange multipliers. As an advantage over previous methods, the proposed approach eliminates the need of repeated state estimator runs for alternative hypothesis evaluation. The identification process assumes that the set of switching devices is partitioned into suspect and true subsets. A

Elizete Maria Lourenço; Antonio Simões Costa; Kevin A. Clements

2004-01-01

284

A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

ERIC Educational Resources Information Center

Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

2011-01-01

285

Estimation of the Phase-Error Function for Autofocussing of SAR Raw Data

…in the last years. The most advanced techniques are derived from the Phase-Gradient Algorithm (PGA) [1]… available in the illuminated scene, the PGA algorithm uses so-called strong reflectors instead, each of which consists of many joint point targets. Hence the phase-error function estimated by PGA…

286

Accurate Bit-Error-Rate Estimation for Efficient High Speed I/O Testing

…test by eliminating the conventional BER measurement process. We will discuss two different versions… reliability for most multi-gigabit communication standards, is test-time prohibitive [1]. In this paper, we… (Dongwoo Hong and Kwang…)

287

ERIC Educational Resources Information Center

The purpose of this study was to investigate the relative appropriateness of several procedures for estimating reliability and standard errors of measurement of complex reading comprehension tests. Seven generalizability theory models were conceptualized by incorporating one or several factors of items, passages, themes, contents, and types of…

Lee, Guemin

288

Weighted least-squares estimation of phase errors for SAR/ISAR autofocus

A new method of phase error estimation that utilizes the weighted least-squares (WLS) algorithm is presented for synthetic aperture radar (SAR)\\/inverse SAR (ISAR) autofocus applications. The method does not require that the signal in each range bin be of a certain distribution model, and thus it is robust for many kinds of scene content. The most attractive attribute of the

Wei Ye; Tat Soon Yeo; Zheng Bao

1999-01-01

289

Self-learning Cooperative Transmission - Coping with Mobility, Channel Estimation Errors… chain topology. In this paper, we propose a low-overhead self-learning cooperative transmission scheme… For cooperative transmission in wireless networks, a very challenging problem is to combine information from…

Sun, Yan Lindsay

290

Development of measures to estimate truncation error in fault tree analysis

The fault tree quantification uncertainty from the truncation error has been of great concern for the reliability evaluation of large fault trees in the probabilistic safety analysis (PSA) of nuclear plants. The truncation limit is used to truncate cut sets of the gates when quantifying the fault trees. This paper presents measures to estimate the probability of the truncated cut

Woo Sik Jung; Joon-Eon Yang; Jaejoo Ha

2005-01-01

291

Seasonal prediction with error estimation of the Columbia River streamflow in British Columbia

Seasonal prediction with error estimation of the Columbia River streamflow in British Columbia. William W. Hsieh, Yuval, Jingyang Li, Department of Earth and Ocean Sciences, University of British Columbia… …August Columbia River streamflow at Donald, British Columbia, Canada. Using predictors up to the end of November…

Hsieh, William

293

Interval Estimation for True Raw and Scale Scores under the Binomial Error Model

ERIC Educational Resources Information Center

Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…

Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.

2006-01-01

294

Discriminant Analysis. Amin Zollanvari, Ulisses M. Braga-Neto, and Edward R. Dougherty. …% of which were published in journals having an impact factor larger than 6, and report that 50% of them are flawed. This state of affairs can be attributed in part to the misuse of error estimation techniques…

Braga-Neto, Ulisses

295

In this work we study the pollution-error in the h-version of the finite element method and its effect on the local quality of a-posteriori error estimators. We show that the pollution-effect in an interior subdomain depends on the relationship...

Mathur, Anuj

2012-06-07

296

Error and bias in under-5 mortality estimates derived from birth histories with small sample sizes

Background Estimates of under-5 mortality at the national level for countries without high-quality vital registration systems are routinely derived from birth history data in censuses and surveys. Subnational or stratified analyses of under-5 mortality could also be valuable, but the usefulness of under-5 mortality estimates derived from birth histories from relatively small samples of women is not known. We aim to assess the magnitude and direction of error that can be expected for estimates derived from birth histories with small samples of women using various analysis methods. Methods We perform a data-based simulation study using Demographic and Health Surveys. Surveys are treated as populations with known under-5 mortality, and samples of women are drawn from each population to mimic surveys with small sample sizes. A variety of methods for analyzing complete birth histories and one method for analyzing summary birth histories are used on these samples, and the results are compared to corresponding true under-5 mortality. We quantify the expected magnitude and direction of error by calculating the mean error, mean relative error, mean absolute error, and mean absolute relative error. Results All methods are prone to high levels of error at the smallest sample size with no method performing better than 73% error on average when the sample contains 10 women. There is a high degree of variation in performance between the methods at each sample size, with methods that contain considerable pooling of information generally performing better overall. Additional stratified analyses suggest that performance varies for most methods according to the true level of mortality and the time prior to survey. This is particularly true of the summary birth history method as well as complete birth history methods that contain considerable pooling of information across time. 
Conclusions Performance of all birth history analysis methods is extremely poor when used on very small samples of women, both in terms of magnitude of expected error and bias in the estimates. Even with larger samples there is no clear best method to choose for analyzing birth history data. The methods that perform best overall are the same methods where performance is noticeably different at different levels of mortality and lengths of time prior to survey. At the same time, methods that perform more uniformly across levels of mortality and lengths of time prior to survey also tend to be among the worst performing overall. PMID:23885746
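The four accuracy summaries named in the Methods are straightforward to compute; a minimal sketch (illustrative variable names and toy numbers, not the study's code):

```python
import numpy as np

def error_summary(est, truth):
    """Mean error, mean relative error, mean absolute error, and mean
    absolute relative error of estimates against known true values."""
    est = np.asarray(est, dtype=float)
    truth = np.asarray(truth, dtype=float)
    err = est - truth
    rel = err / truth
    return {
        "mean_error": err.mean(),
        "mean_rel_error": rel.mean(),
        "mean_abs_error": np.abs(err).mean(),
        "mean_abs_rel_error": np.abs(rel).mean(),
    }

# e.g. two simulated under-5 mortality estimates (per 1000) vs. truth
summary = error_summary([55.0, 45.0], [50.0, 50.0])
```

The contrast between the signed and absolute versions is what separates bias (systematic over- or under-estimation) from sheer magnitude of error in the study's comparisons.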

2013-01-01

297

Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients. PMID:23565739

2013-01-01

298

The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

Nash, Ulrik W.

2014-01-01

299

Assessing Ensemble Filter Estimates of the Analysis Error Distribution of the Day

NASA Astrophysics Data System (ADS)

Ensemble data assimilation algorithms (e.g., the Ensemble Kalman Filter) are often purported to return an estimate of the "analysis error distribution of the day"; a measure of the variability in the analysis that is consistent with the current state of the system. In this presentation, we demonstrate that in the presence of non-linearity this is not, in fact, the case. The true error distribution of the day given today's observations consists of the Bayesian posterior PDF formed via the conjunction of the prior forecast error distribution with the likelihood error distribution constructed from the observations of the day. In actuality, ensemble data assimilation algorithms return an estimate of the analysis error integrated over all prior realizations of the observations of the day. The result is consistent with the true posterior analysis uncertainty (as returned by a solution to Bayes) if the likelihood distribution produced by the observations of the day is approximately equal to the likelihood distribution integrated over all possible observations (or equivalently innovations).
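The distinction can be seen in a scalar toy problem (a generic illustration, not from the presentation): with a linear observation operator the Bayesian posterior spread is identical for every realization of the observation, so "integrated over observations" and "of the day" coincide, while a nonlinear operator makes the spread depend on the observation actually received.

```python
import numpy as np

def posterior_sd(y_obs, obs_fn, prior_mu=0.0, prior_sd=1.0, obs_sd=0.3):
    """Posterior standard deviation of x given one observation
    y = obs_fn(x) + N(0, obs_sd^2), computed by brute-force gridding."""
    x = np.linspace(prior_mu - 6 * prior_sd, prior_mu + 6 * prior_sd, 4001)
    log_post = (-0.5 * ((x - prior_mu) / prior_sd) ** 2
                - 0.5 * ((y_obs - obs_fn(x)) / obs_sd) ** 2)
    w = np.exp(log_post - log_post.max())
    w /= w.sum()                       # normalized posterior weights
    mu = np.sum(w * x)
    return float(np.sqrt(np.sum(w * (x - mu) ** 2)))

sd_lin_a = posterior_sd(0.5, lambda x: x)       # linear H: same spread
sd_lin_b = posterior_sd(1.5, lambda x: x)       # for any observed value
sd_nl_a = posterior_sd(0.25, lambda x: x ** 2)  # nonlinear H: spread depends
sd_nl_b = posterior_sd(2.25, lambda x: x ** 2)  # on the observation of the day
```

In the nonlinear case the posterior is bimodal and its width changes with the observed value, which is exactly the "of the day" variability that an observation-averaged ensemble estimate cannot capture.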

Posselt, D. J.; Hodyss, D.; Bishop, C. H.

2013-12-01

300

Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials

A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by and has applications to the analysis of linear wave equations in the geometry of a rotating black hole.

Felix Finster; Joel Smoller

2008-07-28

301

Sensitivity of Satellite Rainfall Estimates Using a Multidimensional Error Stochastic Model

NASA Astrophysics Data System (ADS)

Error propagation models of satellite precipitation fields are a key element in the response and performance of hydrological models, which depend on the reliability and availability of rainfall data. However, most of these models treat the error as a unidimensional measurement, with no consideration of the type of process involved. The limitations of unidimensional error propagation models were overcome by multidimensional stochastic error propagation models. In this study, SREM2D (A Two-Dimensional Satellite Rainfall Error Model) was used to simulate satellite precipitation fields by inverse calibration of its parameters against reference data, in this case rain gauge data. The sensitivity of satellite rainfall estimates from different satellite-based algorithms was investigated for hydrologic simulation over the Tocantins basin, a transition area between the Amazon basin and the relatively drier northeast region, using the SREM2D error propagation model. Preliminary results show that SREM2D has the potential to generate realistic ensembles of satellite rain fields to feed hydrologic models. Ongoing research is focused on the impact of rainfall ensembles simulated by SREM2D on hydrologic modeling using the Model of Large Basins of the National Institute for Space Research (MGB-INPE), developed for Brazilian basins.

Falck, A. S.; Vila, D. A.; Tomasella, J.

2011-12-01

302

Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
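The first step, the coarse least-squares grid search, can be sketched for a toy one-pole response (illustrative only; the network's actual responses have many poles and zeros, and the parameter names here are made up):

```python
import numpy as np

rng = np.random.default_rng(2)

f = np.logspace(-2, 1, 200)  # calibration frequencies (Hz)
fc_true = 0.8                # "unknown" corner frequency to recover

def one_pole(fc):
    """Single-pole response evaluated along the frequency axis."""
    return 1.0 / (1.0 + 1j * f / fc)

# Noisy calibration measurement of the response
H_meas = one_pole(fc_true) + 0.01 * (rng.normal(size=f.size)
                                     + 1j * rng.normal(size=f.size))

# Coarse least-squares grid search: a cheap first approximation that
# guards against local-minimum solutions before iterative refinement.
grid = np.linspace(0.1, 2.0, 191)
misfit = np.array([np.sum(np.abs(H_meas - one_pole(fc)) ** 2) for fc in grid])
fc_hat = float(grid[np.argmin(misfit)])
```

In the authors' procedure this coarse solution then seeds the nonlinear least-squares fit, whose per-frequency residuals in turn feed the confidence-interval estimation.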

Ringler, A. T.; Hutt, C. R.; Aster, R.; Bolton, H.; Gee, L. S.; Storm, T.

2012-01-01

303

Errors in visual estimation of flexion contractures during total knee arthroplasty

AIM: To quantify and reduce the errors in visual estimation of knee flexion contractures during total knee arthroplasty (TKA). METHODS: This study was divided into two parts: Quantification of error and reduction of error. To quantify error, 3 orthopedic surgeons visually estimated preoperative knee flexion contractures from lateral digital images of 23 patients prior to and after surgical draping. A repeated-measure analysis of variance was used to compare the estimated angles prior to and following the placement of the surgical drapes with the true knee angle measured with a long-arm goniometer. In an effort to reduce the error of visual estimation, a dual set of inclinometers was developed to improve intra-operative measurement of knee flexion contracture during TKA. A single surgeon performed 6 knee extension measurements with the device during 146 consecutive TKA cases. Three measurements were taken with the desired tibial liner trial thickness, and 3 were taken with a trial that was 2 mm thicker. An intraclass correlation coefficient (ICC) was calculated to assess the test-retest reliability for the 3 measurements taken with the desired liner thickness, and a paired t test was used to determine if the knee extension measurements differed when a thicker tibial trial liner was placed. RESULTS: The surgeons significantly overestimated flexion contractures in 23 TKAs prior to draping and significantly underestimated the contractures after draping (actual knee angle = 6.1° ± 6.4°, pre-drape estimate = 6.9° ± 6.8°, post-drape estimate = 4.3° ± 6.1°, P = 0.003). Following the development and application of the measurement devices, the measurements were highly reliable (ICC = 0.98), and the device indicated that 2.7° ± 2.2° of knee extension was lost with the insertion of a 2 mm thicker tibial liner. The device failed to detect a difference in knee extension angle with the insertion of the 2 mm thicker liner in 9/146 cases (6.2%). 
CONCLUSION: We determined the amount of error associated with visual estimation of knee flexion contractures, and developed a simple, reliable device and method to improve feedback related to sagittal alignment during TKA. PMID:23878779

Jacobs, Cale A; Christensen, Christian P; Hester, Peter W; Burandt, David M; Sciascia, Aaron D

2013-01-01

304

Background In epidemiological studies, it is often not possible to measure participants' exposures accurately even when their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement error: in evaluating the effect of exposure on the response, each individual in a group/job within the study sample is assigned the sample mean of the exposure measurements from that group. Exposure is therefore estimated at an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from ‘large’ samples. In many cases, however, only a small number of observations are available to estimate the group means, and this biases the observed exposure-disease association. Moreover, the analysis in a semi-ecological design may involve exposure data that are mostly missing, with the rest observed with measurement error, alongside complete response data. Methods In workplaces, groups/jobs are naturally ordered, and this ordering can be incorporated into the estimation procedure through constrained estimation methods combined with expectation-maximization (EM) algorithms for regression models with measurement error and missing values. Four methods were compared in a simulation study: naive complete-case analysis, GBS, constrained GBS (CGBS), and constrained expectation-maximization (CEM). We illustrate the methods in an analysis of decline in lung function due to exposure to carbon black. Results The naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to estimate the group means accurately. The CEM method appears to be the best among them when, within each exposure group, at least a ‘moderate’ number of individuals have their exposures observed with error. 
However, compared with CEM, CGBS is easier to implement and has more desirable bias-reducing properties in the presence of substantial proportions of missing exposure data. Conclusion The CGBS approach could be useful for estimating exposure-disease associations in semi-ecological studies when the true group means are ordered and the number of measured exposures in each group is small. These findings have important implications for the cost-effective design of semi-ecological studies, because they enable investigators to estimate exposure-disease associations more reliably with a smaller exposure measurement campaign than the analytical methods that were historically employed would allow. PMID:22947254

2012-01-01

305

A variational method for finite element stress recovery and error estimation

NASA Technical Reports Server (NTRS)

A variational method for obtaining smoothed stresses from a finite element derived nonsmooth stress field is presented. The method is based on minimizing a functional involving discrete least-squares error plus a penalty constraint that ensures smoothness of the stress field. An equivalent accuracy criterion is developed for the smoothing analysis which results in a C1-continuous smoothed stress field possessing the same order of accuracy as that found at the superconvergent optimal stress points of the original finite element analysis. Application of the smoothing analysis to residual error estimation is also demonstrated.
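[Editor's note] The penalized discrete least-squares idea can be illustrated in one dimension: minimize the squared distance to the raw nodal stresses plus a penalty on roughness. The following numpy sketch is an illustrative assumption, not the paper's C1 finite element formulation — the second-difference roughness penalty, the lambda value, and the linear test field are all invented for demonstration.

```python
import numpy as np

def smooth_field(raw, lam):
    """Penalized discrete least squares: min ||s - raw||^2 + lam * ||D2 s||^2."""
    n = raw.size
    d2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference (roughness) operator
    a = np.eye(n) + lam * d2.T @ d2        # normal equations of the functional
    return np.linalg.solve(a, raw)

# noisy nodal samples of a linear stress field
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
truth = 3.0 * x
raw = truth + rng.normal(0.0, 0.1, x.size)
smoothed = smooth_field(raw, lam=50.0)
```

Because the penalty annihilates linear fields, the smoothed stresses track the underlying field with smaller RMS error than the raw values, which is the qualitative behavior the variational method exploits.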

Tessler, A.; Riggs, H. R.; Macy, S. C.

1993-01-01

306

Estimation of the generalization ability of a classification or regression model is an important issue, as it indicates the expected performance on previously unseen data and is also used for model selection. Currently used generalization error estimation procedures, such as cross-validation (CV) or bootstrap, are stochastic and, thus, require multiple repetitions in order to produce reliable results, which can be computationally expensive, if not prohibitive. The correntropy-inspired density-preserving sampling (DPS) procedure proposed in this paper eliminates the need for repeating the error estimation procedure by dividing the available data into subsets that are guaranteed to be representative of the input dataset. This allows the production of low-variance error estimates with an accuracy comparable to 10 times repeated CV at a fraction of the computations required by CV. This method can also be used for model ranking and selection. This paper derives the DPS procedure and investigates its usability and performance using a set of public benchmark datasets and standard classifiers. PMID:24808204

Budka, Marcin; Gabrys, Bogdan

2013-01-01

307

We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs and type I error rates, as well as acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times. PMID:20213705
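[Editor's note] The cluster-bootstrap resampling scheme itself is generic: resample whole clusters with replacement and recompute the estimate. The sketch below is an illustrative assumption, using the sample mean in place of a Cox fit, and simulated random-effects data; the function name is invented.

```python
import numpy as np

def cluster_bootstrap_se(values, clusters, stat, n_boot=500, seed=0):
    """SE of `stat` when observations are correlated within clusters."""
    rng = np.random.default_rng(seed)
    ids = np.unique(clusters)
    reps = []
    for _ in range(n_boot):
        chosen = rng.choice(ids, size=ids.size, replace=True)  # resample whole clusters
        sample = np.concatenate([values[clusters == c] for c in chosen])
        reps.append(stat(sample))
    return float(np.std(reps, ddof=1))

# clustered data: 50 clusters of 20, with a latent cluster-level random effect
rng = np.random.default_rng(1)
clusters = np.repeat(np.arange(50), 20)
values = rng.normal(0.0, 1.0, 50)[clusters] + rng.normal(0.0, 0.5, 1000)

se_cluster = cluster_bootstrap_se(values, clusters, np.mean)
se_naive = float(values.std(ddof=1) / np.sqrt(values.size))  # ignores clustering
```

In this simulation se_cluster substantially exceeds se_naive, illustrating the variance under-estimation that motivates the paper's corrections.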

Xiao, Yongling; Abrahamowicz, Michal

2010-03-30

308

Mass load estimation errors utilizing grab sampling strategies in a karst watershed

Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling combined with continuous flow records to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
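[Editor's note] The diurnal-timing effect is easy to demonstrate: with a sinusoidal daily concentration cycle, a daily grab taken at the cycle's peak inflates the monthly load, while one taken when the cycle crosses its daily mean recovers it. The toy sketch below assumes constant flow and a pure sine cycle; these simplifications are not from the paper.

```python
import numpy as np

hours = np.arange(0, 30 * 24) / 24.0           # 30 days at hourly resolution
flow = 1.0                                     # constant discharge (simplification)
conc = 10.0 + 3.0 * np.sin(2 * np.pi * hours)  # diurnal concentration cycle

true_load = conc.mean() * flow * 30            # load from the continuous record

# one grab per day at the diurnal peak vs. at the daily-mean crossing
peak_grabs = conc[6::24]                       # hour 6 of each day: sin = 1
mean_grabs = conc[0::24]                       # hour 0 of each day: sin = 0
load_peak = peak_grabs.mean() * flow * 30
load_mean = mean_grabs.mean() * flow * 30
```

Here the peak-timed grabs overstate the true load by 30%, while grabs taken when the cycle is at the daily mean reproduce it, mirroring the paper's recommendation on sampling time.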

Fogle, A. W.; Taraba, J. L.; Dinger, J. S.

2003-01-01

309

On the error in crop acreage estimation using satellite (LANDSAT) data

NASA Technical Reports Server (NTRS)

The problem of crop acreage estimation using satellite data is discussed. Bias and variance of a crop proportion estimate in an area segment obtained from the classification of its multispectral sensor data are derived as functions of the means, variances, and covariance of error rates. The linear discriminant analysis and the class proportion estimation for the two-class case are extended to include a third class of measurement units that are mixed on the ground. Special attention is given to the investigation of mislabeling in training samples and its effect on crop proportion estimation. It is shown that the bias and variance of the estimate of a specific crop acreage proportion increase as the disparity in mislabeling rates between two classes increases. Some interaction is shown to take place, causing the bias and the variance to decrease at first and then to increase, as the mixed unit class varies in size from 0 to 50 percent of the total area segment.

Chhikara, R. (principal investigator)

1983-01-01

310

Evaluation of Temporal and Spatial Distribution of Error in Modeled Evapotranspiration Estimates

NASA Astrophysics Data System (ADS)

Evapotranspiration (ET) constitutes a significant portion of Florida's water budget, and is second only to rainfall. Consequently, accurate ET estimates are very important for hydrologic modeling work. However, in comparison to rainfall, relatively few ground stations exist for the measurement of this important model input. Consequently, ET estimates produced by models are often subject to error. Satellite-based ET estimates provide an unprecedented opportunity to measure actual ET in sparsely monitored watersheds. They also provide a basis for comparing errors in modeled actual ET estimates that are induced due to the following reasons: 1) spatial interpolation and data-filling methods; 2) inaccurate and sparse meteorological data; and, 3) simplified parameterization schemes. In this study, satellite-based daily actual ET estimates from the Water Conservation Area 3 (WCA-3) watershed in South Florida, USA, are compared with those obtained from a calibrated finite-volume regional hydrologic model for the 1998 and 1999 calendar years. The satellite-based ET estimates used in this study compared well with measured ground-based actual ET data. The WCA-3 watershed is an integral part of Florida's remnant Everglades, and covers an area of approximately 2,400 square kilometers. It is compartmentalized by several levees and road embankments, and drained by several major canals. It also serves as a major habitat for many wildlife species, a source for urban water supply and an emergency storage area for flood water. The WCA-3 is located east of the Big Cypress National Preserve, and north of the Everglades National Park. Despite its significance, WCA-3 has relatively few ET monitoring stations and meteorological stations. Consequently, it is ideally suited for evaluating and quantifying errors in simulated actual ET estimates. The Regional Simulation Model (RSM) developed by the South Florida Water Management District is used for the modeling of these ET estimates. 
The RSM is an implicit, finite-volume, continuous, distributed, integrated surface/ground-water model, capable of simulating one-dimensional canal/stream flow and two-dimensional overland flow in arbitrarily shaped areas using a variable triangular mesh. The RSM has several options for modeling actual ET. An empirical parameterization scheme that is dependent on land-cover, water-depth and potential ET is used in this study for estimating actual ET. The parameter-sensitivities of this scheme are investigated and analyzed for several predominant land-cover classes, and dry- and wet-soil conditions. The RSM is calibrated and verified using historical time-series data from 1988 to 1995, and 1996 to 2000, respectively. All sensitivity and error analyses are conducted using estimates from the verification period.

Senarath, S. U.

2004-12-01

311

Error estimation for moment analysis in heavy-ion collision experiment

NASA Astrophysics Data System (ADS)

Higher moments of conserved quantities are predicted to be sensitive to the correlation length and connected to the thermodynamic susceptibility. Thus, higher moments of net-baryon, net-charge and net-strangeness have been extensively studied theoretically and experimentally to explore the phase structure and bulk properties of the QCD matter created in heavy-ion collision experiments. Because higher moment analysis is statistics-hungry, error estimation is crucial for extracting physics information from the limited experimental data. In this paper, we derive the limiting distributions and error formulas, based on the delta theorem in statistics, for the various order moments used in experimental data analysis. Monte Carlo simulation is also applied to test the error formulas.
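[Editor's note] For the sample variance, the delta theorem gives the classical result Var(m2) ≈ (mu4 − m2^2)/n. The sketch below checks this error formula against a bootstrap for a Gaussian sample; the variable names and the bootstrap comparison are illustrative, and the paper itself derives formulas for higher-order moment products as well.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 100_000)
n = x.size

d = x - x.mean()
m2 = np.mean(d ** 2)                       # second central moment (sample variance)
mu4 = np.mean(d ** 4)                      # fourth central moment
se_delta = np.sqrt((mu4 - m2 ** 2) / n)    # delta-theorem standard error of m2

# bootstrap check of the same standard error
boot = [np.var(rng.choice(x, n)) for _ in range(200)]
se_boot = float(np.std(boot, ddof=1))
```

For N(0,1), mu4 ≈ 3 and m2 ≈ 1, so the delta-theorem SE is close to sqrt(2/n), and the bootstrap spread agrees with it.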

Luo, Xiaofeng

2012-02-01

312

When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.

Rozo, Eduardo (U. Chicago, KICP); Wu, Hao-Yi (KIPAC, Menlo Park); Schmidt, Fabian (Caltech)

2011-11-04

313

The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world. PMID:25151444
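[Editor's note] The ambient temperature and pressure effects arise because a volumetric gas reading must be converted to standard conditions via the ideal gas law; at a high-altitude (low-pressure) site the same measured volume contains fewer moles of gas. The sketch below is a generic ideal-gas volume correction, not the paper's procedure; the function name and default standard conditions are assumptions, and dry gas is assumed (the study also considers water vapour content).

```python
def normalize_gas_volume(v_ml, t_amb_c, p_amb_kpa,
                         t_std_c=0.0, p_std_kpa=101.325):
    """Convert a measured gas volume to standard conditions via the ideal gas law."""
    t_amb_k = t_amb_c + 273.15
    t_std_k = t_std_c + 273.15
    return v_ml * (p_amb_kpa / p_std_kpa) * (t_std_k / t_amb_k)

# 100 mL read at 25 degC at sea level vs. the same reading at ~80 kPa altitude
v_sea = normalize_gas_volume(100.0, 25.0, 101.325)
v_altitude = normalize_gas_volume(100.0, 25.0, 80.0)
```

Skipping the pressure correction at 80 kPa would overstate the normalized methane volume by roughly 27% (the ratio 101.325/80), consistent with the paper's finding that high-altitude sites show the largest effect.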

Strömberg, Sten; Nistor, Mihaela; Liu, Jing

2014-11-01

314

There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first “n” trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (The Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93–0.98) and coefficients of variation (3.4–7.0%). The Wilcoxon’s signed rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9–10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors.
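[Editor's note] The direction of the best-of-n bias is easy to simulate: if each trial is a submaximal draw below the athlete's true maximum, the expected best-of-n score rises with n, so best-of-3 sits below best-of-8. The half-normal trial model below is an illustrative assumption, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(0)
true_max = 100.0                                    # hypothetical true MISS
# each attempt falls short of the true maximum by a half-normal amount
trials = true_max - np.abs(rng.normal(0.0, 5.0, size=(10_000, 8)))

best_of_3 = float(trials[:, :3].max(axis=1).mean())  # B3T analogue
best_of_8 = float(trials.max(axis=1).mean())         # B8T analogue
```

Both estimates stay below the true maximum, and best-of-8 exceeds best-of-3, matching the ordering the paper reports between B3T and B8T and its call for more trials.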

Pekünlü, Ekim; Özsu, İlbilge

2014-01-01

315

NASA Astrophysics Data System (ADS)

Two clinical intensity-modulated radiotherapy plans were selected. Eleven plan variations were created with systematic errors introduced: Multi-Leaf Collimator (MLC) positional errors with all leaf pairs shifted in the same or the opposite direction, and collimator rotation offsets. Plans were measured using an Electronic Portal Imaging Device (EPID) and an ionisation chamber array. The plans were evaluated using gamma analysis with different criteria. The gamma pass rates remained around 95% or higher for most cases with MLC positional errors of 1 mm and 2 mm with 3%/3 mm criteria. The ability of both devices to detect delivery errors was similar.

Bawazeer, Omemh; Gray, Alison; Arumugam, Sankar; Vial, Philip; Thwaites, David; Descallar, Joseph; Holloway, Lois

2014-03-01

316

NASA Technical Reports Server (NTRS)

Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

2000-01-01

317

The uncertainty in NAPL volume estimates obtained through partitioning tracers can be quantified as a function of random errors in volume and concentration measurements when moments are calculated from experimentally measured breakthrough curves using the trapezoidal rule for numerical integration. The methodology is based upon standard stochastic methods for random error propagation. Monte Carlo simulations using a synthetic data set derived from the one-dimensional solution of the advective-dispersive equation serve to verify the process. It is shown that the uncertainty in NAPL volume predictions nonlinearly increases as the retardation factor decreases. An important result of this observation is that there is a large degree of uncertainty in using partitioning tracers to conclude NAPL is absent from the swept zone. Under the conditions investigated, random errors in concentration measurements are shown to have a greater impact on NAPL volume uncertainty than random errors in volume measurements, and it is also shown that uncertainty in NAPL volume decreases as the resolution of the breakthrough curves increases. The impact of uncertainty in background retardation (i.e., sorption of partitioning tracers in the absence of NAPL) was also investigated, and it likewise indicated that the relative uncertainty in NAPL volume estimates increases as the retardation factor decreases. PMID:16201645
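[Editor's note] The quantities involved are the normalized first temporal moments of the conservative and partitioning tracer breakthrough curves, from which the retardation factor and NAPL saturation follow via the standard relation R = 1 + K·S/(1 − S); Monte Carlo perturbation of the concentrations then propagates the random error, as in the paper's verification. The Gaussian breakthrough curves, noise level, and K value below are illustrative assumptions.

```python
import numpy as np

def first_moment(t, c):
    """Normalized first temporal moment via the trapezoidal rule."""
    w = np.diff(t)
    num = np.sum(w * (t[1:] * c[1:] + t[:-1] * c[:-1]) / 2.0)
    den = np.sum(w * (c[1:] + c[:-1]) / 2.0)
    return num / den

def napl_saturation(retardation, k_part):
    """Invert R = 1 + K * S / (1 - S) for the NAPL saturation S."""
    return (retardation - 1.0) / (retardation - 1.0 + k_part)

t = np.linspace(0.0, 40.0, 400)
conservative = np.exp(-0.5 * ((t - 10.0) / 2.0) ** 2)   # non-partitioning tracer
partitioning = np.exp(-0.5 * ((t - 15.0) / 2.0) ** 2)   # retarded tracer

R = first_moment(t, partitioning) / first_moment(t, conservative)

# Monte Carlo propagation of random concentration measurement error
rng = np.random.default_rng(0)
sats = []
for _ in range(500):
    cp = partitioning + rng.normal(0.0, 0.02, t.size)
    cc = conservative + rng.normal(0.0, 0.02, t.size)
    sats.append(napl_saturation(first_moment(t, cp) / first_moment(t, cc), k_part=2.0))
spread = float(np.std(sats, ddof=1))
```

With R near 1.5 the inferred saturation is about 0.2, and the Monte Carlo spread quantifies how concentration noise maps into saturation uncertainty; shrinking R toward 1 inflates that spread, as the paper observes.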

Brooks, Michael C; Wise, William R

2005-09-15

318

NASA Astrophysics Data System (ADS)

This paper appraises two approaches for the treatment of heteroscedasticity and autocorrelation in residual errors of hydrological models. Both approaches use weighted least squares (WLS), with heteroscedasticity modeled as a linear function of predicted flows and autocorrelation represented using an AR(1) process. In the first approach, heteroscedasticity and autocorrelation parameters are inferred jointly with hydrological model parameters. The second approach is a two-stage "postprocessor" scheme, where Stage 1 infers the hydrological parameters ignoring autocorrelation and Stage 2 conditionally infers the heteroscedasticity and autocorrelation parameters. These approaches are compared to a WLS scheme that ignores autocorrelation. Empirical analysis is carried out using daily data from 12 US catchments from the MOPEX set using two conceptual rainfall-runoff models, GR4J and HBV. Under synthetic conditions, the postprocessor and joint approaches provide similar predictive performance, though the postprocessor approach tends to underestimate parameter uncertainty. However, the MOPEX results indicate that the joint approach can be nonrobust. In particular, when applied to GR4J, it often produces poor predictions due to strong multiway interactions between a hydrological water balance parameter and the error model parameters. The postprocessor approach is more robust precisely because it ignores these interactions. Practical benefits of accounting for error autocorrelation are demonstrated by analyzing streamflow predictions aggregated to a monthly scale (where ignoring daily-scale error autocorrelation leads to significantly underestimated predictive uncertainty), and by analyzing one-day-ahead predictions (where accounting for the error autocorrelation produces clearly higher precision and better tracking of observed data). Including autocorrelation into the residual error model also significantly affects calibrated parameter values and uncertainty estimates. 
The paper concludes with a summary of outstanding challenges in residual error modeling, particularly in ephemeral catchments.
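[Editor's note] The error model described can be written down directly: residual standard deviation linear in predicted flow, sigma_t = a + b*q_t, with standardized residuals following a stationary AR(1) process with coefficient phi. The sketch below is a generic Gaussian log-likelihood under exactly those assumptions, not the papers' full joint or postprocessor inference scheme; the simulated data and parameter values are illustrative.

```python
import numpy as np

def wls_ar1_loglik(resid, q_pred, a, b, phi):
    """Exact Gaussian log-likelihood: sigma_t = a + b*q_pred, eta_t stationary AR(1)."""
    sigma = a + b * q_pred
    eta = resid / sigma                        # standardized residuals, unit variance
    n = eta.size
    innov = eta[1:] - phi * eta[:-1]           # AR(1) innovations, variance 1 - phi^2
    ll = -0.5 * n * np.log(2.0 * np.pi) - np.sum(np.log(sigma))
    ll += -0.5 * eta[0] ** 2                   # stationary initial condition
    ll += -0.5 * (n - 1) * np.log(1.0 - phi ** 2)
    ll += -0.5 * np.sum(innov ** 2) / (1.0 - phi ** 2)
    return float(ll)

# simulate residuals from the model itself (a=0.1, b=0.2, phi=0.6)
rng = np.random.default_rng(0)
q = rng.uniform(1.0, 10.0, 2000)
eta = np.empty(q.size)
eta[0] = rng.normal()
for i in range(1, q.size):
    eta[i] = 0.6 * eta[i - 1] + rng.normal(0.0, np.sqrt(1.0 - 0.6 ** 2))
resid = (0.1 + 0.2 * q) * eta
```

Evaluating the likelihood at the true phi versus phi = 0 shows the gain from modeling the autocorrelation, the effect the papers quantify for predictive uncertainty.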

Evin, Guillaume; Thyer, Mark; Kavetski, Dmitri; McInerney, David; Kuczera, George

2014-03-01

319

When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. 
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
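[Editor's note] The 100%-weight pathology follows directly from how information-criterion weights are computed: weights decay exponentially in the criterion difference, so even a modest difference produces near-total dominance of the best model. The sketch below shows generic Akaike-type weights; the function name and the example criterion values are illustrative, and the paper's criteria also include BIC and KIC.

```python
import numpy as np

def ic_weights(ic_values):
    """Model-averaging weights w_i proportional to exp(-Delta_i / 2)."""
    delta = np.asarray(ic_values, dtype=float)
    delta = delta - delta.min()        # differences from the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

w = ic_weights([100.0, 120.0, 130.0])  # criterion differences of 0, 20, 30
```

A difference of only 20 already pushes the best model's weight above 99.9%, which is the behaviour the iterative two-stage method with the total-error covariance Cek is designed to correct.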

Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

2013-01-01

320

NASA Astrophysics Data System (ADS)

In this paper we further develop the Method of Nearby Problems (MNP) for generating exact solutions to realistic partial differential equations by extending it to two dimensions. We provide an extensive discussion of the 2D spline fitting approach which provides Ck continuity (continuity of the solution value and its first k derivatives) along spline boundaries and is readily extendable to higher dimensions. A detailed one-dimensional example is given to outline the general concepts, then the two-dimensional spline fitting approach is applied to two problems: heat conduction with a distributed source term and the viscous, incompressible flow in a lid-driven cavity with both a constant lid velocity and a regularized lid velocity (which removes the strong corner singularities). The spline fitting approach results in very small spline fitting errors for the heat conduction problem and the regularized driven cavity, whereas the fitting errors in the standard lid-driven cavity case are somewhat larger due to the singular behaviour of the pressure near the driven lid. The MNP approach is used successfully as a discretization error estimator for the driven cavity cases, outperforming Richardson extrapolation which requires two grid levels. However, MNP has difficulties with the simpler heat conduction case due to the discretization errors having the same magnitude as the spline fitting errors.

Roy, Christopher J.; Sinclair, Andrew J.

2009-03-01

321

A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation

NASA Technical Reports Server (NTRS)

A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from explicitly enforcing a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.

Tessler, A.; Riggs, H. R.; Dambach, M.

1998-01-01

322

NASA Astrophysics Data System (ADS)

The centroid-moment-tensor (CMT) algorithm provides a straightforward, rapid method for the determination of seismic source parameters from waveform data. As such, it has found widespread application, and catalogues of CMT solutions - particularly the catalogue maintained by the Global CMT Project - are routinely used by geoscientists. However, there have been few attempts to quantify the uncertainties associated with any given CMT determination: whilst catalogues typically quote a 'standard error' for each source parameter, these are generally accepted to significantly underestimate the true scale of uncertainty, as all systematic effects are ignored. This prevents users of source parameters from properly assessing possible impacts of this uncertainty upon their own analysis. The CMT algorithm determines the best-fitting source parameters within a particular modelling framework, but any deficiencies in this framework may lead to systematic errors. As a result, the minimum-misfit source may not be equivalent to the 'true' source. We suggest a pragmatic solution to uncertainty assessment, based on accepting that any 'low-misfit' source may be a plausible model for a given event. The definition of 'low-misfit' should be based upon an assessment of the scale of potential systematic effects. We set out how this can be used to estimate the range of values that each parameter might take, by considering the curvature of the misfit function as minimised by the CMT algorithm. This approach is computationally efficient, with cost similar to that of performing an additional iteration during CMT inversion for each source parameter to be considered. The source inversion process is sensitive to the various choices that must be made regarding dataset, earth model and inversion strategy, and for best results, uncertainty assessment should be performed using the same choices. Unfortunately, this information is rarely available when sources are obtained from catalogues. 
As already indicated by Valentine and Woodhouse (2010), researchers conducting comparisons between data and synthetic waveforms must ensure that their approach to forward-modelling is consistent with the source parameters used; in practice, this suggests that they should consider performing their own source inversions. However, it is possible to obtain rough estimates of uncertainty using only forward-modelling.
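[Editor's note] The curvature-based range estimate has a simple quadratic form: if the misfit is locally chi2(m) ≈ chi2(m0) + (1/2)*chi2''*(m − m0)^2, then the range of parameter values whose misfit stays within a tolerance Delta of the minimum is ±sqrt(2*Delta/chi2''). The one-parameter sketch below, with a finite-difference curvature, is an illustrative simplification of the multi-parameter CMT case; the function name and tolerance are assumptions.

```python
def plausible_range(misfit, m0, h, tol):
    """Half-width of the interval where misfit(m) - misfit(m0) <= tol,
    assuming a locally quadratic misfit; curvature by central differences."""
    curvature = (misfit(m0 + h) - 2.0 * misfit(m0) + misfit(m0 - h)) / h ** 2
    return (2.0 * tol / curvature) ** 0.5

# exactly quadratic misfit minimized at m0 = 3: curvature = 2
halfwidth = plausible_range(lambda m: (m - 3.0) ** 2, m0=3.0, h=0.1, tol=0.5)
```

Each parameter requires only a couple of extra misfit evaluations around the minimum, which is why the cost is comparable to one additional CMT iteration per parameter.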

Valentine, Andrew P.; Trampert, Jeannot

2012-11-01

323

Recent advances in error estimation and adaptive improvement of finite element calculations

NASA Technical Reports Server (NTRS)

Recent advances in adaptive finite elements are summarized. General concepts behind a posteriori error estimation, h-method data management, and algorithms for fluid-mechanics applications are then examined, and some results of numerical experiments with new adaptive codes are given. Numerical examples include supersonic flow over a 20-deg ramp, supersonic flow in expansion corners, the reflecting-shock problem, and the rotating-cone problem. Finally, future directions of research in the field are outlined.

Oden, J. T.; Strouboulis, T.; Devloo, PH.; Howe, M.

1986-01-01

324

A Practical Approach to the Error Estimation of Quasi-Monte Carlo Integrations

methods. We use a (t; m; s)-net as a low-discrepancy point set. Let us recall the definition of a (t; m; s)-net. Definition 1: Let t and m be nonnegative integers and t ≤ m. A (t; m; s)-net in base b is a set of b^m points of the point set. In the following, statistical error estimation methods are explained in Sect. 2. In Sect. 3
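The statistical error estimation idea for quasi-Monte Carlo can be sketched with randomization: repeat the deterministic point set under independent random shifts and use the spread of the shifted estimates as an error bar. The sketch below uses a rank-1 lattice with a generic generating vector as a simple stand-in for the (t; m; s)-net of the abstract; the vector `z` and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def rqmc_estimate(f, s, m, n_shifts=20, seed=0):
    """Randomized quasi-Monte Carlo integration over [0,1)^s using a rank-1
    lattice with 2^m points and Cranley-Patterson random shifts. The sample
    standard error across shifts is the statistical error estimate."""
    rng = np.random.default_rng(seed)
    n = 2 ** m
    z = np.array([1, 182667, 469891, 498753, 110745][:s])  # generic choice
    base = (np.outer(np.arange(n), z) / n) % 1.0
    estimates = []
    for _ in range(n_shifts):
        shifted = (base + rng.random(s)) % 1.0             # random shift
        estimates.append(np.mean(f(shifted)))
    est = float(np.mean(estimates))
    stderr = float(np.std(estimates, ddof=1) / np.sqrt(n_shifts))
    return est, stderr

# integral of x0*x1 over the unit square is 1/4
est, err = rqmc_estimate(lambda x: x[:, 0] * x[:, 1], s=2, m=10)
```

Each shifted estimate is unbiased, so the replicates behave like an i.i.d. sample and admit ordinary confidence intervals.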

Yamamoto, Hirosuke

325

The purpose of this paper is two-fold. First, we describe an estimation procedure that should be useful for spatial models which contain interactions between the dependent variables and autocorrelated error terms. Second, we apply that procedure to a spatial model relating to county police expenditures. Our estimation procedure does not require the specification of the error distribution, and its computational

Harry H. Kelejian; Dennis P. Robinson

1993-01-01

326

SUMMARY This paper describes the use of an a posteriori error estimator to control anisotropic mesh adaptation for computing inviscid compressible flows. The a posteriori error estimator and the coupling strategy with an anisotropic remesher are first introduced. The mesh adaptation is controlled by a single-parameter tolerance (TOL) in regions where the solution is regular, whereas a condition on the

Y. Bourgault; M. Picasso; F. Alauzet; A. Loseille

2009-01-01

327

Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays cause changes in both the delay bias and random errors, with possibly strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using that size of window, at which the absolute values of these errors are equal and opposite, thus cancelling each other, allowing minimally biased measurement of neural coding. PMID:24692025
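The spectral information-rate computation underlying these estimates can be sketched from a coherence spectrum via the standard formula R = -∫ log2(1 - γ²(f)) df. This is a minimal sketch of the integral only; the window-size optimization proposed in the abstract, and the flat test spectrum below, are not from the paper.

```python
import numpy as np

def shannon_rate(gamma2, freqs):
    """Shannon information rate (bits/s) from a coherence spectrum gamma2(f),
    R = -integral of log2(1 - gamma^2) df, via the trapezoidal rule.
    Window choice affects gamma2 through both bias and random error; this
    function just performs the integral."""
    g = np.clip(np.asarray(gamma2, float), 0.0, 1.0 - 1e-12)
    y = -np.log2(1.0 - g)                       # bits per unit bandwidth
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(freqs)))

freqs = np.linspace(0.0, 100.0, 101)            # Hz
rate = shannon_rate(np.full(101, 0.5), freqs)   # flat coherence 0.5 -> 1 bit/Hz
```

Because the bias error depresses γ² while the random error inflates it, evaluating this integral across a range of window sizes exposes the crossover point the authors exploit.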

Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

2014-06-01

328

NASA Astrophysics Data System (ADS)

A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.

2012-06-01

329

Estimate of precession and polar motion errors from planetary encounter station location solutions

NASA Technical Reports Server (NTRS)

Jet Propulsion Laboratory Deep Space Station (DSS) location solutions based on two JPL planetary ephemerides, DE 84 and DE 96, at eight planetary encounters were used to obtain weighted least squares estimates of precession and polar motion errors. The solution for precession error in right ascension yields a value of 0.3 × 10^-5 ± 0.8 × 10^-6 deg/year. This maps to a right ascension error of 1.3 × 10^-5 ± 0.4 × 10^-5 deg at the first Voyager 1979 Jupiter encounter if the current JPL DSS location set is used. Solutions for precession and polar motion using station locations based on DE 84 agree well with the solution using station locations referenced to DE 96. The precession solution removes the apparent drift in station longitude and spin axis distance estimates, while the encounter polar motion solutions consistently decrease the scatter in station spin axis distance estimates.

Pease, G. E.

1978-01-01

330

Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

NASA Technical Reports Server (NTRS)

Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8% with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
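Extrapolating from a base grid to an "infinite-size" grid, as described above, is commonly done with Richardson extrapolation. The sketch below shows that standard recipe under a manufactured example; the exact procedure, refinement ratio and convergence order used by the authors may differ.

```python
def richardson_extrapolate(f_coarse, f_fine, r, p):
    """Richardson extrapolation of a grid-dependent quantity to the
    zero-spacing ('infinite-size grid') limit. r is the grid refinement
    ratio and p the observed/assumed order of convergence. Also returns
    a relative error estimate for the fine-grid value."""
    f_inf = f_fine + (f_fine - f_coarse) / (r ** p - 1.0)
    rel_err_fine = abs((f_inf - f_fine) / f_inf)
    return f_inf, rel_err_fine

# manufactured example: a quantity behaving as f(h) = 1 + h^2,
# sampled at h = 0.2 (coarse) and h = 0.1 (fine)
f_inf, rel_err = richardson_extrapolate(f_coarse=1.04, f_fine=1.01, r=2, p=2)
```

For the manufactured second-order case the extrapolated value recovers the exact limit of 1, with a 1% error estimate on the fine grid.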

Abdol-Hamid, Khaled S.; Ghaffari, Farhad

2012-01-01

331

NASA Astrophysics Data System (ADS)

Mass redistribution within the Earth affects the position of its center of mass whose translation, relative to the International Terrestrial Reference Frame (ITRF), ranges from a few millimetres to centimetres. Satellite space geodetic techniques are able to detect such geocenter motion, Satellite Laser Ranging (SLR) being the most accurate in this respect, since it has produced a long history of valuable observations which are particularly sensitive to the origin of the reference frame. The most recent and updated ASI/CGS analyses of Lageos-1 and Lageos-2 SLR data span two decades and provide time series of fortnightly geocenter offsets with respect to the ITRF. Two different methods have been applied to retrieve the time series: a direct estimation of the degree one geopotential harmonics and a computation of Cartesian coordinate offsets from ITRF. Models and results, together with accuracies and spectral content, will be shown and discussed.

Luceri, V.; Sciarretta, C.; Bianco, G.

2004-12-01

332

Application of least squares variance component estimation to errors-in-variables models

NASA Astrophysics Data System (ADS)

In an earlier work, a simple and flexible formulation for the weighted total least squares (WTLS) problem was presented. The formulation allows one to directly apply the existing body of knowledge of the least squares theory to the errors-in-variables (EIV) models of which the complete description of the covariance matrices of the observation vector and of the design matrix can be employed. This contribution presents one of the well-known theories—least squares variance component estimation (LS-VCE)—to the total least squares problem. LS-VCE is adopted to cope with the estimation of different variance components in an EIV model having a general covariance matrix obtained from the (fully populated) covariance matrices of the functionally independent variables and a proper application of the error propagation law. Two empirical examples using real and simulated data are presented to illustrate the theory. The first example is a linear regression model and the second example is a 2-D affine transformation. For each application, two variance components—one for the observation vector and one for the coefficient matrix—are simultaneously estimated. Because the formulation is based on the standard least squares theory, the covariance matrix of the estimates in general and the precision of the estimates in particular can also be presented.
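The errors-in-variables idea behind WTLS can be illustrated with the classical, unweighted total least squares solution via SVD. This is a minimal sketch of the underlying EIV model only; the paper's contribution (weighting with fully populated covariance matrices and LS-VCE on top) is not reproduced here.

```python
import numpy as np

def tls(A, y):
    """Classical (unweighted) total least squares: minimizes perturbations
    to both the design matrix A and the observation vector y. The solution
    comes from the right singular vector of [A | y] associated with the
    smallest singular value."""
    n = A.shape[1]
    Z = np.column_stack([A, y])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]                      # smallest-singular-value direction
    return -v[:n] / v[n]

x = np.linspace(0.0, 1.0, 20)
A = np.column_stack([x, np.ones_like(x)])
beta = tls(A, 2.0 * x + 1.0)        # noise-free: recovers slope 2, intercept 1
```

In the noise-free case the stacked matrix is rank-deficient and TLS recovers the exact parameters; with noisy A and y the SVD solution balances errors in both.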

Amiri-Simkooei, A. R.

2013-11-01

333

NASA Astrophysics Data System (ADS)

Timely and reliable streamflow forecasting with acceptable accuracy is fundamental for flood response and risk management. However, streamflow forecasting models are subject to uncertainties from inputs, state variables, model parameters and structures. This has led to an ongoing development of methods for uncertainty quantification (e.g. generalized likelihood and Bayesian approaches) and methods for uncertainty reduction (e.g. sequential and variational data assimilation approaches). These two classes of methods are distinct yet related, e.g., the validity of data assimilation is essentially determined by the reliability of error specification. Error specification has been one of the most challenging areas in hydrologic data assimilation and there is a major opportunity for implementing uncertainty quantification approaches to inform both model and observation uncertainties. In this study, ensemble data assimilation methods are combined with the maximum a posteriori (MAP) error estimation approach to construct an integrated error estimation and data assimilation scheme for operational streamflow forecasting. We contrast the performance of two different data assimilation schemes: a lag-aware ensemble Kalman smoother (EnKS) and the conventional ensemble Kalman filter (EnKF). The schemes are implemented for a catchment upstream of Myrtleford in the Ovens river basin, Australia to assimilate real-time discharge observations into a conceptual catchment model, modèle du Génie Rural à 4 paramètres Horaire (GR4H). The performance of the integrated system is evaluated in both a synthetic forecasting scenario with observed precipitation and an operational forecasting scenario with Numerical Weather Prediction (NWP) forecast rainfall. The results show that the error parameters estimated by the MAP approach generate a reliable spread of streamflow prediction.
Continuous state updating reduces uncertainty in initial states and thereby improves the forecasting accuracy significantly. The EnKS streamflow forecasts are more accurate and reliable than the EnKF for the synthetic scenario. They also alleviate instability in the EnKF due to overcorrection of current state variables. For the operational forecasting case, the forecasts benefit less from state updating and the difference between the EnKS and EnKF becomes less significant. This is because the uncertainty in the NWP rainfall forecasts becomes dominant with increasing lead time. Forecast discharge in 2010. Solid curves are observations and gray areas indicate 95% of probabilistic forecasts. (a) open-loop ensemble spread based on the error parameters estimated by the MAP; (b) 60-h lead time forecasts based on the EnKS.
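The state-updating step at the core of both schemes can be sketched as a stochastic EnKF update with perturbed observations. This is a generic textbook sketch, not the GR4H/lag-aware EnKS implementation of the study; the 1-state example values are assumptions.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_var, H, rng):
    """Stochastic ensemble Kalman filter update (perturbed observations).
    ensemble: (n_ens, n_state); obs: (n_obs,); H: (n_obs, n_state).
    A lag-aware EnKS extends this by also updating past states."""
    n_ens = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)            # state anomalies
    Y = ensemble @ H.T                              # predicted observations
    Yp = Y - Y.mean(axis=0)
    Pyy = Yp.T @ Yp / (n_ens - 1) + np.atleast_2d(obs_var)
    Pxy = X.T @ Yp / (n_ens - 1)
    K = Pxy @ np.linalg.inv(Pyy)                    # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var),
                                 size=(n_ens, np.atleast_1d(obs).size))
    return ensemble + (perturbed - Y) @ K.T

rng = np.random.default_rng(42)
prior = rng.normal(0.0, 1.0, size=(500, 1))         # prior: mean ~0, var ~1
posterior = enkf_update(prior, np.array([2.0]), 0.5, np.array([[1.0]]), rng)
# posterior mean is pulled toward the observation; ensemble spread shrinks
```

With prior variance ~1 and observation variance 0.5, the posterior mean lands roughly two thirds of the way toward the observation, as the Kalman gain dictates.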

Li, Y.; Ryu, D.; Western, A. W.; Wang, Q.; Robertson, D.; Crow, W. T.

2013-12-01

334

Root mean square error of prediction (RMSEP) is widely used as a criterion for judging the performance of a multivariate calibration model; often it is even the sole criterion. Two methods are discussed for estimating the uncertainty in estimates of RMSEP. One method follows from the approximate sampling distribution of mean square error of prediction (MSEP) while the other one
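The first route mentioned above, via the approximate sampling distribution of MSEP, can be sketched with a delta-method standard error. The i.i.d.-normal-error assumption and the resulting se(RMSEP) ≈ RMSEP/√(2n) are a generic approximation, not necessarily the exact formulas of the paper.

```python
import numpy as np

def rmsep_with_se(y_true, y_pred):
    """RMSEP with an approximate standard error. Assuming roughly i.i.d.
    normal prediction errors, var(MSEP) ~ 2*sigma^4/n, and the delta
    method then gives se(RMSEP) ~ RMSEP / sqrt(2n)."""
    e = np.asarray(y_true, float) - np.asarray(y_pred, float)
    n = e.size
    rmsep = np.sqrt(np.mean(e ** 2))
    return rmsep, rmsep / np.sqrt(2.0 * n)

rmsep, se = rmsep_with_se([3.0, 4.0], [0.0, 0.0])   # RMSEP = sqrt(12.5)
```

The large relative standard error for tiny n illustrates why RMSEP from small test sets should not be over-interpreted.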

1999-01-01

335

NASA Astrophysics Data System (ADS)

Even though the monitoring of solar radiation experienced a vast progress in the recent years both in terms of expanding the measurement networks and increasing the data quality, the number of stations is still too small to achieve accurate global coverage. Alternatively, various models for estimating solar radiation are exploited in many applications. Choosing a model is often limited by the availability of the meteorological parameters required for its running. In many cases the current values of the parameters are replaced with daily, monthly or even yearly average values. This paper deals with the evaluation of the error made in estimating global solar irradiance by using an average value of the Angstrom turbidity coefficient instead of its current value. A simple equation relating the relative variation of the global solar irradiance and the relative variation of the Angstrom turbidity coefficient is established. The theoretical result is complemented by a quantitative assessment of the errors made when hourly, daily, monthly or yearly average values of the Angstrom turbidity coefficient are used at the entry of a parametric solar irradiance model. The study was conducted with data recorded in 2012 at two AERONET stations in Romania. It is shown that the relative errors in estimating global solar irradiance (GHI) due to inadequate consideration of Angstrom turbidity coefficient may be very high, even exceeding 20%. However, when an hourly or a daily average value is used instead of the current value of the Angstrom turbidity coefficient, the relative errors are acceptably small, in general less than 5%. All results prove that in order to correctly reproduce GHI for various particular aerosol loadings of the atmosphere, the parametric models should rely on hourly or daily Angstrom turbidity coefficient values rather than on the more usual monthly or yearly average data, if currently measured data is not available.

Calinoiu, Delia-Gabriela; Stefu, Nicoleta; Paulescu, Marius; Trif-Tordai, Gavril?; Mares, Oana; Paulescu, Eugenia; Boata, Remus; Pop, Nicolina; Pacurar, Angel

2014-12-01

336

Two large-scale environmental surveys, the National Stream Survey (NSS) and the Environmental Protection Agency's proposed Environmental Monitoring and Assessment Program (EMAP), motivated investigation of estimators of the variance of the Horvitz-Thompson estimator under variabl...

337

NASA Astrophysics Data System (ADS)

Precipitation is a major component of the water cycle that returns atmospheric water to the ground. Without precipitation there would be no water cycle, all the water would run down the rivers and into the seas, then the rivers would dry up with no fresh water from precipitation. Although precipitation measurement seems an easy and simple procedure, it is affected by several systematic errors which lead to underestimation of the actual precipitation. Hence, precipitation measurements should be corrected before their use. Different correction approaches were already suggested in order to correct precipitation measurements. Nevertheless, focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this presentation we propose a Bayesian model average (BMA) approach for correcting rain gauge measurement errors. In the present study we used meteorological data recorded every 10 minutes at the Condoriri station in the Bolivian Andes. Comparing rain gauge measurements with totalisators rain measurements it was possible to estimate the rain underestimation. First, different deterministic models were optimized for the correction of precipitation considering wind effect and precipitation intensities. Then, probabilistic BMA correction was performed. The corrected precipitation was then separated into rainfall and snowfall considering typical Andean temperature thresholds of -1°C and 3°C. Hence, precipitation was separated into rainfall, snowfall and mixed precipitation. Then, relating the total snowfall with the glacier ice density, it was possible to estimate the glacier accumulation. Results show a yearly glacier accumulation of 1200 mm/year. Besides, results confirm that in tropical glaciers winter is not accumulation period, but a low ablation one. Results show that neglecting such correction may induce an underestimation higher than 35 % of total precipitation. 
Besides, the uncertainty range may induce differences up to 200 mm/year. This research is developed within the GRANDE project (Glacier Retreat impact Assessment and National policy Development), financed by SATREPS from JST-JICA.

Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier

2013-04-01

338

Optimum data weighting and error calibration for estimation of gravitational parameters

NASA Technical Reports Server (NTRS)

A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or other parameters than the gravity model.

Lerch, F. J.

1989-01-01

339

Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains

NASA Technical Reports Server (NTRS)

Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting a RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
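The covariance-based triple collocation estimator behind fRMSE_TC can be sketched as follows. This is the classical form (three products with mutually independent, zero-mean errors, no rescaling); the simulated noise levels are illustrative assumptions, not values from the study.

```python
import numpy as np

def tc_frmse(x, y, z):
    """Classical triple collocation: with independent errors, the error
    variance of x is <(x - y)(x - z)> after mean removal (and cyclically
    for y and z). Dividing each RMSE by that series' standard deviation
    gives the fractional fRMSE used in the abstract."""
    x, y, z = (np.asarray(a, float) - np.mean(a) for a in (x, y, z))
    ev = (np.mean((x - y) * (x - z)),
          np.mean((y - x) * (y - z)),
          np.mean((z - x) * (z - y)))
    return tuple(np.sqrt(max(v, 0.0)) / np.std(s) for v, s in zip(ev, (x, y, z)))

rng = np.random.default_rng(7)
truth = rng.normal(0.0, 1.0, 20000)
x = truth + rng.normal(0.0, 0.3, 20000)
y = truth + rng.normal(0.0, 0.5, 20000)
z = truth + rng.normal(0.0, 0.4, 20000)
fx, fy, fz = tc_frmse(x, y, z)   # fractional RMSEs, roughly 0.29, 0.45, 0.37
```

Note that no product is treated as the reference; each error variance is estimated symmetrically from cross-covariances, which is what sidesteps the reference-climatology problem discussed in the abstract.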

Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang

2013-01-01

340

Error Analysis for Estimation of Greenland Ice Sheet Accumulation Rates from InSAR Data

NASA Astrophysics Data System (ADS)

Forming a mass budget for the Greenland Ice Sheet requires accurate measurements of both accumulation and ablation. Currently, most mass budgets use accumulation rate data from sparse in-situ ice core data, sometimes in conjunction with results from relatively low-resolution climate models. Yet there have also been attempts to estimate accumulation rates from remote sensing data, including SAR, InSAR, and satellite radar scatterometry data. However, the sensitivities, error sources, and confidence intervals in these remote sensing methods have not been well-characterized. We develop an error analysis for estimates of Greenland Ice Sheet accumulation rates in the dry-snow zone using SAR brightness and InSAR coherence data. The estimates are generated by inverting a forward model based on firn structure and electromagnetic scattering. We can then examine the associated error bars and sensitivity. We also model how these change when spatial smoothness assumptions are introduced and a regularized inversion is used. In this study, we use SAR and InSAR data from the L-band ALOS-PALSAR instrument (23-centimeter carrier wavelength) as a test-bed and in-situ measurements published by Bales et al. for comparison [1]. Finally, we use simulations to examine the ways in which estimation accuracy varies between X-band, C-band and L-band experiments. [1] R. C. Bales et al., 'Accumulation over the Greenland ice sheet from historical and recent records,' Journal of Geophysical Research, vol. 106, pp. 33813-33825, 2001.

Chen, A. C.; Zebker, H. A.

2013-12-01

341

DTI Quality Control Assessment via Error Estimation From Monte Carlo Simulations

Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing microscopic tissue structure in the white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC. PMID:23833547

Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.

2013-01-01

342

A modelling error approach for the estimation of optical absorption in the presence of anisotropies.

Optical tomography is an emerging method for non-invasive imaging of human tissues using near-infrared light. Generally, the tissue is assumed isotropic, but this may not always be true. In this paper, we present a method for the estimation of optical absorption coefficient allowing the background to be anisotropic. To solve the forward problem, we model the light propagation in tissue using an anisotropic diffusion equation. The inverse problem consists of the estimation of the absorption coefficient based on boundary measurements. Generally, the background anisotropy cannot be assumed to be known. We treat the uncertainties in the background anisotropy parameter values as modelling error, and include this in our model and reconstruction. We present numerical examples based on simulated data. For reference, examples using an isotropic inversion scheme are also included. The estimates are qualitatively different for the two methods. PMID:15566175

Heino, Jenni; Somersalo, Erkki

2004-10-21

343

NASA Astrophysics Data System (ADS)

Predictions of the urban hydrologic response are of paramount importance to foresee floodings and sewer overflows and hence support sensible decision making. Due to several error sources models results are uncertain. Modeling statistically these uncertainties we can estimate how reliable predictions are. Most hydological studies in urban areas (e.g. Freni and Mannina, 2010) assume that residuals E are independent and identically distributed. These hypotheses are usually strongly violated due to neglected deficits in model structure and error in input data that lead to strong autocorrelation. We propose a new methodology to i) estimating the total uncertainty and ii) quantifying different type of errors affecting model results, videlicet, parametric, structural, input data, and calibration data uncertainty. Thereby we can make more realistic assumptions about the residuals. We consider the residual process to be a sum of an autocorrelated error term B and a memory-less uncertainty term E. As proposed by Reichert and Schuwirth (2012), B, called model inadequacy or bias, is described by a normally-distributed autoregressive process and accounts for structural deficiencies and errors in input measurement. The observation error E, is, instead, normally and independently distributed. Since urban watersheds are extremely responsive to precipitation events we modified this framework, making the bias input-dependent and transforming model results and data for residual variance stabilization. To show the improvement in uncertainty quantification we analyzed the response of a monitored stormwater system. We modeled the outlet discharge for several rain events by using a conceptual model. For comparison we computed the uncertainties with the traditional independent error model (e.g. Freni and Mannina, 2010). The quality of the prediction uncertainty bands were analyzed through residual diagnostics for the calibration phase and prediction coverage in the validation phase. 
The results of this study clearly show that the input-dependent autocorrelated error model outperforms the independent residual representation. This is evident when comparing the fulfillment of the distribution assumptions of E. The bias error model produces realizations of E that are much smaller (and so more realistic), less autocorrelated and less heteroskedastic than with the current model. Furthermore, the proportion of validation data falling into the 95% credibility intervals is circa 15% higher accounting for bias than under the independence assumption. Our framework describing model bias appeared very promising in improving the fulfillment of the statistical assumptions and in decomposing predictive uncertainty. We believe that the proposed error model will be suitable for many applications because the computational expenses are only negligibly increased compared to the traditional approach. In future work we will show how to use this approach with complex hydrodynamic models to further separate the effects of structural deficits and input uncertainty. References P. Reichert and N. Schuwirth. 2012. Linking statistical bias description to multiobjective model calibration. Water Resources Research, 48, W09543, doi:10.1029/2011WR011391. G. Freni and G. Mannina. 2010. Bayesian approach for uncertainty quantification in water quality modelling: the influence of prior distribution. Journal of Hydrology, 392, 31-39, doi:10.1016/j.jhydrol.2010.07.043
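The two-component residual model described above (autocorrelated bias B plus white observation error E) can be sketched with a small simulation. The AR(1) form and all parameter values below are illustrative assumptions; the paper's model additionally makes the bias input-dependent and applies a variance-stabilizing transformation.

```python
import numpy as np

def simulate_residuals(n, phi, sigma_b, sigma_e, rng):
    """Simulate residuals as B + E: an AR(1) bias B (structural/input
    error, autocorrelated) plus independent Gaussian observation noise E."""
    b = np.zeros(n)
    innov = rng.normal(0.0, sigma_b, n)
    for t in range(1, n):
        b[t] = phi * b[t - 1] + innov[t]    # autocorrelated bias B
    e = rng.normal(0.0, sigma_e, n)         # white observation error E
    return b + e, b, e

rng = np.random.default_rng(3)
resid, b, e = simulate_residuals(20000, phi=0.9, sigma_b=1.0, sigma_e=0.5, rng=rng)
rho1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]   # strong lag-1 autocorrelation
```

Fitting an independent-error model to such residuals violates its own assumptions, which is exactly the diagnostic failure the study reports for the traditional approach.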

Del Giudice, Dario; Reichert, Peter; Honti, Mark; Scheidegger, Andreas; Albert, Carlo; Rieckermann, Jörg

2013-04-01

344

Analysis of open-loop conical scan pointing error and variance estimators

NASA Technical Reports Server (NTRS)

General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.
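The weighted least-squares structure of the conscan estimator can be sketched generically: parameters from (AᵀWA)⁻¹AᵀWy, with the same normal matrix supplying the parameter covariance. This is a generic sketch; in the actual conscan system the rows of A encode the scan geometry (radius, period) and y holds carrier-power samples, details not reproduced here, and the line-fit example is purely illustrative.

```python
import numpy as np

def wls(A, y, sigma):
    """Weighted least squares with per-sample uncertainties sigma:
    beta = (A^T W A)^{-1} A^T W y, W = diag(1/sigma^2). The inverse
    normal matrix is the covariance of the estimates, which is how a
    variance equation like the one in the abstract arises."""
    W = 1.0 / np.asarray(sigma) ** 2
    N = A.T @ (W[:, None] * A)
    cov = np.linalg.inv(N)                  # parameter covariance
    beta = cov @ (A.T @ (W * y))
    return beta, cov

x = np.linspace(0.0, 1.0, 50)
A = np.column_stack([x, np.ones_like(x)])
beta, cov = wls(A, 3.0 * x - 2.0, np.full(50, 0.1))   # recovers slope 3, intercept -2
```

The covariance scaling with measurement uncertainty is what links the estimator variance to the carrier-power-to-noise ratio in the analysis above.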

Alvarez, L. S.

1993-01-01

345

Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

NASA Technical Reports Server (NTRS)

In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
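The noise-scale-factor relationship can be sketched with a Poisson simulation: estimate the constant k in rms_noise = k·√(mean signal) from a calibration record, then apply it to a single sample. This is a minimal illustration of the proportionality, not the CALIPSO algorithms; the simulated count level is an assumption.

```python
import numpy as np

def noise_scale_factor(samples):
    """Estimate the noise scale factor (NSF): the constant k in
    rms_noise = k * sqrt(mean_signal). For ideal photon counting
    (pure Poisson, variance = mean) k = 1; analog APD/PMT gain raises
    k above 1 but preserves the proportionality the paper exploits."""
    return np.std(samples) / np.sqrt(np.mean(samples))

rng = np.random.default_rng(1)
counts = rng.poisson(lam=100.0, size=50000)   # ideal photon-counting record
nsf = noise_scale_factor(counts)              # ~1 for a Poisson signal
sigma_single = nsf * np.sqrt(120.0)           # error bar for one sample of mean 120
```

Once k is calibrated, the shot-noise error bar for any single measurement follows from its own signal level, which is the single-sample advantage stressed in the abstract.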

Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

2006-01-01

346

Reliable Computing 2 (3) (1996), pp. 219-228. Fast error estimates for indirect measurements

Fast error estimates for indirect measurements Y = f(x1, ..., xn) are required: if the algorithm f is already time-consuming, the error estimation should not take much longer than the time necessary to compute Y itself. An application to pavement lifetime estimates is given.

Kearfott, R. Baker

347

Background We estimated sufficient setup margins for head-and-neck cancer (HNC) radiotherapy (RT) when 2D kV images are utilized for routine patient setup verification. As another goal we estimated a threshold for the displacements of the most important bony landmarks related to the target volumes requiring immediate attention. Methods We analyzed 1491 orthogonal x-ray images utilized in RT treatment guidance for 80 HNC patients. We estimated overall setup errors and errors for four subregions to account for patient rotation and deformation: the vertebrae C1-2, C5-7, the occiput bone and the mandible. Setup margins were estimated for two 2D image guidance protocols: i) imaging at the first three fractions and weekly thereafter and ii) daily imaging. Two 2D image matching principles were investigated: i) to the vertebrae in the middle of the planning target volume (PTV) (MID_PTV) and ii) minimizing the maximal position error for the four subregions (MIN_MAX). The threshold for the position errors was calculated with two previously unpublished methods based on van Herk's formula and clinical data by retaining a margin of 5 mm sufficient for each subregion. Results Sufficient setup margins to compensate for the displacements of the subregions were approximately two times larger than were needed to compensate for setup errors for a rigid target. Adequate margins varied from 2.7 mm to 9.6 mm depending on the subregions related to the target, the applied image guidance protocol and early correction of clinically important systematic 3D displacements of the subregions exceeding 4 mm. The MIN_MAX match resulted in smaller margins but caused an overall shift of 2.5 mm for the target center. Margins ≤ 5 mm were sufficient with the MID_PTV match only through application of daily 2D imaging and the threshold of 4 mm to correct systematic displacement of a subregion. Conclusions Adequate setup margins depend markedly on the subregions related to the target volume. 
When the systematic 3D displacement of a subregion exceeds 4 mm, it is optimal to correct patient immobilization first. If this is not successful, adaptive replanning should be considered to retain sufficiently small margins. PMID:24020432
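The van Herk formula referenced above is, in its widely quoted population form, M = 2.5Σ + 0.7σ, where Σ and σ are the standard deviations of the systematic and random setup errors. A minimal sketch (the input values below are illustrative, not taken from this study):

```python
def van_herk_margin(sigma_systematic, sigma_random):
    """CTV-to-PTV margin (mm) from the van Herk population formula
    M = 2.5 * Sigma + 0.7 * sigma, which covers the CTV with the 95%
    isodose for 90% of patients. Inputs are standard deviations in mm."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Example: 1.3 mm systematic SD and 2.5 mm random SD give a 5.0 mm margin.
margin = van_herk_margin(1.3, 2.5)
```

The formula makes the abstract's finding intuitive: because the systematic component carries a weight of 2.5, correcting systematic subregion displacements early shrinks the required margin far more than reducing random errors does.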

2013-01-01

348

We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method.

Hongzhi Wang; Sandhitsu R. Das; Jung Wook Suh; Murat Altinay; John Pluta; Caryne Craige; Brian Avants; Paul A. Yushkevich

2011-01-01

349

Background Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have

Reema Sirriyeh; Rebecca Lawton; Peter Gardner; Gerry Armitage

2010-01-01

350

NASA Technical Reports Server (NTRS)

Maximum likelihood estimation of parameters in linear structural relationships under normality assumptions requires knowledge of one or more of the model parameters if no replication is available. The most common assumption added to the model definition is that the ratio of the error variances of the response and predictor variates is known. The use of asymptotic formulae for variances and mean squared errors as a function of sample size and the assumed value for the error variance ratio is investigated.

Lakshminarayanan, M. Y.; Gunst, R. F.

1984-01-01

351

NASA Astrophysics Data System (ADS)

The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ~1400 deg² will constrain the dark energy equation-of-state parameter with an error of δw₀ ~ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256^{+0.054}_{-0.046}.

Shirasaki, Masato; Yoshida, Naoki

2014-05-01

352

Estimation of the extrapolation error in the calibration of type S thermocouples

NASA Astrophysics Data System (ADS)

Measurement results from the calibration performed at NIST of ten new type S thermocouples have been analyzed to estimate the extrapolation error. The thermocouples were calibrated at the fixed points of Zn, Al, Ag and Au, and calibration curves were calculated using different numbers of fixed points. It was found for these thermocouples that the absolute value of the extrapolation error, evaluated by measurement at the Au freezing-point temperature, is at most 0.10 °C and 0.27 °C when the fixed points of Zn, Al and Ag, or the fixed points of Zn and Al, are respectively used to calculate the calibration curve. It is also shown that the absolute value of the extrapolation error, evaluated by measurement at the Ag freezing-point temperature, is at most 0.25 °C when the fixed points of Zn and Al are used to calculate the calibration curve. This study is oriented to help those labs that lack a direct mechanism to achieve a high-temperature calibration. It supports, up to 1064 °C, the application of a procedure similar to that used by Burns and Scroger in NIST SP-250-35 for calibrating a new type S thermocouple. The uncertainty amounts to a few tenths of a degree Celsius.
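The extrapolation test can be sketched as fitting a calibration curve through a subset of fixed points and evaluating it at a higher fixed point. The EMF values below are rough illustrative placeholders, not NIST reference data:

```python
import numpy as np

# ITS-90 freezing-point temperatures (deg C) of Zn, Al and Ag, with
# *hypothetical* measured thermocouple EMFs (mV) -- illustrative only.
t_fp = np.array([419.527, 660.323, 961.78])
emf = np.array([3.45, 5.86, 9.15])

# Quadratic calibration curve E(t) through the three fixed points.
coeffs = np.polyfit(t_fp, emf, deg=2)

# Extrapolate to the Au freezing point and compare with a hypothetical
# measurement there; the difference estimates the extrapolation error.
t_au = 1064.18
emf_au_measured = 10.33                       # hypothetical value
extrapolation_error_mv = np.polyval(coeffs, t_au) - emf_au_measured
```

Converting the EMF difference to a temperature error would then use the local slope dE/dt of the curve near the Au point.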

Giorgio, P.; Garrity, K. M.; Rebagliati, M. Jiménez; García Skabar, J.

2013-09-01

353

NASA Astrophysics Data System (ADS)

Calibration and validation of geophysical measurement systems typically require knowledge of the "true" value of the target variable. However, the data considered to represent the "true" values often include their own measurement errors, biasing calibration and validation results. Triple collocation (TC) can be used to estimate the root-mean-square error (RMSE), using observations from three mutually independent, error-prone measurement systems. Here, we introduce Extended Triple Collocation (ETC): using exactly the same assumptions as TC, we derive an additional performance metric, the correlation coefficient of the measurement system with respect to the unknown target, ρ_{t,Xi}. We demonstrate that ρ²_{t,Xi} is the scaled, unbiased signal-to-noise ratio and provides a complementary perspective compared to the RMSE. We apply it to three collocated wind data sets. Since ETC is as easy to implement as TC, requires no additional assumptions, and provides an extra performance metric, it may be of interest in a wide range of geophysical disciplines.
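Under the TC error model, both the error variances and the ETC correlation ρ_{t,Xi} follow from the 3×3 sample covariance matrix of the collocated series. A sketch assuming, for simplicity, unit sensitivity of each system to the truth (the function name and synthetic data are our own):

```python
import numpy as np

def extended_triple_collocation(x1, x2, x3):
    """Triple collocation with the ETC extension: given three mutually
    independent, collocated measurements of the same target, return
    (rmse, rho), the estimated random-error SD of each system and its
    correlation with the unknown truth."""
    C = np.cov(np.vstack([x1, x2, x3]))  # 3x3 sample covariance matrix
    # Classical TC error-variance estimators
    e = np.array([
        C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2],
        C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2],
        C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1],
    ])
    rmse = np.sqrt(e)
    # ETC correlation of each system with the unknown truth
    rho = np.sqrt(np.array([
        C[0, 1] * C[0, 2] / (C[0, 0] * C[1, 2]),
        C[0, 1] * C[1, 2] / (C[1, 1] * C[0, 2]),
        C[0, 2] * C[1, 2] / (C[2, 2] * C[0, 1]),
    ]))
    return rmse, rho

# Synthetic check: a common truth plus independent noise of known SDs.
rng = np.random.default_rng(1)
t = rng.normal(size=200000)
x1 = t + rng.normal(scale=0.3, size=t.size)
x2 = t + rng.normal(scale=0.5, size=t.size)
x3 = t + rng.normal(scale=0.7, size=t.size)
rmse, rho = extended_triple_collocation(x1, x2, x3)
```

On this synthetic example the recovered RMSEs approach 0.3, 0.5 and 0.7, and ρ decreases as the noise grows, illustrating why ρ² acts as a signal-to-noise ratio.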

McColl, Kaighin A.; Vogelzang, Jur; Konings, Alexandra G.; Entekhabi, Dara; Piles, María.; Stoffelen, Ad

2014-09-01

354

Validation of Rain-Rate Estimation in Hurricanes from the Stepped Frequency Microwave Radiometer (SFMR) against radar rainfall estimates. Examination of the existing SFMR algorithm shows that the coefficient should be revisited; an overall high bias of 5 mm h⁻¹ of the SFMR rainfall estimate relative to radar is also found. The error

Jiang, Haiyan

355

Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

NASA Astrophysics Data System (ADS)

Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≈ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Bulirsch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humboldt University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step size h_{n+1} = h(t_n; δ) so that h(t; δ) is a continuous function of t. In this paper a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the step size to change if the step-size ratio between two consecutive steps is close to unity. 
This theory is applied to obtain global error estimations in a few problems that have been solved with the code Gauss2 [S. Gonzalez-Pinto, R. Rojas-Bello, Gauss2, a Fortran 90 code for second order initial value problems,

Calvo, M.; González-Pinto, S.; Montijano, J. I.

2008-09-01

356

Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors

NASA Astrophysics Data System (ADS)

Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem because it is theoretically almost intractable. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package: bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise moving-block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
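The pairwise moving-block bootstrap for correlation can be sketched in a few lines: blocks of consecutive (x, y) pairs are resampled together, preserving both the cross-correlation and the serial dependence within each series. The block length and synthetic series below are illustrative choices, not the authors' settings:

```python
import numpy as np

def pairwise_mbb_correlation(x, y, block_len, n_boot=2000, seed=0):
    """Pairwise moving-block bootstrap replications of Pearson's r.
    Each bootstrap sample is built from randomly placed blocks of
    consecutive (x, y) pairs, so autocorrelation is retained."""
    rng = np.random.default_rng(seed)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    max_start = n - block_len + 1
    reps = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, max_start, size=n_blocks)
        idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
        reps[b] = np.corrcoef(x[idx], y[idx])[0, 1]
    return reps

# Two autocorrelated series sharing a common random-walk signal.
rng = np.random.default_rng(2)
t = np.cumsum(rng.normal(size=500))
x = t + rng.normal(scale=2.0, size=500)
y = t + rng.normal(scale=2.0, size=500)
reps = pairwise_mbb_correlation(x, y, block_len=25)
ci = np.percentile(reps, [5, 95])   # 90% percentile confidence interval for r
```

An ordinary (pair-by-pair) bootstrap would destroy the serial dependence and typically yield intervals that are too narrow for autocorrelated climate series; the block structure is what restores honest coverage.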

Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.

2012-12-01

357

ERIC Educational Resources Information Center

The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…

Paek, Insu; Cai, Li

2014-01-01

358

Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes

We present a new method of metric recovery for minimization of L_p-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

Lipnikov, Konstantin (Los Alamos National Laboratory); Agouzal, Abdellatif (Université de Lyon, France); Vassilevski, Yuri (Russia)

2009-01-01

359

Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(−m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ−1) ∂ₓ^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

Wilkening, Jon

2008-12-10

360

NASA Astrophysics Data System (ADS)

Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important for planning conservation and restoration. Ultrasonic measurement is one of the commonly used techniques to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically a portable ultrasonic device, the PUNDIT with exponential sensors, is used. However, many factors can cause errors in measurements, such as operators, sensor layouts or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and different sensor directions (anisotropy). For operator bias, we found no significant differences by the operator's sex, while the pressure an operator exerts can create a larger error in measurements. Calibrating with a standard sample for each operator is essential in this case. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives a lower velocity than the real one. We found that the correction coefficient differs slightly among rock types: 1.50 for granite and sandstone and 1.46 for marble. For the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity, though they are considered isotropic at the macroscopic scale. Thus averaging four directional measurements (0°, 45°, 90°, 135°) gives much smaller errors (the variance is 2-3 times smaller). In conclusion, we quantitatively reported the errors in ultrasonic measurement of stone cultural properties from various sources and suggested the amount of correction and procedures to calibrate the measurements. 
Acknowledgement: This study, which forms a part of the project, has been achieved with the support of national R&D project, which has been hosted by National Research Institute of Cultural Heritage of Cultural Heritage Administration(No. NRICH-1107-B01F).

Lee, Y.; Keehm, Y.

2011-12-01

361

Atomic to Continuum Passage for Nanotubes: A Discrete Saint-Venant Principle and Error Estimates

NASA Astrophysics Data System (ADS)

We consider general infinite nanotubes of atoms in where each atom interacts with all the others through a two-body potential. At the equilibrium, the positions of the atoms satisfy a Euler-Lagrange equation. When there are no exterior forces and for a suitable geometry, a particular family of nanotubes is the set of perfect nanotubes at the equilibrium. When exterior forces are applied on the nanotube, we compare the nanotube to nanotubes of the previous family. In part I of the paper, this quantitative comparison is formulated in our first main result as a discrete Saint-Venant principle. As a corollary, we also give a Liouville classification result. Our Saint-Venant principle can be derived for a large class of potentials (including the Lennard-Jones potential), when the perfect nanotubes at the equilibrium are stable. The approach is designed to be applicable to nanotubes that can have general shapes like, for instance, carbon nanotubes or DNA, under the oversimplified assumption that all the atoms are identical. In part II of the paper, we derive from our Saint-Venant principle a macroscopic mechanical model for general nanotubes. We prove that every solution is well approximated by the solution of a continuum model involving stretching and twisting, but no bending. We establish error estimates between the discrete and the continuous solution. More precisely we give two error estimates: one at the microscopic level and one at the macroscopic level.

El Kass, D.; Monneau, R.

2014-07-01

362

The estimation of calibration equations for variables with heteroscedastic measurement errors.

In clinical chemistry and medical research, there is often a need to calibrate the values obtained from an old or discontinued laboratory procedure to the values obtained from a new or currently used laboratory method. The objective of the calibration study is to identify a transformation that can be used to convert the test values of one laboratory measurement procedure into the values that would be obtained using another measurement procedure. However, in the presence of heteroscedastic measurement error, there is no good statistical method available for estimating the transformation. In this paper, we propose a set of statistical methods for a calibration study when the magnitude of the measurement error is proportional to the underlying true level. The corresponding sample size estimation method for conducting a calibration study is discussed as well. The proposed new method is theoretically justified and evaluated for its finite sample properties via an extensive numerical study. Two examples based on real data are used to illustrate the procedure. Copyright © 2014 John Wiley & Sons, Ltd. PMID:24935784

Tian, Lu; Durazo-Arvizu, Ramón A; Myers, Gary; Brooks, Steve; Sarafin, Kurtis; Sempos, Christopher T

2014-11-01

363

A model of time estimation and error feedback in predictive timing behavior.

Two key features of sensorimotor prediction are preprogramming and adjusting of performance based on previous experience. Oculomotor tracking of alternating visual targets provides a simple paradigm to study this behavior in the motor system; subjects make predictive eye movements (saccades) at fast target pacing rates (>0.5 Hz). In addition, the initiation errors (latencies) during predictive tracking are correlated over a small temporal window (correlation window) suggesting that tracking performance within this time range is used in the feedback process of the timing behavior. In this paper, we propose a closed-loop model of this predictive timing. In this model, the timing between movements is based on an internal estimation of stimulus timing (an internal clock), which is represented by a (noisy) signal integrated to a threshold. The threshold of the integrate-to-fire mechanism is determined by the timing between movements made within the correlation window of previous performance and adjusted by feedback of recent and projected initiation error. The correlation window size increases with repeated tracking and was estimated by two independent experiments. We apply the model to several experimental paradigms and show that it produces data specific to predictive tracking: a gradual shift from reaction to prediction on initial tracking, phase transition and hysteresis as pacing frequency changes, scalar property, continuation of predictive tracking despite perturbations, and intertrial correlations of a specific form. These results suggest that the process underlying repetitive predictive motor timing is adjusted by the performance and the corresponding errors accrued over a limited time range and that this range increases with continued confidence in previous performance. PMID:18563546
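The integrate-to-threshold idea with error feedback can be illustrated with a toy simulation. The update rule and the parameter names below are our own simplifications for illustration, not the authors' exact model:

```python
import numpy as np

def simulate_predictive_timing(n_trials=200, isi=1.0, noise_sd=0.02,
                               window=4, gain=0.5, seed=0):
    """Toy integrate-to-threshold timing model (a sketch, not the authors'
    formulation). On each trial a noisy clock produces an interval set by
    the current threshold; the threshold is then re-set to the mean of the
    intervals produced within a recent correlation window, nudged by a
    feedback gain applied to the last initiation error."""
    rng = np.random.default_rng(seed)
    intervals = []
    threshold = isi                      # initial guess of the stimulus interval
    for _ in range(n_trials):
        produced = threshold * (1.0 + rng.normal(scale=noise_sd))
        intervals.append(produced)
        recent = intervals[-window:]     # limited memory of past performance
        error = produced - isi           # initiation (timing) error
        threshold = np.mean(recent) - gain * error
    return np.array(intervals)

intervals = simulate_predictive_timing()
```

Even this toy version reproduces two qualitative features described above: produced intervals converge to the pacing interval, and successive intervals are correlated because each threshold depends on the last few trials.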

Joiner, Wilsaan M; Shelhamer, Mark

2009-02-01

364

We propose a simple and unified approach for a posteriori error estimation and adaptive mesh refinement in finite element analysis using multiresolution signal processing principles. Given a sequence of nested discretizations ...

Sudarshan, Raghunathan, 1978-

2005-01-01

365

Consistency & Numerical Smoothing Error Estimation --An Alternative of the Lax-Richtmyer Theorem

indicator remains bounded; (3) it allows error propagation to be analyzed with differential-equation stability. Numerical stability is about how a numerical scheme propagates error; Figure 1 gives an explanation. The error at t_{n+1}, obtained by using a numerical scheme, is split into local error and propagation error.

Sun, Tong

366

The purpose of this article is to carry out a power-spectrum analysis (based on likelihood methods) of the Super-Kamiokande 5-day dataset that takes account of the asymmetry in the error estimates. Whereas the likelihood analysis involves a linear optimization procedure for symmetrical error estimates, it involves a nonlinear optimization procedure for asymmetrical error estimates. We find that for most frequencies there is little difference between the power spectra derived from analyses of symmetrized error estimates and from asymmetrical error estimates. However, this proves not to be the case for the principal peak in the power spectra, which is found at 9.43 yr⁻¹. A likelihood analysis which allows for a "floating offset" and takes account of the start time and end time of each bin and of the flux estimate and the symmetrized error estimate leads to a power of 11.24 for this peak. A Monte Carlo analysis shows that there is a chance of only 1% of finding a peak this big or bigger in the frequency band 1-36 yr⁻¹ (the widest band that avoids artificial peaks). On the other hand, an analysis that takes account of the error asymmetry leads to a peak with power 13.24 at that frequency. A Monte Carlo analysis shows that there is a chance of only 0.1% of finding a peak this big or bigger in that frequency band 1-36 yr⁻¹. From this perspective, power spectrum analysis that takes account of asymmetry of the error estimates gives evidence for variability that is significant at the 99.9% level. We comment briefly on an apparent discrepancy between power spectrum analyses of the Super-Kamiokande and SNO solar neutrino experiments.

P. A. Sturrock; J. D. Scargle

2006-01-30

367

We provide a unified description of efficiency correction and error estimation for moments of distributions of conserved quantities in heavy-ion collisions. Moments and cumulants are expressed in terms of factorial moments, which can be easily corrected for the efficiency effect. By deriving the covariance between factorial moments, we obtain the error formula for efficiency-corrected moments based on error propagation via the Delta theorem. Monte Carlo simulation based on the Skellam distribution shows that the obtained errors reflect well the statistical fluctuations of the efficiency-corrected moments, while the bootstrap method overestimates the statistical uncertainties when the efficiency is small.
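For a constant binomial detection efficiency ε, the measured factorial moments f_k relate to the true ones by f_k = ε^k F_k, so the correction is a simple division. A minimal sketch of that special case (a single constant efficiency; the full approach above handles more general settings):

```python
import numpy as np

def factorial_moments(samples, kmax):
    """Sample factorial moments f_k = <N(N-1)...(N-k+1)> for k = 1..kmax."""
    n = np.asarray(samples, dtype=float)
    fm, prod = [], np.ones_like(n)
    for k in range(1, kmax + 1):
        prod = prod * (n - (k - 1))
        fm.append(prod.mean())
    return np.array(fm)

def efficiency_corrected_moments(samples, eff, kmax=2):
    """Correct for constant binomial efficiency: f_k = eff**k * F_k,
    so the true factorial moments are F_k = f_k / eff**k."""
    f = factorial_moments(samples, kmax)
    return f / eff ** np.arange(1, kmax + 1)

# Check on Poisson data thinned with efficiency 0.6; for Poisson(lam)
# the true factorial moments are F_k = lam**k.
rng = np.random.default_rng(3)
true_counts = rng.poisson(lam=5.0, size=200000)
measured = rng.binomial(true_counts, 0.6)
F = efficiency_corrected_moments(measured, eff=0.6, kmax=2)
```

Ordinary moments and cumulants are then linear combinations of the corrected F_k, which is why the abstract's error propagation is carried out at the factorial-moment level.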

Luo, Xiaofeng

2014-01-01

368

We have developed a computerized method for estimating patient setup errors in portal images based on localized pelvic templates for prostate cancer radiotherapy. The patient setup errors were estimated with a template-matching technique that compared the portal image and a localized pelvic template image, with a clinical target volume produced from a digitally reconstructed radiograph (DRR) image of each patient. We evaluated the proposed method by calculating the residual error between the patient setup error obtained by the proposed method and the gold-standard setup error determined by consensus between two radiation oncologists. Eleven training cases with prostate cancer were used for development of the proposed method, and then we applied the method to 10 test cases as a validation test. As a result, the residual errors in the anterior–posterior, superior–inferior and left–right directions were smaller than 2 mm for the validation test. The mean residual error was 2.65 ± 1.21 mm in the Euclidean distance for training cases, and 3.10 ± 1.49 mm for the validation test. There was no statistically significant difference in the residual error between the test for training cases and the validation test (P = 0.438). The proposed method appears to be robust for detecting patient setup error in the treatment of prostate cancer radiotherapy. PMID:22843375
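The core of such a template-matching step, exhaustive normalized cross-correlation of a reference patch over the portal image, can be sketched as follows. The synthetic scene, the displacement and the function name are hypothetical illustrations, not the study's implementation:

```python
import numpy as np

def estimate_shift(template, image):
    """Locate a template in an image by exhaustive normalized
    cross-correlation; returns the (row, col) of the best-matching patch."""
    th, tw = template.shape
    ih, iw = image.shape
    tz = (template - template.mean()) / (template.std() + 1e-12)
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            pz = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = float((tz * pz).mean())
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# Hypothetical example: a DRR-like reference patch and a noisy "portal
# image" in which the anatomy has been displaced by (3, 5) pixels.
rng = np.random.default_rng(4)
scene = np.zeros((60, 60))
scene[20:30, 25:35] = 1.0                      # stand-in for a bony landmark
template = scene[15:35, 20:40].copy()          # reference patch taken at (15, 20)
image = np.roll(scene, (3, 5), axis=(0, 1)) + rng.normal(scale=0.05, size=scene.shape)
y, x = estimate_shift(template, image)
setup_error = (y - 15, x - 20)                 # recovered displacement in pixels
```

In practice FFT-based correlation replaces the double loop for speed, but the estimator is the same: the setup error is the offset that maximizes the match score.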

Arimura, Hidetaka; Itano, Wataru; Shioyama, Yoshiyuki; Matsushita, Norimasa; Magome, Taiki; Yoshitake, Tadamasa; Anai, Shigeo; Nakamura, Katsumasa; Yoshidome, Satoshi; Yamagami, Akihiko; Honda, Hiroshi; Ohki, Masafumi; Toyofuku, Fukai; Hirata, Hideki

2012-01-01

369

NASA Astrophysics Data System (ADS)

The spatial distribution of precipitation is a crucial issue for many hydrological investigations, such as the estimation of flash floods, landsliding, etc. The structure of precipitation in high resolution space and time scales is practically impossible to be recorded by rain gages, especially in mountainous areas where the gage network density is usually very low. For this reason the investigation of the structure of precipitation in space and time using remote sensing and especially weather radars is a more promising alternative. In fact, a significant research effort has recently been devoted to the construction of spatio-temporal stochastic models that can capture the observed behavior from radar-derived estimates of precipitation. However, relatively little attention has been given to the fact that those estimates can be highly unreliable which may have severe consequences on model parameterization. In this study we show the influence of measurement error in radar-derived precipitation on the most widely used two-dimensional scaling estimators. The objective of our study is to quantify the effects of three main problems with radar-derived precipitation: noise contamination, clutter (e.g. due to anomalous propagation of the signal) and the issue of logarithmically binned intensity classes in which precipitation data are usually stored and delivered. We first illustrate the measurement problem by comparing two different operational weather radar products of MeteoSwiss (RAIN, NASS) based on different noise filtering techniques. We quantify the discrepancies in scaling estimators from radar precipitation with a Monte Carlo framework. We simulate random spatial fields using three of the most widely used stochastic models for spatial precipitation: discrete multiplicative random cascades, universal multifractals and a spectral model. 
We reconstruct the simulated fields including noise contamination, clutter and binning, and we quantify the differences between theoretical and observed scaling estimators. The main aim of the study is to identify the most robust scaling estimators that would lead to reliable stochastic models. Our main results are that multifractal estimators such as moment scaling exponents can be seriously affected by the problems connected to radar precipitation fields, while spectral estimators are more robust and insensitive to the problems of noise and the class-binning quantization.

Molnar, P.; Paschalis, A.; Burlando, P.

2011-12-01

370

Intermediate-mass-ratio inspirals in the Einstein Telescope. II. Parameter estimation errors

We explore the precision with which the Einstein Telescope will be able to measure the parameters of intermediate-mass-ratio inspirals, i.e., the inspirals of stellar mass compact objects into intermediate-mass black holes (IMBHs). We calculate the parameter estimation errors using the Fisher Matrix formalism and present results of Monte Carlo simulations of these errors over choices for the extrinsic parameters of the source. These results are obtained using two different models for the gravitational waveform which were introduced in paper I of this series. These two waveform models include the inspiral, merger, and ringdown phases in a consistent way. One of the models, based on the transition scheme of Ori and Thorne [A. Ori and K. S. Thorne, Phys. Rev. D 62, 124022 (2000)], is valid for IMBHs of arbitrary spin, whereas the second model, based on the effective-one-body approach, has been developed to cross-check our results in the nonspinning limit. In paper I of this series, we demonstrated the excellent agreement in both phase and amplitude between these two models for nonspinning black holes, and that their predictions for signal-to-noise ratios are consistent to within 10%. We now use these waveform models to estimate parameter estimation errors for binary systems with masses 1.4 M⊙ + 100 M⊙, 10 M⊙ + 100 M⊙, 1.4 M⊙ + 500 M⊙, and 10 M⊙ + 500 M⊙ and various choices for the spin of the central IMBH. Assuming a detector network of three Einstein Telescopes, the analysis shows that for a 10 M⊙ compact object inspiralling into a 100 M⊙ IMBH with spin q=0.3, detected with a signal-to-noise ratio of 30, we should be able to determine the compact object and IMBH masses, and the IMBH spin magnitude to fractional accuracies of ~10^-3, ~10^-3.5, and ~10^-3, respectively.
We also expect to determine the location of the source in the sky and the luminosity distance to within ~0.003 steradians and ~10%, respectively. We also compute results for several different possible configurations of the detector network to assess how the precision of parameter determination depends on the network configuration.
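As a rough illustration of the Fisher-matrix recipe used above, the sketch below applies it to a toy damped-sinusoid "waveform" with white noise. All names and values are invented; the paper's waveform models and detector noise curves are far more elaborate.

```python
import numpy as np

# Toy "waveform" model: a damped sinusoid h(t; A, f, tau), standing in
# for the inspiral templates of the paper (illustrative only).
def h(t, A, f, tau):
    return A * np.exp(-t / tau) * np.sin(2 * np.pi * f * t)

def fisher_errors(t, theta, sigma, eps=1e-6):
    """1-sigma parameter errors from the Fisher matrix, assuming white
    Gaussian noise of standard deviation `sigma` per sample."""
    theta = np.asarray(theta, dtype=float)
    # Numerical derivatives dh/dtheta_i (central differences).
    J = np.empty((len(t), len(theta)))
    for i in range(len(theta)):
        dp, dm = theta.copy(), theta.copy()
        dp[i] += eps * theta[i]
        dm[i] -= eps * theta[i]
        J[:, i] = (h(t, *dp) - h(t, *dm)) / (2 * eps * theta[i])
    F = J.T @ J / sigma**2          # Fisher information matrix
    cov = np.linalg.inv(F)          # Cramer-Rao covariance estimate
    return np.sqrt(np.diag(cov))

t = np.linspace(0, 10, 2000)
errs = fisher_errors(t, theta=[1.0, 0.5, 3.0], sigma=0.1)
```

Since the Fisher matrix scales with the squared signal amplitude, the predicted errors shrink inversely with signal-to-noise ratio, which is why the quoted accuracies are tied to a specific SNR.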

Huerta, E. A.; Gair, Jonathan R. [Institute of Astronomy, Madingley Road, CB3 0HA Cambridge (United Kingdom)

2011-02-15

371

A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding those sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error had a dependent relationship with tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was +/- 15%, and the expert range was +/- 9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of +/- 0.011 kg C/yr (vs. +/- 0.002 kg C/yr) per stem. 
Using a citizen science model for monitoring carbon stocks not only has benefits in educating and engaging the public in science, but as demonstrated here, can also provide accurate estimates of biomass or forest carbon stocks. PMID:23865241
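The sensitivity of biomass estimates to diameter measurement error can be sketched with a generic allometric power law. The coefficients and plot diameters below are invented, not the study's species-specific equations; only the 2.3 mm and 1.4 mm sampling errors come from the abstract.

```python
import numpy as np

# Illustrative allometric form B = a * D**b (coefficients are made up;
# real studies use species-specific published equations).
a, b = 0.25, 2.4

def biomass_kg(d_mm):
    return a * (d_mm / 10.0) ** b   # diameter in mm -> cm

# Diameters of a hypothetical plot, and the mean sampling errors
# reported above (2.3 mm volunteers, 1.4 mm experts).
diams = np.array([120.0, 250.0, 340.0, 80.0, 510.0])   # mm

def pct_change(err_mm):
    """Percent change in plot biomass if every diameter is off by err_mm."""
    base = biomass_kg(diams).sum()
    pert = biomass_kg(diams + err_mm).sum()
    return 100.0 * (pert - base) / base

vol, exp_ = pct_change(2.3), pct_change(1.4)
```

Because biomass scales roughly as D^b, a relative diameter error of dD/D translates into about b times that relative error in biomass, which is why millimetre-scale errors stay in the low percent range.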

Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi

2013-06-01

372

An agent-based modelling approach to estimate error in gyrodactylid population growth.

Comparative studies of gyrodactylid monogeneans on different host species or strains rely upon the observation of growth on individual fish maintained within a common environment, summarised using maximum likelihood statistical approaches. Here we describe an agent-based model of gyrodactylid population growth, which we use to evaluate errors due to stochastic reproductive variation in such experimental studies. Parameters for the model use available fecundity and mortality data derived from previously published life tables of Gyrodactylus salaris, and use a new data set of fecundity and mortality statistics for this species on the Neva stock of Atlantic salmon, Salmo salar. Mortality data were analysed using a mark-recapture analysis software package, allowing maximum-likelihood estimation of daily survivorship and mortality. We consistently found that a constant age-specific mortality schedule was most appropriate for G. salaris in experimental datasets, with a daily survivorship of 0.84 at 13°C. This, however, gave unrealistically low population growth rates when used as parameters in the model, and a schedule of constantly increasing mortality was chosen as the best compromise for the model. The model also predicted a realistic age structure for the simulated populations, with 0.32 of the population not yet having given birth for the first time (pre-first birth). The model demonstrated that the population growth rate can be a useful parameter for comparing gyrodactylid populations when these are larger than 20-30 individuals, but that stochastic error rendered the parameter unusable in smaller populations. It also showed that the declining parasite population growth rate typically observed during the course of G. salaris infections cannot be explained through stochastic error and must therefore have a biological basis. 
Finally, the study showed that most gyrodactylid-host studies of this type are too small to detect subtle differences in local adaptation of gyrodactylid monogeneans between fish stocks. PMID:22771983
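A stripped-down, hypothetical individual-based model can illustrate why stochastic error swamps the growth-rate signal in small founding populations. The daily survivorship of 0.84 is taken from the abstract; the birth schedule below is an invented stand-in for the life-table parameters.

```python
import random

# Minimal individual-based sketch of parasite population growth.
SURVIVAL = 0.84          # daily survivorship (from the abstract)
FIRST_BIRTH_AGE = 2      # days until first birth (hypothetical)
BIRTH_PROB = 0.5         # daily birth probability once mature (hypothetical)

def simulate(n0, days, rng):
    """Return final population size after `days`, starting from n0 agents."""
    ages = [0] * n0
    for _ in range(days):
        nxt = []
        for age in ages:
            if rng.random() < SURVIVAL:          # survive the day
                nxt.append(age + 1)
                if age >= FIRST_BIRTH_AGE and rng.random() < BIRTH_PROB:
                    nxt.append(0)                # one offspring
        ages = nxt
    return len(ages)

rng = random.Random(42)
# Stochastic spread of final size: small vs larger founding populations.
small = [simulate(3, 15, rng) for _ in range(200)]
large = [simulate(30, 15, rng) for _ in range(200)]

def cv(xs):
    """Coefficient of variation across replicate runs."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return (var ** 0.5) / m if m else float("inf")
```

The coefficient of variation of the replicate outcomes is markedly larger for the small founding population, echoing the abstract's conclusion that growth rate is only a usable comparative parameter above roughly 20-30 individuals.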

Ramírez, Raúl; Harris, Philip D; Bakke, Tor A

2012-08-01

373

Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used. PMID:23667331
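The sampling-error half of this picture is easy to sketch for a coverage proportion. The design effect `deff` below is a hypothetical value; real surveys derive intervals from the full complex-sample design rather than this shortcut.

```python
import math

def coverage_ci(covered, n, deff=1.0, z=1.96):
    """Approximate 95% CI for an intervention-coverage proportion.
    `deff` is the design effect of the complex sample (1.0 = simple
    random sampling); values here are illustrative."""
    p = covered / n
    se = math.sqrt(deff * p * (1 - p) / n)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical survey: 540 of 900 sampled children received the intervention.
lo, hi = coverage_ci(540, 900, deff=1.8)
```

Note that the interval quantifies only sampling error; information error and bias, as stressed above, are invisible to it.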

Eisele, Thomas P.; Rhoda, Dale A.; Cutts, Felicity T.; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J. D.; Arnold, Fred

2013-01-01

374

Wavelet-generalized least squares: a new BLU estimator of linear regression models with 1/f errors.

Long-memory noise is common to many areas of signal processing and can seriously confound estimation of linear regression model parameters and their standard errors. Classical autoregressive moving average (ARMA) methods can adequately address the problem of linear time invariant, short-memory errors but may be inefficient and/or insufficient to secure type 1 error control in the context of fractal or scale invariant noise with a more slowly decaying autocorrelation function. Here we introduce a novel method, called wavelet-generalized least squares (WLS), which is (to a good approximation) the best linear unbiased (BLU) estimator of regression model parameters in the context of long-memory errors. The method also provides maximum likelihood (ML) estimates of the Hurst exponent (which can be readily translated to the fractal dimension or spectral exponent) characterizing the correlational structure of the errors, and the error variance. The algorithm exploits the whitening or Karhunen-Loève-type property of the discrete wavelet transform to diagonalize the covariance matrix of the errors generated by an iterative fitting procedure after both data and design matrix have been transformed to the wavelet domain. Properties of this estimator, including its Cramér-Rao bounds, are derived theoretically and compared to its empirical performance on a range of simulated data. Compared to ordinary least squares and ARMA-based estimators, WLS is shown to be more efficient and to give excellent type 1 error control. The method is also applied to some real (neurophysiological) data acquired by functional magnetic resonance imaging (fMRI) of the human brain. We conclude that wavelet-generalized least squares may be a generally useful estimator of regression models in data complicated by long-memory or fractal noise. PMID:11771991
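A minimal sketch of the wavelet-domain idea, assuming a Haar transform and a power-of-two sample length. The published estimator's details, including the ML Hurst-exponent fit, are omitted; the per-scale variance weighting below is a simple stand-in.

```python
import numpy as np

def haar_dwt(v):
    """Orthonormal Haar DWT: detail bands (fine -> coarse) plus the
    final approximation band. Input length must be a power of two."""
    bands, a = [], np.asarray(v, dtype=float)
    while len(a) > 1:
        bands.append((a[0::2] - a[1::2]) / np.sqrt(2))
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
    bands.append(a)
    return bands

def wgls_fit(X, y, n_iter=3):
    """Sketch of wavelet-domain GLS: transform data and design to the
    wavelet domain (which approximately whitens 1/f errors), then do
    iterative variance-weighted least squares with per-scale weights."""
    ybands = haar_dwt(y)
    sizes = [len(b) for b in ybands]
    idx = np.repeat(np.arange(len(sizes)), sizes)
    yw = np.concatenate(ybands)
    Xw = np.column_stack([np.concatenate(haar_dwt(c)) for c in X.T])
    beta = np.linalg.lstsq(Xw, yw, rcond=None)[0]          # OLS start
    for _ in range(n_iter):
        r = yw - Xw @ beta
        var = np.empty(len(sizes))
        for j in range(len(sizes)):
            rj = r[idx == j]
            # Extrapolate tiny coarse bands (heuristic doubling).
            var[j] = np.mean(rj ** 2) if len(rj) >= 8 else 2.0 * var[j - 1]
        w = 1.0 / var[idx]
        WX = Xw * w[:, None]
        beta = np.linalg.solve(Xw.T @ WX, WX.T @ yw)
    return beta

# Linear trend observed through long-memory-like (random-walk) noise.
rng = np.random.default_rng(0)
n = 256
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
noise = np.cumsum(rng.normal(0.0, 0.02, n))
y = X @ np.array([1.0, 2.0]) + noise
beta = wgls_fit(X, y)
```

The essential design choice is that the wavelet transform turns the dense covariance matrix of 1/f errors into an approximately diagonal one, so ordinary per-coefficient weighting recovers near-BLU behavior.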

Fadili, M J; Bullmore, E T

2002-01-01

375

NASA Astrophysics Data System (ADS)

One of the most fundamental challenges in predictive modeling and simulation involving materials is quantifying and minimizing the errors that originate from the use of approximate constitutive laws (with uncertain parameters and/or model form). We propose to use functional derivatives of the quantity of interest (QoI) with respect to the input constitutive laws to quantify how the QoI depends on the entire input functions as opposed to its parameters as is common practice. This functional sensitivity can be used to (i) quantify the prediction uncertainty originating from uncertainties in the input functions; (ii) compute a first-order correction to the QoI when a more accurate constitutive law becomes available, and (iii) rank possible high-fidelity simulations in terms of the expected reduction in the error of the predicted QoI. We demonstrate the proposed approach with two examples involving solid mechanics where linear elasticity is used as the low-fidelity constitutive law and a materials model including non-linearities is used as the high-fidelity law. These examples show that functional uncertainty quantification not only provides an exact correction to the coarse prediction if the high-fidelity model is completely known but also a high-accuracy estimate of the correction with only a few evaluations of the high-fidelity model. The proposed approach is generally applicable and we foresee it will be useful to determine where and when high-fidelity information is required in predictive simulations.
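The first-order correction idea can be sketched on a zero-dimensional example: take linear elasticity as the low-fidelity constitutive law and a cubic law as a hypothetical high-fidelity one, then correct the low-fidelity prediction using the model-form discrepancy divided by the low-fidelity tangent stiffness. All numbers are invented.

```python
# Low-fidelity constitutive law: linear elasticity (stress = k * strain).
k = 100.0
def sigma_lo(eps):
    return k * eps

# Hypothetical "high-fidelity" law with a cubic nonlinearity.
c = 400.0
def sigma_hi(eps):
    return k * eps + c * eps**3

F = 12.0                       # applied stress; the QoI is the strain

eps_lo = F / k                 # low-fidelity prediction

# First-order correction from the sensitivity of the QoI to the law:
# evaluate the model-form discrepancy at the low-fidelity state and
# divide by the low-fidelity tangent stiffness.
d_sigma = sigma_hi(eps_lo) - sigma_lo(eps_lo)
eps_corr = eps_lo - d_sigma / k

# Exact high-fidelity answer by bisection, for comparison.
lo, hi_ = 0.0, 1.0
for _ in range(80):
    mid = 0.5 * (lo + hi_)
    if sigma_hi(mid) < F:
        lo = mid
    else:
        hi_ = mid
eps_hi = 0.5 * (lo + hi_)
```

A single evaluation of the high-fidelity law at the low-fidelity state removes most of the prediction error, which mirrors the abstract's claim that a few high-fidelity evaluations yield a high-accuracy correction.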

Strachan, Alejandro; Mahadevan, Sankaran; Hombal, Vadiraj; Sun, Lin

2013-09-01

376

The model of mismatch negativity (MMN) as a simple index of change detection has been superseded by a richer understanding of how this event-related potential (ERP) reflects the representation of the sound environment in the brain. Our conceptualization of why the MMN is altered in certain groups must also evolve along with a better understanding of the activities reflected by this component. The detection of change incorporates processes enabling an automatic registration of "sameness", a memory for such regularities and the application of this recent acoustic context to interpreting the present and future state of the environment. It also includes "weighting" the importance of this change to an organism's behaviour. In this light, the MMN has been considered a prediction error signal that occurs when the brain detects that the present state of the world violates a context-driven expectation about the environment. In this paper we revisit the consistent observation of reduced MMN amplitude in patients with schizophrenia. We review existing data to address whether the apparent deficit might reflect problems in prediction error generation, estimation or salience. Possible interpretations of MMN studies in schizophrenia are linked to dominant theories about the neurobiology of the illness. PMID:22020271

Todd, Juanita; Michie, Patricia T; Schall, Ulrich; Ward, Philip B; Catts, Stanley V

2012-02-01

377

Background A widely-used approach for screening nuclear DNA markers is to obtain sequence data and use bioinformatic algorithms to estimate which two alleles are present in heterozygous individuals. It is common practice to omit unresolved genotypes from downstream analyses, but the implications of this have not been investigated. We evaluated the haplotype reconstruction method implemented by PHASE in the context of phylogeographic applications. Empirical sequence datasets from five non-coding nuclear loci with gametic phase ascribed by molecular approaches were coupled with simulated datasets to investigate three key issues: (1) haplotype reconstruction error rates and the nature of inference errors, (2) dataset features and genotypic configurations that drive haplotype reconstruction uncertainty, and (3) impacts of omitting unresolved genotypes on levels of observed phylogenetic diversity and the accuracy of downstream phylogeographic analyses. Results We found that PHASE usually had very low false-positives (i.e., a low rate of confidently inferring haplotype pairs that were incorrect). The majority of genotypes that could not be resolved with high confidence included an allele occurring only once in a dataset, and genotypic configurations involving two low-frequency alleles were disproportionately represented in the pool of unresolved genotypes. The standard practice of omitting unresolved genotypes from downstream analyses can lead to considerable reductions in overall phylogenetic diversity that is skewed towards the loss of alleles with larger-than-average pairwise sequence divergences, and in turn, this causes systematic bias in estimates of important population genetic parameters. Conclusions A combination of experimental and computational approaches for resolving phase of segregating sites in phylogeographic applications is essential. 
We outline practical approaches to mitigating potential impacts of computational haplotype reconstruction on phylogeographic inferences. With targeted application of laboratory procedures that enable unambiguous phase determination via physical isolation of alleles from diploid PCR products, relatively little investment of time and effort is needed to overcome the observed biases. PMID:20429950

2010-01-01

378

We investigated whether errors occur in the estimation of ovine maternal-fetal glucose (Glc) kinetics using the isotope dilution technique when the Glc pool is rapidly expanded by exogenous (protocol A) or endogenous (protocol C) Glc entry and sought possible solutions (protocol B). In protocol A (n = 8), after attaining steady-state Glc specific activity (SA) by (U-14C)glucose (period 1), infusion of Glc (period 2) predictably decreased Glc SA, whereas (U-14C)glucose concentration unexpectedly rose from 7,208 +/- 367 (means +/- SE) in period 1 to 8,558 +/- 308 disintegrations/min (dpm) per ml in period 2 (P less than 0.01). Fetal endogenous Glc production (EGP) was negligible during period 1 (0.44 +/- 1.0), but yielded a physiologically impossible negative value of -2.1 +/- 0.72 mg.kg-1.min-1 during period 2. When the fall in Glc SA during Glc infusion was prevented by addition of (U-14C)glucose admixed with the exogenous Glc (protocol B; n = 7), EGP was no longer negative. In protocol C (n = 6), sequential infusions of four increasing doses of epinephrine serially decreased SA, whereas tracer Glc increased from 7,483 +/- 608 to 11,525 +/- 992 dpm/ml plasma (P less than 0.05), imposing an obligatory underestimation of EGP. Thus a tracer mixing problem leads to erroneous estimations of fetal Glc utilization and Glc production via the three-compartment model in sheep when the Glc pool is expanded exogenously or endogenously. These errors can be minimized by maintaining the Glc SA relatively constant.
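The steady-state isotope-dilution arithmetic behind such estimates is short. The numbers below are invented for illustration; they only demonstrate how an SA that is kept artificially high by a mixing artifact can drive the computed EGP negative.

```python
# Steady-state isotope-dilution arithmetic (illustrative numbers only).
# Total appearance rate Ra = tracer infusion rate / specific activity (SA);
# endogenous glucose production EGP = Ra - exogenous infusion rate.
F_TRACER = 4.0e4       # tracer infusion, dpm/kg/min (hypothetical)

def egp(sa_dpm_per_mg, exogenous_mg_kg_min):
    ra = F_TRACER / sa_dpm_per_mg        # total glucose appearance
    return ra - exogenous_mg_kg_min

# Period 1: no exogenous glucose, SA in steady state.
egp1 = egp(sa_dpm_per_mg=8000.0, exogenous_mg_kg_min=0.0)

# Period 2: glucose infusion should dilute SA; if tracer mixing keeps
# the measured SA too high, the computed EGP goes (impossibly) negative.
egp2 = egp(sa_dpm_per_mg=6500.0, exogenous_mg_kg_min=7.0)
```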

Menon, R.K.; Bloch, C.A.; Sperling, M.A. (Children's Hospital Medical Center, Cincinnati, OH (USA))

1990-06-01

379

ERIC Educational Resources Information Center

This study investigates item parameter recovery, standard error estimates, and fit statistics yielded by the WINSTEPS program under the Rasch model and the rating scale model through Monte Carlo simulations. The independent variables were item response model, test length, and sample size. WINSTEPS yielded practically unbiased estimates for the…

Wang, Wen-Chung; Chen, Cheng-Te

2005-01-01

380

The Continuous Performance Test (CPT) paradigms CPT-AX and CPT-Repeated Letter are two common CPT formats that have been used to provide an assessment of an individual's ability to sustain attention (Parasuraman, 1982). Previous research has shown that these two paradigms are likely to elicit specific types of errors, such as commission errors (more likely in the CPT-AX condition) and omission errors (more

Mark S Glafke

2010-01-01

381

Transgender populations in the United States have been impacted by the HIV/AIDS epidemic. This systematic review estimates the prevalence of HIV infection and risk behaviors of transgender persons. Comprehensive searches of the US-based HIV behavioral prevention literature identified 29 studies focusing on male-to-female (MTF) transgender women; five of these studies also reported data on female-to-male (FTM) transgender men. Using meta-analytic

Jeffrey H. Herbst; Elizabeth D. Jacobs; Teresa J. Finlayson; Vel S. McKleroy; Mary Spink Neumann; Nicole Crepaz

2008-01-01

382

The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
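The Monte Carlo machinery behind such a tool can be sketched in a few lines. The raster, model, and error magnitudes below are invented, and REPTool itself is an ArcGIS geoprocessing toolbox rather than this bare NumPy sketch of its Latin Hypercube idea.

```python
import numpy as np

def lhs(n, rng):
    """1-D Latin Hypercube sample on (0, 1): one point per
    equal-probability stratum, shuffled."""
    u = (np.arange(n) + rng.random(n)) / n
    return rng.permutation(u)

# Toy raster operation: output = a * elevation + b, with a spatially
# invariant uniform error band on the elevation raster and on the
# coefficient a; we track the per-cell output distribution.
rng = np.random.default_rng(1)
elev = np.array([[100.0, 120.0], [140.0, 160.0]])   # tiny 2x2 "raster"
n = 400

elev_err = (lhs(n, rng) - 0.5) * 2 * 3.0        # +/- 3 units of elevation
a_samp = 0.5 + (lhs(n, rng) - 0.5) * 2 * 0.05   # coefficient a in 0.45..0.55
b = 10.0

# Propagate: one model realization per LHS draw, stacked per cell.
out = a_samp[:, None, None] * (elev[None] + elev_err[:, None, None]) + b
cell_mean = out.mean(axis=0)
cell_sd = out.std(axis=0)
```

Stratifying the draws (rather than sampling independently) is what lets a few hundred model runs characterize the per-cell output distribution.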

Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

2009-01-01

383

If a small "particle" of mass $\mu M$ (with $\mu \ll 1$) orbits a Schwarzschild or Kerr black hole of mass $M$, the particle is subject to an $O(\mu)$ radiation-reaction "self-force". Here I argue that it's valuable to compute this self-force highly accurately (relative error of $\lesssim 10^{-6}$) and efficiently, and I describe techniques for doing this and for obtaining and validating error estimates for the computation. I use an adaptive-mesh-refinement (AMR) time-domain numerical integration of the perturbation equations in the Barack-Ori mode-sum regularization formalism; this is efficient, yet allows easy generalization to arbitrary particle orbits. I focus on the model problem of a scalar particle in a circular geodesic orbit in Schwarzschild spacetime. The mode-sum formalism gives the self-force as an infinite sum of regularized spherical-harmonic modes $\sum_{\ell=0}^\infty F_{\ell,\mathrm{reg}}$, with $F_{\ell,\mathrm{reg}}$ (and an "internal" error estimate) computed numerically for $\ell \lesssim 30$ and estimated for larger $\ell$ by fitting an asymptotic "tail" series. Here I validate the internal error estimates for the individual $F_{\ell,\mathrm{reg}}$ using a large set of numerical self-force computations of widely-varying accuracies. I present numerical evidence that the actual numerical errors in $F_{\ell,\mathrm{reg}}$ for different $\ell$ are at most weakly correlated, so the usual statistical error estimates are valid for computing the self-force. I show that the tail fit is numerically ill-conditioned, but this can be mostly alleviated by renormalizing the basis functions to have similar magnitudes. Using AMR, fixed mesh refinement, and extended-precision floating-point arithmetic, I obtain the (contravariant) radial component of the self-force for a particle in a circular geodesic orbit of areal radius $r = 10M$ to within 1 ppm relative error.
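The ill-conditioning of the tail fit, and the column-renormalization fix, can be reproduced on a toy tail series of the same form, F_l ~ sum_k a_k / l^(2k). The coefficients below are invented test values.

```python
import numpy as np

# Toy tail series over the large-l modes (invented coefficients).
ells = np.arange(10, 31, dtype=float)
a_true = np.array([1.0, -2.0, 5.0])
F = sum(a / ells ** (2 * k) for k, a in enumerate(a_true, start=1))

# Raw basis columns 1/l^2, 1/l^4, 1/l^6 differ by orders of magnitude,
# so the fit is badly conditioned.
A = np.column_stack([ells ** (-2 * k) for k in (1, 2, 3)])

# Renormalize each basis column to unit norm before fitting, then undo
# the scaling on the recovered coefficients.
s = np.linalg.norm(A, axis=0)
coef = np.linalg.lstsq(A / s, F, rcond=None)[0] / s

cond_raw = np.linalg.cond(A)
cond_scaled = np.linalg.cond(A / s)
```

The rescaled design matrix has a much smaller condition number, while the recovered coefficients still match the true values once the scaling is undone.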

Jonathan Thornburg

2010-06-18

384

Deflections of the vertical introduce errors into terrestrial inertial navigation systems. In many situations, these errors prove to be intolerable. However, if vertical deflection data are available prior to a mission, this information can in principle be used to reduce the navigation system errors. It is assumed that measurements of vertical deflections plus noise are available at equally spaced points

1968-01-01

385

The Survey Department of Rijkswaterstaat in The Netherlands makes extensive use of laser scanning for topographic measurements. An inventory of sources of errors indicates that errors may vary from 5 to 200 cm. The experience shows that errors related to the laser instrument, GPS and INS may frequently occur, resulting in local distortions, and planimetric and height shifts. Moreover, the

E. J. Huising; L. M. Gomes Pereira

1998-01-01

386

The high mass measurement accuracy and precision available with recently developed mass spectrometers is increasingly used in proteomics analyses to confidently identify tryptic peptides from complex mixtures of proteins, as well as post-translational modifications and peptides from non-annotated proteins. To take full advantage of high mass measurement accuracy instruments it is necessary to limit systematic mass measurement errors. It is well known that errors in the measurement of m/z can be affected by experimental parameters that include e.g., outdated calibration coefficients, ion intensity, and temperature changes during the measurement. Traditionally, these variations have been corrected through the use of internal calibrants (well-characterized standards introduced with the sample being analyzed). In this paper we describe an alternative approach where the calibration is provided through the use of a priori knowledge of the sample being analyzed. Such an approach has previously been demonstrated based on the dependence of systematic error on m/z alone. To incorporate additional explanatory variables, we employed multidimensional, nonparametric regression models, which were evaluated using several commercially available instruments. The applied approach is shown to remove any noticeable biases from the overall mass measurement errors, and decreases the overall standard deviation of the mass measurement error distribution by 1.2- to 2-fold, depending on instrument type. Subsequent reduction of the random errors based on multiple measurements over consecutive spectra further improves accuracy and results in an overall decrease of the standard deviation by 1.8- to 3.7-fold. This new procedure will decrease the false discovery rates for peptide identifications using high accuracy mass measurements.
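A one-dimensional stand-in for the multidimensional nonparametric recalibration: estimate the systematic mass error as a binned median against m/z, subtract it, and check the reduction in spread. The data are simulated, and the paper's models also use additional explanatory variables such as ion intensity.

```python
import numpy as np

# Simulated mass-measurement errors (ppm) with an m/z-dependent
# systematic component plus random noise.
rng = np.random.default_rng(7)
mz = rng.uniform(400, 2000, 5000)
err = 2.0 + 0.003 * (mz - 400) + rng.normal(0, 1.0, 5000)  # ppm

# Nonparametric correction: median error in m/z bins, interpolated
# back to every observation and subtracted.
edges = np.linspace(400, 2000, 33)
centers = 0.5 * (edges[:-1] + edges[1:])
which = np.clip(np.digitize(mz, edges) - 1, 0, 31)
med = np.array([np.median(err[which == j]) for j in range(32)])
corrected = err - np.interp(mz, centers, med)

sd_before, sd_after = err.std(), corrected.std()
```

Removing the trend leaves only the random component, analogous to the 1.2- to 2-fold standard-deviation reduction reported above before any spectrum-to-spectrum averaging.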

Petyuk, Vladislav A.; Jaitly, Navdeep; Moore, Ronald J.; Ding, Jie; Metz, Thomas O.; Tang, Keqi; Monroe, Matthew E.; Tolmachev, Aleksey V.; Adkins, Joshua N.; Belov, Mikhail E.; Dabney, Alan R.; Qian, Weijun; Camp, David G.; Smith, Richard D.

2008-02-01

387

NASA Astrophysics Data System (ADS)

This paper considers a spectrum sharing cognitive radio (CR) network consisting of one secondary user (SU) and one primary user (PU) in Rayleigh fading environments. The channel state information (CSI) between the secondary transmitter (STx) and the primary receiver (PRx) is assumed to be imperfect. Particularly, this CSI is assumed to be not only having channel estimation errors but also outdated due to feedback delay, which is different from existing work. We derive the closed-form expression for the outage capacity of the SU with this imperfect CSI under the average interference power constraint at the PU. Analytical results confirmed by simulations are presented to show the effect of the imperfect CSI. Particularly, it is shown that the outage capacity of the SU is robust to the channel estimation errors and feedback delay for low outage probability and high channel estimation errors and feedback delay.

Xu, D.; Feng, Z.; Zhang, P.

2013-04-01

388

Error handling strategies in multiphase inverse modeling

Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
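One standard mitigation for non-normal residuals is M-estimation; a Huber fit via iteratively reweighted least squares is sketched below on simulated data with gross outliers. iTOUGH2's actual estimator menu differs in detail; this only illustrates the class of strategy.

```python
import numpy as np

def huber_fit(X, y, delta=1.345, n_iter=50):
    """Huber M-estimation via iteratively reweighted least squares,
    with a robust (MAD-based) scale estimate."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale
        w = np.where(np.abs(r / s) <= delta, 1.0, delta / np.abs(r / s))
        WX = X * w[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ y)
    return beta

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.3, n)
y[:15] += 25.0                       # gross systematic outliers

beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
beta_rob = huber_fit(X, y)
```

The least-squares fit is pulled toward the outliers, while the Huber weights cap their influence, illustrating why robust estimators give less biased parameters and more honest uncertainty measures when the residual structure is non-normal.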

Finsterle, S.; Zhang, Y.

2010-12-01

389

NASA Astrophysics Data System (ADS)

Estimates of CO2 fluxes that are based on atmospheric data rely upon a meteorological model to simulate atmospheric CO2 transport. These models provide a quantitative link between surface fluxes of CO2 and atmospheric measurements taken downwind. Therefore, any errors in the meteorological model can propagate into atmospheric CO2 transport and ultimately bias the estimated CO2 fluxes. These errors, however, have traditionally been difficult to characterize. To examine the effects of CO2 transport errors on estimated CO2 fluxes, we use a global meteorological model-data assimilation system known as "CAM-LETKF" to quantify two aspects of the transport errors: error variances (standard deviations) and temporal error correlations. Furthermore, we develop two case studies. In the first case study, we examine the extent to which CO2 transport uncertainties can bias CO2 flux estimates. In particular, we use a common flux estimate known as CarbonTracker to discover the minimum hypothetical bias that can be detected above the CO2 transport uncertainties. In the second case study, we then investigate which meteorological conditions may contribute to month-long biases in modeled atmospheric transport. We estimate 6 hourly CO2 transport uncertainties in the model surface layer that range from 0.15 to 9.6 ppm (standard deviation), depending on location, and we estimate an average error decorrelation time of ∼2.3 days at existing CO2 observation sites. As a consequence of these uncertainties, we find that CarbonTracker CO2 fluxes would need to be biased by at least 29%, on average, before that bias were detectable at existing non-marine atmospheric CO2 observation sites. Furthermore, we find that persistent, bias-type errors in atmospheric transport are associated with consistent low net radiation, low energy boundary layer conditions. The meteorological model is not necessarily more uncertain in these conditions. 
Rather, the extent to which meteorological uncertainties manifest as persistent atmospheric transport biases appears to depend, at least in part, on the energy and stability of the boundary layer. Existing CO2 flux studies may be more likely to estimate inaccurate regional fluxes under those conditions.
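The quoted error decorrelation time can be illustrated with an e-folding estimate on a synthetic 6-hourly AR(1) error series; the AR coefficient below is invented, chosen so the theoretical e-folding time lands near the ~2.3 days reported above.

```python
import numpy as np

def decorrelation_time(x, dt=0.25):
    """e-folding decorrelation time of a series sampled every dt days
    (6-hourly): the first lag at which the sample autocorrelation
    drops below 1/e."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    for lag in range(1, len(x)):
        if np.dot(x[:-lag], x[lag:]) / denom < 1.0 / np.e:
            return lag * dt
    return len(x) * dt

# Synthetic "transport error" series: an AR(1) process whose
# theoretical e-folding time is tau = -dt / ln(phi).
phi, dt = 0.9, 0.25
tau_theory = -dt / np.log(phi)          # about 2.4 days

rng = np.random.default_rng(0)
x = np.zeros(4000)
for t in range(1, len(x)):
    x[t] = phi * x[t - 1] + rng.normal()
tau_est = decorrelation_time(x, dt)
```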

Miller, S. M.; Fung, I.; Liu, J.; Hayek, M. N.; Andrews, A. E.

2014-09-01

390

Use of Expansion Factors to Estimate the Burden of Dengue in Southeast Asia: A Systematic Analysis

Background Dengue virus infection is the most common arthropod-borne disease of humans and its geographical range and infection rates are increasing. Health policy decisions require information about the disease burden, but surveillance systems usually underreport the total number of cases. These may be estimated by multiplying reported cases by an expansion factor (EF). Methods and Findings As a key step to estimate the economic and disease burden of dengue in Southeast Asia (SEA), we projected dengue cases from 2001 through 2010 using EFs. We conducted a systematic literature review (1995–2011) and identified 11 published articles reporting original, empirically derived EFs or the necessary data, and 11 additional relevant studies. To estimate EFs for total cases in countries where no empirical studies were available, we extrapolated data based on the statistically significant inverse relationship between an index of a country's health system quality and its observed reporting rate. We compiled an average 386,000 dengue episodes reported annually to surveillance systems in the region, and projected about 2.92 million dengue episodes. We conducted a probabilistic sensitivity analysis, simultaneously varying the most important parameters in 20,000 Monte Carlo simulations, and derived 95% certainty level of 2.73–3.38 million dengue episodes. We estimated an overall EF in SEA of 7.6 (95% certainty level: 7.0–8.8) dengue cases for every case reported, with an EF range of 3.8 for Malaysia to 19.0 in East Timor. Conclusion Studies that make no adjustment for underreporting would seriously understate the burden and cost of dengue in SEA and elsewhere. As the sites of the empirical studies we identified were not randomly chosen, the exact extent of underreporting remains uncertain. 
Nevertheless, the results reported here, based on a systematic analysis of the available literature, show general consistency and provide a reasonable empirical basis to adjust for underreporting. PMID:23437407
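The projection arithmetic is simple: total cases = reported cases x EF, with uncertainty propagated by Monte Carlo over the EF distribution. The sketch below uses round numbers near those quoted above, with an invented lognormal spread for the EF; the study's actual sensitivity analysis varied many parameters jointly.

```python
import numpy as np

rng = np.random.default_rng(5)
reported = 386_000                      # mean annual reported episodes
# Hypothetical EF uncertainty: lognormal with median 7.6.
ef = rng.lognormal(mean=np.log(7.6), sigma=0.05, size=20_000)

total = reported * ef                   # Monte Carlo projected episodes
point = reported * 7.6                  # point estimate (~2.9 million)
lo, hi = np.percentile(total, [2.5, 97.5])
```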

Undurraga, Eduardo A.; Halasa, Yara A.; Shepard, Donald S.

2013-01-01

391

Measuring the Effect of Inter-Study Variability on Estimating Prediction Error

Background The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings resulting in “batch-effects”) and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies. Methods Here we quantify the impact of these combined “study-effects” on a disease signature’s predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of number of studies quantifies influence of study-effects on performance. Results As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification. 
Conclusions We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when “sufficient” diversity has been achieved for learning a molecular signature likely to translate without significant loss of accuracy to new clinical settings. PMID:25330348

Ma, Shuyi; Sung, Jaeyun; Magis, Andrew T.; Wang, Yuliang; Geman, Donald; Price, Nathan D.

2014-01-01
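The RCV-versus-ISV contrast described in this entry can be sketched with synthetic data; the toy nearest-centroid classifier, the simulated batch offsets, and all numbers below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 4 "studies", each with its own batch offset (study effect).
# Two phenotype classes are separated along the first feature.
def make_study(n, offset):
    y = rng.integers(0, 2, n)
    X = rng.normal(0, 1, (n, 5))
    X[:, 0] += 2.0 * y           # class signal
    X += offset                  # study-specific batch effect
    return X, y

studies = [make_study(50, rng.normal(0, 1.5, 5)) for _ in range(4)]
X = np.vstack([s[0] for s in studies])
y = np.concatenate([s[1] for s in studies])
groups = np.concatenate([[i] * 50 for i in range(4)])

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return (pred == yte).mean()

# Ordinary randomized cross-validation (RCV): random folds ignore study labels.
def rcv(X, y, k=4):
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    accs = [nearest_centroid_acc(np.delete(X, f, 0), np.delete(y, f),
                                 X[f], y[f]) for f in folds]
    return float(np.mean(accs))

# Inter-study validation (ISV): hold out one whole study at a time.
def isv(X, y, groups):
    accs = []
    for g in np.unique(groups):
        te = groups == g
        accs.append(nearest_centroid_acc(X[~te], y[~te], X[te], y[te]))
    return float(np.mean(accs))

rcv_acc = rcv(X, y)
isv_acc = isv(X, y, groups)
print(f"RCV accuracy: {rcv_acc:.3f}")
print(f"ISV accuracy: {isv_acc:.3f}")
```

When the simulated batch effects are strong, holding out whole studies (ISV) tends to score lower than random folds (RCV), which is exactly the discrepancy the study measures.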

392

NASA Astrophysics Data System (ADS)

The stable isotopes of hydrogen and oxygen in water are commonly used to understand the present day hydrological cycle and reconstruct climates of the geologic past. Water vapor isotopes are also increasingly used to probe atmospheric dynamics, including the connection between tropical convection and polar precipitation, stratospheric intrusions of water vapor, boundary layer mixing and cloud formation [e.g., 1, 2, 3]. By utilizing this information, it is now possible to incorporate water isotopes into atmosphere-ocean general circulation models (AOGCMs) as a tool for generating "transfer functions" between a climate variable and its associated isotopic signal [4]. In so doing, we will improve our understanding of how, for example, water isotopes in polar or high-altitude ice cores and the oxygen isotopic composition of terrestrial carbonates are related to global or regional temperature change, seasonality in precipitation, or the intensity of precipitation, among other variables. However, these model-derived functions are only as good as the data, and parameterizations, that go into the models. Over the past five to ten years, the advent of commercially-available laser-based technologies, such as Cavity Ring-Down Spectroscopy (CRDS), has transformed the ease by which water isotopes are measured. This is particularly true for studies of ambient water vapor which are ideally-suited to the continuous flow capacity of CRDS analyzers. Measuring water vapor isotopes by CRDS requires no pre-treatment, discrete sampling or cryogenic trapping; instead water vapor isotopes can be measured in-situ to a high precision. For example, the Picarro L2130-i has a precision, defined as the standard deviation of 100-second average measurements, of better than 0.04 per mil and 0.1 per mil for d18O and dD at 12,500 ppm water vapor concentration, respectively. As a result, the quantity of water vapor isotope data has increased substantially. 
As with any analytical technique, challenges do exist when making ambient water vapor isotope measurements via CRDS and these should be addressed to ensure data integrity. Here we review a number of systematic errors introduced when making ambient water vapor measurements using CRDS, and where appropriate, provide suggestions for how to correct for them. These include: the dependence of reported delta values on water vapor concentration, the interference of CH4 on water spectra, achieving reliable low humidity measurements ([H2O] < 5,000 ppm), and calibration for both absolute accuracy and instrument drift. We will also demonstrate the relationship between calibration frequency and precision, and make recommendations for ongoing calibration and maintenance. Our aim is to improve the quality of data collected and support the continued use of water vapor isotope measurements by the research community. [1] Noone, D., Galewsky, J., et al. (2011), JGR, 116, D22113. [2] Galewsky, J., Rella, C., et al. (2011), GRL, 38, L17803. [3] Tremoy, G., Vimeux, F., et al. (2012), GRL, 39, L08805. [4] Sturm, C., Zhang, Q. and Noone, D. (2010), Clim. Past, 6, 115-129.

Dennis, Kate J.; Jacobson, Gloria

2014-05-01
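One correction mentioned in this entry, the dependence of reported delta values on water vapor concentration, can be sketched as a calibration fit. The numbers below and the linear-in-1/[H2O] functional form are illustrative assumptions, not instrument specifications:

```python
import numpy as np

# Hypothetical calibration: a standard of known isotopic composition measured
# at several water vapor concentrations. All numbers are illustrative only.
h2o_ppm   = np.array([2500, 5000, 10000, 15000, 20000], dtype=float)
d18o_meas = np.array([-16.9, -16.2, -15.8, -15.6, -15.5])  # per mil
d18o_true = -15.5  # known value of the standard at the reference concentration

# Model the concentration dependence as linear in 1/[H2O]; this is an
# assumed functional form for the sketch, not a vendor specification.
coeffs = np.polyfit(1.0 / h2o_ppm, d18o_meas - d18o_true, 1)

def correct_d18o(delta_measured, ppm):
    """Remove the fitted humidity dependence from a measured delta value."""
    bias = np.polyval(coeffs, 1.0 / ppm)
    return delta_measured - bias

# A sample measured at low humidity is pulled back toward its true value.
corrected = correct_d18o(-20.0, 3000.0)
print(f"corrected d18O: {corrected:.2f} per mil")
```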

393

Wrapper feature selection for small sample size data driven by complete error estimates.

This paper focuses on wrapper-based feature selection for a 1-nearest neighbor classifier. We consider in particular the case of a small sample size with a few hundred instances, which is common in biomedical applications. We propose a technique for calculating the complete bootstrap for a 1-nearest-neighbor classifier (i.e., averaging over all desired test/train partitions of the data). The complete bootstrap and the complete cross-validation error estimate with lower variance are applied as novel selection criteria and are compared with the standard bootstrap and cross-validation in combination with three optimization techniques - sequential forward selection (SFS), binary particle swarm optimization (BPSO) and simplified social impact theory based optimization (SSITO). The experimental comparison based on ten datasets draws the following conclusions: for all three search methods examined here, the complete criteria are a significantly better choice than standard 2-fold cross-validation, 10-fold cross-validation and bootstrap with 50 trials irrespective of the selected output number of iterations. All the complete criterion-based 1NN wrappers with SFS search performed better than the widely-used FILTER and SIMBA methods. We also demonstrate the benefits and properties of our approaches on an important and novel real-world application of automatic detection of the subthalamic nucleus. PMID:22472029

Macaš, Martin; Lhotská, Lenka; Bakstein, Eduard; Novák, Daniel; Wild, Jiří; Sieger, Tomáš; Vostatek, Pavel; Jech, Robert

2012-10-01
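A minimal wrapper of the kind this entry describes, using leave-one-out error (a simpler stand-in for the paper's complete bootstrap criterion) to drive sequential forward selection for a 1-NN classifier on synthetic data, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy small-sample dataset: 60 instances, 8 features; only features 0 and 3
# carry class signal, the rest are noise.
n = 60
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, 8))
X[:, 0] += 1.5 * y
X[:, 3] -= 1.5 * y

def loo_1nn_error(X, y, feats):
    """Leave-one-out error of a 1-nearest-neighbor classifier on a feature subset."""
    Z = X[:, feats]
    errs = 0
    for i in range(len(y)):
        d = np.linalg.norm(Z - Z[i], axis=1)
        d[i] = np.inf                      # exclude the test point itself
        errs += y[d.argmin()] != y[i]
    return errs / len(y)

def sfs(X, y, k):
    """Sequential forward selection driven by the wrapper error estimate."""
    selected = []
    while len(selected) < k:
        candidates = [f for f in range(X.shape[1]) if f not in selected]
        best = min(candidates, key=lambda f: loo_1nn_error(X, y, selected + [f]))
        selected.append(best)
    return selected

chosen = sfs(X, y, 2)
print(f"selected features: {chosen}")
```

The complete bootstrap of the paper replaces the single LOO estimate with an average over all test/train partitions, lowering the variance of the selection criterion; the wrapper structure is unchanged.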

394

Potential errors in body composition as estimated by whole body scintillation counting

Vigorous exercise has been reported to increase the apparent potassium content of athletes measured by whole body gamma ray scintillation counting of ⁴⁰K. The possibility that this phenomenon is an artifact was evaluated in three cyclists and one nonathlete after exercise on the road (cyclists) or in a room with a source of radon and radon progeny (nonathlete). The apparent ⁴⁰K content of the thighs of the athletes and whole body of the nonathlete increased after exercise. Counts were also increased in both windows detecting ²¹⁴Bi, a progeny of radon. ⁴⁰K and ²¹⁴Bi counts were highly correlated (r = 0.87, p < 0.001). The apparent increase in ⁴⁰K was accounted for by an increase in counts associated with the 1.764 MeV gamma ray emissions from ²¹⁴Bi. Thus a failure to correct for radon progeny would cause a significant error in the estimate of lean body mass by ⁴⁰K counting.

Lykken, G.I.; Lukaski, H.C.; Bolonchuk, W.W.; Sandstead, H.H.

1983-04-01
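The correction implied by this finding (removing the ²¹⁴Bi-correlated contribution from the apparent ⁴⁰K counts) can be sketched as a simple regression; the counts below are illustrative, not the study's data:

```python
import numpy as np

# Illustrative counts: apparent 40K-window counts rise with 214Bi-window
# counts after radon exposure.
bi214 = np.array([120., 180., 260., 340., 410.])      # counts in a 214Bi window
k40   = np.array([5030., 5110., 5240., 5330., 5450.]) # counts in the 40K window

# Fit the 214Bi contribution and subtract it from the 40K window.
slope, intercept = np.polyfit(bi214, k40, 1)
k40_corrected = k40 - slope * bi214

print(k40_corrected)  # residual counts attributable to body potassium
```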

395

People who score low on a performance test overestimate their own performance relative to others, whereas high scorers slightly underestimate their own performance. J. Kruger and D. Dunning (1999) attributed these asymmetric errors to differences in metacognitive skill. A replication study showed no evidence for mediation effects for any of several candidate variables. Asymmetric errors were expected because of statistical regression and the general better-than-average (BTA) heuristic. Consistent with this parsimonious model, errors were no longer asymmetric when either regression or the BTA effect was statistically removed. In fact, high rather than low performers were more error prone in that they were more likely to neglect their own estimates of the performance of others when predicting how they themselves performed relative to the group. PMID:11831408

Krueger, Joachim; Mueller, Ross A

2002-02-01

396

Errors and parameter estimation in precipitation-runoff modeling 1. Theory.

Errors in complex conceptual precipitation-runoff models may be analyzed by placing them into a statistical framework. This amounts to treating the errors as random variables and defining the probabilistic structure of the errors. By using such a framework, a large array of techniques, many of which have been presented in the statistical literature, becomes available to the modeler for quantifying and analyzing the various sources of error. A number of these techniques are reviewed in this paper, with special attention to the peculiarities of hydrologic models. -from Author

Troutman, B. M.

1985-01-01

397

It is well known that skin sea surface temperature (SSST) is different from bulk sea surface temperature (BSST) by a few tenths of a degree Celsius. However, the extent of the error associated with dry deposition (or uptake) estimation by using BSST is not well known. This study tries to conduct such an evaluation using the on-board observation data over

Yung-Yao Lan; Ben-Jei Tsuang; Noel Keenlyside; Shu-Lun Wang; Chen-Tung Arthur Chen; Bin-Jye Wang; Tsun-Hsien Liu

2010-01-01

398

We used the error propagation theory to calculate uncertainties in static formation temperature estimates in geothermal and petroleum wells from three widely used methods (line-source or Horner method; spherical and radial heat flow method; and cylindrical heat source method). Although these methods commonly use an ordinary least-squares linear regression model considered in this study, we also evaluated two variants of

Surendra P. Verma; Jorge Andaverde; E. Santoyo

2006-01-01

399

ERIC Educational Resources Information Center

Studied three procedures for estimating the standard errors of school passing rates using a generalizability theory model and considered the effects of student sample size. Results show that procedures differ in terms of assumptions about the populations from which students were sampled, and student sample size was found to have a large effect on…

Lee, Guemin; Fitzpatrick, Anne R.

2003-01-01

400

ERIC Educational Resources Information Center

Investigated empirically through post mortem item-examinee sampling were the relative merits of two alternative procedures for allocating items to subtests in multiple matrix sampling and the feasibility of using the jackknife in approximating standard errors of estimate. The results indicate clearly that a partially balanced incomplete block…

Shoemaker, David M.
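The jackknife approximation to a standard error mentioned in this entry can be sketched as follows; for the sample mean it reproduces the usual s/√n exactly, which makes a convenient check:

```python
import numpy as np

rng = np.random.default_rng(2)
scores = rng.normal(70, 10, 25)   # hypothetical examinee scores

def jackknife_se(x, stat):
    """Jackknife standard error of an arbitrary statistic."""
    n = len(x)
    leave_one_out = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((leave_one_out - leave_one_out.mean())**2))

se_mean = jackknife_se(scores, np.mean)
analytic = scores.std(ddof=1) / np.sqrt(len(scores))
print(f"jackknife SE of the mean: {se_mean:.3f}")
print(f"analytic  SE of the mean: {analytic:.3f}")
```

The value of the jackknife is that the same recipe applies to statistics (such as a school passing rate) for which no closed-form standard error is available.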

401

In this paper, the tropical Pacific Ocean reduced gravity model is studied using the proper orthogonal decomposition (POD) technique of mixed finite element (MFE) method and an error estimate of POD approximate solution based on MFE method is derived. POD is a model reduction technique for the simulation of physical processes governed by partial differential equations, e.g., fluid flows or

Zhendong Luo; Jiang Zhu; Ruiwen Wang; I. M. Navon

2007-01-01
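Independent of the mixed finite element setting, the POD basis itself is commonly computed from a snapshot SVD; a minimal sketch on a synthetic rank-3 field (not the reduced gravity model):

```python
import numpy as np

# Snapshot matrix: each column is the discretized field at one time step.
# Here a synthetic field built from 3 spatial modes, so its rank is 3.
x = np.linspace(0, 1, 200)
t = np.linspace(0, 1, 40)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.5 * np.outer(np.sin(2 * np.pi * x), np.sin(4 * np.pi * t))
             + 0.1 * np.outer(np.sin(3 * np.pi * x), t))

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3
basis = U[:, :r]

# Project and reconstruct; the error is controlled by the discarded
# singular values, which is the essence of a POD error estimate.
recon = basis @ (basis.T @ snapshots)
err = np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots)
print(f"relative reconstruction error with {r} modes: {err:.2e}")
```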

402

Sea level has been estimated for the last108 million years through backstripping of corehole data from the New Jersey and Delaware Coastal Plains. Inherent errors due to this method of calculating sea level are discussed, including uncertainties in ages, depth of deposition and the model used for tectonic subsidence. Problems arising from the two -dimensional aspects of subsidence and response

M. A. Kominz; J. V. Browning; K. G. Miller; P. J. Sugarman; S. Mizintseva; C. R. Scotese

2008-01-01

403

Error propagation equations for estimating the uncertainty in high-speed wind tunnel test results

Error propagation equations, based on the Taylor series model, are derived for the nondimensional ratios and coefficients most often encountered in high-speed wind tunnel testing. These include pressure ratio and coefficient, static force and moment coefficients, dynamic stability coefficients, and calibration Mach number. The error equations contain partial derivatives, denoted as sensitivity coefficients, which define the influence of free-stream Mach

1994-01-01
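The Taylor-series propagation described in this entry, applied to a pressure coefficient with illustrative (assumed) measurement values and uncertainties, reduces to partial derivatives acting as sensitivity coefficients:

```python
import math

# Pressure coefficient Cp = (p - p_inf) / q_inf and its propagated uncertainty.
# Values and uncertainties below are illustrative, not from the report.
p, dp         = 95000.0, 150.0   # surface pressure [Pa] and its uncertainty
p_inf, dp_inf = 90000.0, 100.0   # free-stream static pressure [Pa]
q_inf, dq_inf = 20000.0, 200.0   # free-stream dynamic pressure [Pa]

cp = (p - p_inf) / q_inf

# Sensitivity coefficients: partial derivatives of Cp w.r.t. each input.
dcp_dp    = 1.0 / q_inf
dcp_dpinf = -1.0 / q_inf
dcp_dqinf = -(p - p_inf) / q_inf**2

# First-order (Taylor series) propagation, assuming independent errors.
dcp = math.sqrt((dcp_dp * dp)**2 + (dcp_dpinf * dp_inf)**2
                + (dcp_dqinf * dq_inf)**2)

print(f"Cp = {cp:.4f} +/- {dcp:.4f}")
```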

404

Stability and error analysis of the polarization estimation inverse problem for microbial fuel cells

NASA Astrophysics Data System (ADS)

Determining parameters which describe the performance of a microbial fuel cell requires the solution of an inverse problem. Two formulations have been presented in the literature: a convolutional approach or a direct quadrature approach. A complete study and analysis of the direct quadrature method, which leads to two systems for the unknown signal given measured complex data, known as the distribution function of relaxation times, is presented. A theoretical analysis justifies the minimal range of integration that is appropriate for the quadrature and suggests that the systems should be combined giving an overdetermined system that is not well posed but not as ill-posed as either system considered separately. All measures of ill-posedness support using the combined data when the level of error in both components of the complex measurements is equivalent. Tikhonov regularization, via the filtered singular value decomposition and the truncated singular value decomposition, is used to find solutions of the underlying model system. Given such solutions the application requires the determination of the model parameters that define the signal, among which are the location and peaks of the individual processes of the cell. A nonlinear data fitting approach is presented which consistently estimates these parameters. Simulations support the use of the combined systems for finding the underlying distribution function of relaxation times and the subsequent nonlinear data fitting to these curves. The approach is also illustrated for measured practical data, demonstrating that without the theoretical analysis incorrect conclusions on the underlying physical system would arise. This work justifies the use of Tikhonov regularization combined with nonlinear data fitting for finding reliable solutions for the specific model, when the signal is comprised of a mixture of signals from a small number of processes.

Renaut, R. A.; Baker, R.; Horst, M.; Johnson, C.; Nasir, D.

2013-04-01
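Tikhonov regularization of an ill-posed linear system, the core tool named in this entry, can be sketched on a stand-in smoothing-kernel problem (illustrative only, not the fuel-cell relaxation-time model):

```python
import numpy as np

rng = np.random.default_rng(4)

# An ill-posed discretized integral equation (Gaussian kernel) standing in
# for the relaxation-time inversion; all values are illustrative.
n = 80
s = np.linspace(-3, 3, n)
A = np.exp(-(s[:, None] - s[None, :])**2)      # smoothing kernel -> ill-posed
x_true = np.exp(-((s - 1.0) / 0.4)**2)         # one "process" peak
b = A @ x_true + 1e-3 * rng.normal(size=n)     # noisy measurements

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the normal equations."""
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(m), A.T @ b)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # unregularized least squares
x_reg = tikhonov(A, b, lam=1e-2)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
print(f"naive error:       {err_naive:.2e}")
print(f"regularized error: {err_reg:.2e}")
```

The naive solution amplifies measurement noise through the small singular values, while the penalty term filters them; the paper's truncated SVD variant achieves the same filtering by discarding those components outright.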

405

Systematic parameter estimation in data-rich environments for cell signalling dynamics

Motivation: Computational models of biological signalling networks, based on ordinary differential equations (ODEs), have generated many insights into cellular dynamics, but the model-building process typically requires estimating rate parameters based on experimentally observed concentrations. New proteomic methods can measure concentrations for all molecular species in a pathway; this creates a new opportunity to decompose the optimization of rate parameters. Results: In contrast with conventional parameter