Optimal input design for aircraft instrumentation systematic error estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1991-01-01
A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that, for an input of fixed duration, the optimal input design improved the error parameter estimates and their accuracies. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.
Systematic errors in ground heat flux estimation and their correction
NASA Astrophysics Data System (ADS)
Gentine, P.; Entekhabi, D.; Heusinkveld, B.
2012-09-01
Incoming radiation forcing at the land surface is partitioned among the components of the surface energy balance in varying proportions depending on the time scale of the forcing. Based on a land-atmosphere analytic continuum model, a numerical land surface model, and field observations we show that high-frequency fluctuations in incoming radiation (with period less than 6 h, for example, due to intermittent clouds) are preferentially partitioned toward ground heat flux. These higher frequencies are concentrated in the 0-1 cm surface soil layer. Consequently, measurements even a few centimeters deep in the soil profile miss part of the surface soil heat flux signal. The attenuation of the high-frequency soil heat flux spectrum throughout the soil profile leads to systematic errors in both measurements and modeling, whose avoidance requires very fine sampling near the soil surface (0-1 cm). Calorimetric measurement techniques introduce a systematic error in the form of an artificial band-pass filter if the temperature probes are not placed at appropriate depths. In addition, the temporal calculation of the change in the heat storage term of the calorimetric method can further distort the reconstruction of the surface soil heat flux signal. A correction methodology is introduced which is of practical use and provides insights into the estimation of surface soil heat flux and the closure of the surface energy balance based on field measurements.
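As a rough numerical illustration of the attenuation described above, the sketch below assumes one-dimensional conduction in a homogeneous soil with an illustrative diffusivity (not a value from the paper): it computes the damping depth d = sqrt(kappa*T/pi) for forcing periods of 24, 6, and 1 hours and the fraction of each harmonic surviving at a typical 8 cm plate depth.

```python
import numpy as np

# Damping depth for a periodic thermal forcing in homogeneous soil:
# d(T) = sqrt(kappa * T / pi); a harmonic of period T decays as
# exp(-z / d) with depth z. kappa is an assumed diffusivity for a
# moist soil, not a value taken from the paper.
kappa = 5e-7   # m^2/s, assumed thermal diffusivity
depth = 0.08   # m, a typical heat-flux-plate burial depth

for period_h in (24.0, 6.0, 1.0):
    T = period_h * 3600.0
    d = np.sqrt(kappa * T / np.pi)      # damping depth (m)
    surviving = np.exp(-depth / d)      # amplitude fraction at 8 cm
    print(f"period {period_h:4.0f} h: damping depth {100 * d:5.1f} cm, "
          f"amplitude left at 8 cm: {100 * surviving:4.1f} %")
```

With these numbers the diurnal harmonic retains about half its amplitude at 8 cm, while a 1 h harmonic is almost entirely gone, which is the preferential filtering the abstract describes.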
Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters
Köhlinger, F; Eriksen, M
2015-01-01
Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and to a lesser extent DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is signifi...
GREAT3 results I: systematic errors in shear estimation and the impact of real galaxy morphology
Mandelbaum, Rachel; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A; Donnarumma, Annamaria; Conti, Ian Fenech; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep; Hogg, David W; Huff, Eric M; Jee, M James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C; Marshall, Philip J; Meyers, Joshua E; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Mboula, Fred Maurice Ngole; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D; Shan, Huanyuan; Sheldon, Erin S; Simet, Melanie; Starck, Jean-Luc; Sureau, Florent; Tewes, Malte; Adami, Kristian Zarb; Zhang, Jun; Zuntz, Joe
2014-01-01
We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically-varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially-varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety a...
Asymmetric Systematic Errors
Roger Barlow
2003-06-18
Asymmetric systematic errors arise when there is a non-linear dependence of a result on a nuisance parameter. Their combination is traditionally done by adding positive and negative deviations separately in quadrature. There is no sound justification for this, and it is shown that indeed it is sometimes clearly inappropriate. Consistent techniques are given for this combination of errors, and also for evaluating $\chi^2$, and for forming weighted sums.
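For reference, the traditional combination that Barlow critiques can be stated in a few lines; the sketch below (with made-up error values) simply adds the positive and negative deviations separately in quadrature, the practice the paper argues is unjustified.

```python
import math

# Traditional combination of asymmetric errors: positive and negative
# deviations added separately in quadrature. This is the baseline
# practice the paper critiques, shown with invented error values.
def combine_quadrature(errors):
    """errors: iterable of (sigma_plus, sigma_minus) pairs."""
    up = math.sqrt(sum(p ** 2 for p, m in errors))
    down = math.sqrt(sum(m ** 2 for p, m in errors))
    return up, down

sources = [(0.5, 0.3), (0.2, 0.4), (0.1, 0.1)]
plus, minus = combine_quadrature(sources)
print(f"combined: +{plus:.3f} / -{minus:.3f}")
```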
GPS meteorology: Reducing systematic errors in geodetic estimates for zenith delay
Peng Fang; Michael Bevis; Yehuda Bock; Seth Gutman; Dan Wolfe
1998-01-01
Differences between long term precipitable water (PW) time series derived from radiosondes, microwave water vapor radiometers, and GPS stations reveal offsets that are often as much as 1-2 mm PW. All three techniques are thought to suffer from systematic errors of order 1 mm PW. Standard GPS processing algorithms are known to be sensitive to the choice of elevation cutoff
NASA Astrophysics Data System (ADS)
Zhukov, N. P.; Mainikova, N. F.; Rogov, I. V.; Antonov, A. O.
2014-07-01
We have considered the results of the analysis and estimation of systematic errors involved in the multimodel method for determining the thermophysical properties of solid materials. We have analytically obtained the condition that determines the upper limit of the reliably determinable thermal conductivity of the materials under study.
GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology
Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; et al
2015-05-11
This study presents the first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
NASA Astrophysics Data System (ADS)
Graziani, C.; Shirasaki, Y.; Donaghy, T.; Fenimore, E.; Galassi, M.; Kawai, N.; Lamb, D. Q.; Sakamoto, T.; Takahashi, D.; Tamagawa, T.; Tavenner, T.; Torii, K.; Yoshida, A.; Vanderspek, R.
2003-04-01
WXM gives GRB localizations in instrument coordinates. WXM localizations must be converted to celestial coordinates using spacecraft aspect information obtained by the optical cameras on HETE. We must therefore accurately determine the alignment of the WXM boresight with respect to that of the optical cameras, in order to accurately determine the celestial coordinates of WXM burst locations. We use a seven-parameter model that treats as free parameters the three Euler angles of a pure rotation, two horizontal shifts of the coded-aperture masks with respect to the detectors, and the heights of the masks above the two detectors. We determine the alignment by fitting the model to a set of 252 WXM localizations of Sco X-1 obtained between 23 April and 28 June 2001. We estimate the systematic error in WXM GRB locations by comparing the actual and the calculated locations of Sco X-1. We find that the systematic error corresponding to a 68.3% confidence region is 1.7', and the systematic error corresponding to a 90% confidence region is 2.4'. We find that this astrometric solution also provides a satisfactory fit to an independent sample of SGR and XRB events. These results are consistent with the astrometric calibration and the systematic error in WXM localizations derived independently using the RIKEN localization method.
Estimation of Systematic Errors of MODIS Thermal Infrared Bands
Liu, Ronggao G.; Liu, Jiyuan Y.; Liang, Shunlin
... systematic error in Moderate Resolution Imaging Spectroradiometer (MODIS) thermal infrared (TIR) bands 20-25 and 27-36. There exist scan-to-scan overlapped pixels in MODIS data. By analyzing a sufficiently large amount of those ...
STATISTICAL MODEL OF SYSTEMATIC ERRORS: LINEAR ERROR MODEL
Rudnyi, Evgenii B.
A statistical model of systematic errors for processing the results of a few experiments is presented. Contents: 1. Introduction; 2. Notation and conventions; 2.1. Physico-chemical model; 2.2. Statistical ...
Estimating Bias Error Distributions
NASA Technical Reports Server (NTRS)
Liu, Tian-Shu; Finley, Tom D.
2001-01-01
This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.
Estimating GPS Positional Error
NSDL National Science Digital Library
Bill Witte
After instructing students on basic receiver operation, each student will make many (10-20) position estimates of 3 benchmarks over a week. The different benchmarks will have different views of the skies or vegetation cover. Each student will download their data into a spreadsheet and calculate horizontal and vertical errors which are collated into a class spreadsheet. The positions are sorted by error and plotted in a cumulative frequency plot. The students are encouraged to discuss the distribution, sources of error, and estimate confidence intervals. This exercise gives the students a gut feeling for confidence intervals and the accuracy of data. Students are asked to compare results from different types of data and benchmarks with different views of the sky.
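A minimal sketch of the analysis the students perform, with invented benchmark coordinates and error scales standing in for real receiver data:

```python
import numpy as np

# Simulated stand-in for the class data: repeated fixes of a benchmark
# with known local coordinates (east, north, up in metres). The error
# scales are invented; real data come from the students' receivers.
rng = np.random.default_rng(0)
true = np.array([0.0, 0.0, 100.0])
fixes = true + rng.normal(scale=[3.0, 3.0, 6.0], size=(15, 3))

horiz = np.hypot(fixes[:, 0] - true[0], fixes[:, 1] - true[1])
vert = np.abs(fixes[:, 2] - true[2])

h_sorted = np.sort(horiz)               # cumulative-frequency-plot data
cum_freq = np.arange(1, h_sorted.size + 1) / h_sorted.size
print("median horizontal error:", round(np.median(horiz), 2), "m")
print("68% of horizontal errors below:",
      round(np.interp(0.68, cum_freq, h_sorted), 2), "m")
```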
Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R. [William B. Hanson Center for Space Sciences, University of Texas at Dallas, 800 W. Campbell Rd. WT15, Richardson, Texas 75080 (United States)
2009-05-15
The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.
Systematic and statistical errors in a Bayesian approach to the estimation of the neutron-star equation of state using advanced gravitational wave detectors
Leslie Wade; Jolien D. E. Creighton; Evan Ochsner; Benjamin D. Lackey; Benjamin F. Farr; Tyson B. Littenberg; Vivien Raymond
2014-02-20
Advanced ground-based gravitational-wave detectors are capable of measuring tidal influences in binary neutron-star systems. In this work, we report on the statistical uncertainties in measuring tidal deformability with a full Bayesian parameter estimation implementation. We show how simultaneous measurements of chirp mass and tidal deformability can be used to constrain the neutron-star equation of state. We also study the effects of waveform modeling bias and individual instances of detector noise on these measurements. We notably find that systematic error between post-Newtonian waveform families can significantly bias the estimation of tidal parameters, thus motivating the continued development of waveform models that are more reliable at high frequencies.
Analytic Error Estimates
Andrew Gould
1994-06-21
I present an analytic method for estimating the errors in fitting a distribution. A well-known theorem from statistics gives the minimum variance bound (MVB) for the uncertainty in estimating a set of parameters $\lambda_i$, when a distribution function $F(z;\lambda_1,\ldots,\lambda_m)$ is fit to $N$ observations of the quantity(ies) $z$. For example, a power-law distribution (of two parameters $A$ and $\gamma$) is $F(z;A,\gamma) = A z^{-\gamma}$. I present the MVB in a form which is suitable for estimating uncertainties in problems of astrophysical interest. For many distributions, such as a power-law distribution or an exponential distribution in the presence of a constant background, the MVB can be evaluated in closed form. I give analytic estimates for the variances in several astrophysical problems including the gallium solar-neutrino experiments and the measurement of the polarization induced by a weak gravitational lens. I show that it is possible to make significant improvements in the accuracy of these experiments by making simple adjustments in how they are carried out or analyzed. The actual variance may be above the MVB because of the form of the distribution function and/or the number of observations. I present simple methods for recognizing when this occurs and for obtaining a more accurate estimate of the variance than the MVB when it does.
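The power-law case can be checked numerically. For the Pareto form f(z) = alpha*z^(-alpha-1) on z >= 1, the per-observation Fisher information is 1/alpha^2, so the MVB on alpha from N samples is alpha^2/N; the sketch below (illustrative parameter values) compares this bound with the empirical variance of the closed-form maximum-likelihood estimator.

```python
import numpy as np

# Numerical check of the MVB for a pure power law. For the Pareto form
# f(z) = alpha * z**(-alpha - 1) on z >= 1, the Fisher information per
# observation is 1/alpha**2, so the bound on Var(alpha_hat) is
# alpha**2 / N. Parameter values are illustrative.
rng = np.random.default_rng(1)
alpha, N, trials = 2.0, 1000, 500

estimates = []
for _ in range(trials):
    z = rng.pareto(alpha, size=N) + 1.0    # classical Pareto on [1, inf)
    estimates.append(N / np.log(z).sum())  # closed-form MLE of alpha

print("empirical variance of MLE:", round(np.var(estimates), 6))
print("minimum variance bound:   ", alpha ** 2 / N)
```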
Suppressing systematic control errors to high orders
NASA Astrophysics Data System (ADS)
Bažant, P.; Frydrych, H.; Alber, G.; Jex, I.
2015-08-01
Dynamical decoupling is a powerful method for protecting quantum information against unwanted interactions with the help of open-loop control pulses. Realistic control pulses are not ideal and may introduce additional systematic errors. We introduce a class of self-stabilizing pulse sequences capable of suppressing such systematic control errors efficiently in qubit systems. Embedding already known decoupling sequences into these self-stabilizing sequences offers powerful means to achieve robustness against unwanted external perturbations and systematic control errors. As these self-stabilizing sequences are based on single-qubit operations, they offer interesting perspectives for future applications in quantum information processing.
Error correction in adders using systematic subcodes.
NASA Technical Reports Server (NTRS)
Rao, T. R. N.
1972-01-01
A generalized theory is presented for the construction of a systematic subcode for a given AN code in such a way that error control properties of the AN code are preserved in this new code. The 'systematic weight' and 'systematic distance' functions in this new code depend not only on its number representation system but also on its addition structure. Finally, to illustrate this theory, a simple error-correcting adder organization using a systematic subcode of the 29N code is sketched in some detail.
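The arithmetic error-detection property underlying AN codes can be illustrated directly; the toy below (not the paper's subcode construction) encodes integers as 29N, adds two code words, and shows that flipping one bit of the sum leaves a nonzero residue mod 29.

```python
# Toy illustration of the arithmetic error detection behind AN codes
# (A = 29 here, echoing the 29N code in the abstract). Sums of code
# words remain multiples of 29, while a single-bit error shifts the
# result by +/- 2**i, which is never divisible by 29. This shows only
# the underlying property, not the paper's systematic-subcode design.
A = 29

def encode(n):
    return A * n

def is_valid(word):
    return word % A == 0

total = encode(11) + encode(23)      # adder output for valid operands
assert is_valid(total)

corrupted = total ^ (1 << 3)         # flip bit 3 of the adder output
print("single-bit error detected:", not is_valid(corrupted))
```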
Quantifying and reporting uncertainty from systematic errors.
Phillips, Carl V
2003-07-01
Optimal use of epidemiologic findings in decision making requires more information than standard analyses provide. It requires calculating and reporting the total uncertainty in the results, which in turn requires methods for quantifying the uncertainty introduced by systematic error. Quantified uncertainty can improve policy and clinical decisions, better direct further research, and aid public understanding, and thus enhance the contributions of epidemiology. The error quantification approach proposed here estimates a probability distribution for a bias-corrected effect measure, using externally derived distributions of bias levels. Using Monte Carlo simulation, corrections for multiple biases are combined by identifying the steps through which true causal effects become data, and (in reverse order) correcting for the errors introduced by each step. The bias-correction calculations are the same as those used in sensitivity analysis, but the resulting distribution of possible true values is more than a sensitivity analysis; it is a more complete reporting of the actual study results. The approach is illustrated with an application to a recent study that resulted in the drug, phenylpropanolamine, being removed from the market. PMID:12843772
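A minimal sketch of this kind of Monte Carlo bias analysis, here for exposure misclassification in a case-control 2x2 table; the cell counts and the sensitivity/specificity distributions are hypothetical stand-ins for study data and external validation information.

```python
import numpy as np

# Minimal sketch of a Monte Carlo bias correction for exposure
# misclassification. On each draw, sensitivity/specificity are sampled
# and the misclassification step is inverted to recover "true" counts,
# yielding a distribution of bias-corrected odds ratios.
rng = np.random.default_rng(2)
a, b = 200, 800   # cases:    exposed, unexposed (observed)
c, d = 100, 900   # controls: exposed, unexposed (observed)

corrected = []
for _ in range(10_000):
    se = rng.uniform(0.80, 0.95)   # sampled sensitivity
    sp = rng.uniform(0.90, 0.99)   # sampled specificity
    A_ = (a - (1 - sp) * (a + b)) / (se + sp - 1)
    C_ = (c - (1 - sp) * (c + d)) / (se + sp - 1)
    B_, D_ = (a + b) - A_, (c + d) - C_
    if min(A_, B_, C_, D_) > 0:    # discard impossible draws
        corrected.append((A_ * D_) / (B_ * C_))

lo, med, hi = np.percentile(corrected, [2.5, 50, 97.5])
print(f"bias-corrected OR: {med:.2f} "
      f"(95% simulation interval {lo:.2f}-{hi:.2f})")
```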
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Antenna pointing systematic error model derivations
NASA Technical Reports Server (NTRS)
Guiar, C. N.; Lansing, F. L.; Riggs, R.
1987-01-01
The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.
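The flavor of such a model is easy to convey in code. The sketch below uses standard geometric forms for a few of the terms named above; the coefficient values are invented, and a real model would be fit to observed pointing residuals.

```python
import numpy as np

# Sketch of a classical az-el systematic pointing model. The functional
# forms are the standard geometric ones for a few of the terms in the
# abstract; the coefficients are invented illustration values.
def pointing_correction(az, el, p):
    """Cross-elevation and elevation corrections (radians in and out)."""
    d_xel = (p["collim"]                                # RF axis collimation
             + p["az_enc"] * np.cos(el)                 # az encoder offset
             + p["nonorth"] * np.sin(el)                # axis non-orthogonality
             + p["tilt"] * np.sin(az - p["tilt_az"]) * np.sin(el))
    d_el = (p["el_enc"]                                 # el encoder offset
            + p["tilt"] * np.cos(az - p["tilt_az"])     # azimuth-axis tilt
            + p["flex"] * np.cos(el))                   # gravity flexure
    return d_xel, d_el

coeffs = dict(collim=2e-5, az_enc=1e-5, nonorth=5e-6,
              tilt=8e-6, tilt_az=0.3, el_enc=-1.5e-5, flex=3e-5)
print(pointing_correction(np.radians(120.0), np.radians(45.0), coeffs))
```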
Mars gravitational field estimation error
NASA Technical Reports Server (NTRS)
Compton, H. R.; Daniels, E. F.
1972-01-01
The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.
Exploring Systematic Error With Digital Video
NASA Astrophysics Data System (ADS)
Klassen, M. A. H.; Bloom, P. C.
2006-12-01
Digital video acquisition and analysis has become user-friendly and affordable with the advent of webcams and software such as Vernier’s LoggerPro. The analysis of the ballistic trajectory of a ball has been a fixture in the physics lab curriculum at Swarthmore College for many years. When we updated the experiment to use digital video rather than stroboscopic photography, the data acquisition and analysis became almost trivial. We realized this gave us the opportunity to use an experiment with very precise data and a well-defined expected result (the acceleration due to gravity, g) to teach about systematic error and ways to correct for it. In this paper we describe modifications to the lab procedure that demonstrate systematic errors due to calibration, perspective effects, and air drag.
Numerical Error Estimation with UQ
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Korn, Peter; Marotzke, Jochem
2014-05-01
Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists of extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted
Systematic errors in strong lens modeling
NASA Astrophysics Data System (ADS)
Johnson, Traci Lin; Sharon, Keren; Bayliss, Matthew B.
2015-08-01
The lensing community has made great strides in quantifying the statistical errors associated with strong lens modeling. However, we are just now beginning to understand the systematic errors. Quantifying these errors is pertinent to Frontier Fields science, as number counts and luminosity functions are highly sensitive to the value of the magnifications of background sources across the entire field of view. We are aware that models can be very different when modelers change their assumptions about the parameterization of the lensing potential (i.e., parametric vs. non-parametric models). However, models built while utilizing a single methodology can lead to inconsistent outcomes for different quantities, distributions, and qualities of redshift information regarding the multiple images used as constraints in the lens model. We investigate how varying the number of multiple image constraints and available redshift information of those constraints (e.g., spectroscopic vs. photometric vs. no redshift) can influence the outputs of our parametric strong lens models, specifically, the mass distribution and magnifications of background sources. We make use of the simulated clusters by M. Meneghetti et al. and the first two Frontier Fields clusters, which have a high number of multiply imaged galaxies with spectroscopically-measured redshifts (or input redshifts, in the case of simulated clusters). This work will inform not only Frontier Fields science, but also work on the growing collection of strong lensing galaxy clusters, most of which are less massive, are capable of lensing only a handful of galaxies, and are more prone to these systematic errors.
Odometry Error Covariance Estimation for Two Wheel Robot Vehicles
Kleeman, Lindsay
... of accuracy. The model assumes that wheel distance measurement errors are exclusively random zero-mean white noise. Systematic errors due to wheel radius and wheel base measurement are ignored, since these can ...
Quantum error correction of systematic errors using a quantum search framework
Lov K. Grover; Ben W. Reichardt
2005-01-01
Composite pulses are a quantum control technique for canceling out systematic control errors. We present a different composite pulse sequence inspired by quantum search. Our technique can correct a wider variety of systematic errors--including, for example, nonlinear over-rotational errors--than previous techniques. Concatenation of the pulse sequence can reduce a systematic error to an arbitrarily small level.
Quantum error correction of systematic errors using a quantum search framework
Reichardt, Ben W.; Grover, Lov K. [EECS Department, Computer Science Division, University of California, Berkeley, California 94720 (United States); Bell Laboratories, Lucent Technologies, 600-700 Mountain Avenue, Murray Hill, New Jersey 07974 (United States)
2005-10-15
Composite pulses are a quantum control technique for canceling out systematic control errors. We present a different composite pulse sequence inspired by quantum search. Our technique can correct a wider variety of systematic errors--including, for example, nonlinear over-rotational errors--than previous techniques. Concatenation of the pulse sequence can reduce a systematic error to an arbitrarily small level.
ERROR ESTIMATIONS FOR INDIRECT MEASUREMENTS
Kreinovich, Vladik
It is often difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure a distance to a distant galaxy or the amount of oil in a given well. Since we cannot ...
Medication errors in mental healthcare: a systematic review
Maidment, Ian D; Lelliott, Paul; Paton, Carol
2006-01-01
Background It has been estimated that medication error harms 1–2% of patients admitted to general hospitals. There has been no previous systematic review of the incidence, cause or type of medication error in mental healthcare services. Methods A systematic literature search was undertaken for studies that examined the incidence or cause of medication error in one or more stage(s) of the medication-management process in the setting of a community- or hospital-based mental healthcare service. The results were examined in the context of the design of the study and the denominator used. Results All studies examined medication management processes, as opposed to outcomes. The reported rate of error was highest in studies that retrospectively examined drug charts, intermediate in those that relied on reporting by pharmacists to identify error and lowest in those that relied on organisational incident reporting systems. Only a few of the errors identified by the studies caused actual harm, mostly because they were detected and remedial action was taken before the patient received the drug. The focus of the research was on inpatients and prescriptions dispensed by mental health pharmacists. Conclusion Research about medication error in mental healthcare is limited. In particular, very little is known about the incidence of error in non-hospital settings or about the harm caused by it. Evidence is available from other sources that a substantial number of adverse drug events are caused by psychotropic drugs. Some of these are preventable and might, therefore, be due to medication error. On the basis of this and features of the organisation of mental healthcare that might predispose to medication error, priorities for future research are suggested. PMID:17142588
Medication Errors in the Southeast Asian Countries: A Systematic Review
Salmasi, Shahrzad; Khan, Tahir Mehmood; Hong, Yet Hoi; Ming, Long Chiau; Wong, Tin Wui
2015-01-01
Background Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed systematically to identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. Methods The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 by using Embase, Medline, Pubmed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. Results The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There was no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three were done on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. Discussion The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart, were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region which needs to be addressed if the issue of ME is to be fully understood and addressed. PMID:26340679
More on Systematic Error in a Boyle's Law Experiment
ERIC Educational Resources Information Center
McCall, Richard P.
2012-01-01
A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.
Systematic and random errors in electronic speckle photography.
Sjödahl, M; Benckert, L R
1994-11-01
Electronic speckle photography offers a simple and fast technique for measuring in-plane displacement fields in solid and fluid mechanics. Errors from undersampling, illumination divergence, and displacement magnitude have been analyzed and measured. The nature of the systematic error is such that a drift toward the closest integral pixel value is introduced. Because of the finite extent of the sensor area, considerable undersampling is tolerable before systematic errors occur. The random errors are mainly dependent on the effective ƒ-number of the imaging system and speckle decorrelation introduced by object displacement. When sampling at a rate of ~ 70% of the Nyquist frequency, we avoided systematic errors and minimized random errors. PMID:20941310
Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.
Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith
2013-09-01
Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory. PMID:23967914
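The linearized forward problem and its least-squares inversion can be sketched compactly; the geometry, sound speed, and the injected per-path clock offset below are all illustrative, not the paper's algorithms.

```python
import numpy as np

# Toy linearized travel-time inversion. For a path of length L_i with
# unit direction u_i, t_i ~ (L_i / c0) * (1 - (dc + v . u_i) / c0),
# where dc is the sound-speed perturbation (a temperature proxy) and v
# the wind vector; several paths give a linear least-squares problem.
# A constant offset is injected on one path to mimic a systematic error.
c0 = 340.0
angles = np.radians(np.arange(0, 360, 45))
L = np.full(angles.size, 200.0)
u = np.stack([np.cos(angles), np.sin(angles)], axis=1)

truth = np.array([1.2, 3.0, -1.0])                   # [dc, vx, vy]
t = (L / c0) * (1 - (truth[0] + u @ truth[1:]) / c0)
t[2] += 2e-4                                         # systematic error, path 3

A = -(L / c0**2)[:, None] * np.column_stack([np.ones_like(L), u])
est, *_ = np.linalg.lstsq(A, t - L / c0, rcond=None)
print("true:", truth, "estimated:", est.round(2))
```

The corrupted path biases the recovered wind and temperature, which is the situation the paper's second algorithm is designed to detect and correct.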
Error estimation for ORION baseline vector determination
NASA Astrophysics Data System (ADS)
Wu, S. C.
1980-06-01
Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.
Robust Estimation When More Than One Variable per Equation of Condition Has Error
Jefferys, William
Department of Astronomy, University of Texas at Austin. SUMMARY A least-squares adjustment in which more than one variable in an equation of condition has error ... Thus, for example, Huber (1981) ...
Concatenated composite pulses compensating simultaneous systematic errors
Masamitsu Bando; Tsubasa Ichikawa; Yasushi Kondo; Mikio Nakahara
2012-11-06
In NMR experiments and quantum computation, many pulse (quantum gate) sequences, called composite pulses, have been developed to suppress one of two dominant errors: a pulse-length error or an off-resonance error. In this paper we describe a general prescription for designing a single-qubit concatenated composite pulse (CCCP) that is robust against the two types of errors simultaneously. To this end, we introduce a new property, which is satisfied by some composite pulses and is sufficient to obtain a CCCP. We then introduce a general method to design CCCPs with shorter execution time and fewer pulses.
Improved Systematic Pointing Error Model for the DSN Antennas
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.
2011-01-01
New pointing models have been developed for large reflector antennas whose construction is founded on elevation over azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for corrections of their systematic pointing errors; it achieved significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translate to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, this innovation provides an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
Systematic t-Unidirectional Error-Detecting Codes over Zm
Bella Bose; Samir Elmougy; Luca G. Tallini
2007-01-01
Some new classes of systematic t-unidirectional error-detecting codes over Zm are designed. It is shown that the constructed codes can detect two errors using two check digits. Furthermore, the constructed codes can detect up to m^(r-2) + r - 2 errors using r ≥ 3 check digits. A bound on the maximum number of detectable errors using r check digits is also given.
Adjoint Error Estimation for Linear Advection
Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S
2011-03-30
An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
A POSTERIORI ERROR ESTIMATION FOR ELLIPTIC EIGENPROBLEMS
KLAUS NEYMEYR
1999-01-01
An a posteriori error estimator is presented for a subspace implementation of preconditioned inverse iteration, which is the well-known inverse iteration procedure, where the occurring system of linear equations is approximately solved by using a preconditioner. The error estimator is integrated in an adaptive multigrid algorithm to compute approximations of a modest number of the smallest eigenvalues.
Systematic parameter errors in inspiraling neutron star binaries.
Favata, Marc
2014-03-14
The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled. PMID:24679276
Makarenkov, Vladimir
(http://chembank.med.harvard.edu/). This data bank has been maintained by the Institute for Chemistry and Cell Biology. Key words: high-throughput screening, systematic error, background evaluation, trend-surface analysis. ... showed examples of systematic trends in HTS plates; the trends of this kind are present in all plates ...
Sideridis, George D; Tsaousis, Ioannis; Katsis, Athanasios
2014-01-01
The purpose of the present studies was to test the effects of systematic sources of measurement error on the parameter estimates of scales using the Rasch model. Studies 1 and 2 tested the effects of mood and affectivity. Study 3 evaluated the effects of fatigue. Lastly, studies 4 and 5 tested the effects of motivation on a number of parameters of the Rasch model (e.g., ability estimates). Results indicated that (a) the parameters of interest and the psychometric properties of the scales were substantially distorted in the presence of all systematic sources of error, and (b) the use of HGLM provides a way of adjusting the parameter estimates in the presence of these sources of error. It is concluded that validity in measurement requires a thorough evaluation of potential sources of error and appropriate adjustments on each occasion. PMID:25232668
SIMEX and standard error estimation in semiparametric measurement error models.
Apanasovich, Tatiyana V; Carroll, Raymond J; Maity, Arnab
2009-01-01
SIMEX is a general-purpose technique for measurement error correction. There is a substantial literature on the application and theory of SIMEX for purely parametric problems, as well as for purely non-parametric regression problems, but there is neither application nor theory for semiparametric problems. Motivated by an example involving radiation dosimetry, we develop the basic theory for SIMEX in semiparametric problems using kernel-based estimation methods. This includes situations that the mismeasured variable is modeled purely parametrically, purely non-parametrically, or that the mismeasured variable has components that are modeled both parametrically and nonparametrically. Using our asymptotic expansions, easily computed standard error formulae are derived, as are the bias properties of the nonparametric estimator. The standard error method represents a new method for estimating variability of nonparametric estimators in semiparametric problems, and we show in both simulations and in our example that it improves dramatically on first order methods. We find that for estimating the parametric part of the model, standard bandwidth choices of order O(n^(-1/5)) are sufficient to ensure asymptotic normality, and undersmoothing is not required. SIMEX has the property that it fits misspecified models, namely ones that ignore the measurement error. Our work thus also more generally describes the behavior of kernel-based methods in misspecified semiparametric problems. PMID:19609371
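The core SIMEX recipe is easy to state in code. The sketch below applies it to the slope of a simple linear regression with a mismeasured covariate (the paper's semiparametric setting is far more general): remeasurement noise is added at several levels lambda, the naive estimate is recomputed, and a quadratic in lambda is extrapolated back to lambda = -1.

```python
import numpy as np

# Compact SIMEX demo for a regression slope attenuated by measurement
# error in x. Extra noise of variance lambda * sigma_u**2 is added, the
# naive slope refit, and a quadratic in lambda extrapolated to -1.
rng = np.random.default_rng(3)
n, sigma_u = 2000, 0.8
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)   # true slope = 2
w = x + rng.normal(scale=sigma_u, size=n)     # mismeasured covariate

def naive_slope(wv):
    return np.polyfit(wv, y, 1)[0]

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([naive_slope(w + rng.normal(scale=np.sqrt(l) * sigma_u,
                                              size=n))
                   for _ in range(50)])
          for l in lams]

quad = np.polyfit(lams, slopes, 2)            # quadratic extrapolant
print("naive slope:", round(slopes[0], 3))
print("SIMEX slope:", round(np.polyval(quad, -1.0), 3))
```

The naive slope is attenuated toward zero by the factor 1/(1 + sigma_u^2); the extrapolated SIMEX estimate recovers a value close to the true slope of 2.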
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
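A toy version of the propagation step, with an invented power curve and wind sample standing in for the 28 manufacturer curves the authors fit:

```python
import numpy as np

# Toy propagation of a wind-speed measurement error through a power
# curve. The curve points and the Weibull wind sample are invented
# stand-ins; the paper uses real curves and Lagrange interpolation.
speeds = np.arange(3, 16)                            # m/s
power = np.array([25, 80, 160, 280, 450, 650, 880,   # kW, illustrative
                  1120, 1350, 1520, 1620, 1680, 1700])

rng = np.random.default_rng(4)
v = np.clip(rng.weibull(2.0, size=5000) * 8.0, speeds[0], speeds[-1])

def mean_power(vv):
    return np.interp(vv, speeds, power).mean()

base = mean_power(v)
for err in (-0.10, 0.10):
    shifted = np.clip(v * (1 + err), speeds[0], speeds[-1])
    print(f"{err:+.0%} speed error -> "
          f"{(mean_power(shifted) - base) / base:+.1%} power error")
```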
Recent experiences with error estimation and adaptivity
Haque, Khalid Ansar
1991-01-01
This work was supported by the Texas Advanced Research/Technology Program under Contracts TATP-70120 and TARP-70410 through the Texas Engineering Experiment Station.
Error Estimates for Numerical Integration Rules
ERIC Educational Resources Information Center
Mercer, Peter R.
2005-01-01
The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.
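A classical instance of such an estimate, checked numerically for the composite trapezoidal rule (the improved Fourier-based bounds of the article are not reproduced here):

```python
import numpy as np

# Classical error estimate checked numerically: the composite
# trapezoidal rule satisfies |I - T_h| <= (b - a) * h**2 * max|f''| / 12.
# Here f(x) = sin(x) on [0, pi], so max|f''| = 1 and I = 2 exactly.
a, b, n = 0.0, np.pi, 64
x = np.linspace(a, b, n + 1)
h = (b - a) / n

T_h = np.trapz(np.sin(x), x)
print(f"actual error: {abs(2.0 - T_h):.2e}")
print(f"error bound:  {(b - a) * h**2 / 12.0:.2e}")
```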
Systematic errors in power measurements made with a dual six-port ANA
NASA Astrophysics Data System (ADS)
Hoer, Cletus A.
1989-07-01
The systematic error in measuring power with a dual 6-port Automatic Network Analyzer was determined. Equations for estimating systematic errors due to imperfections in the test port connector, imperfections in the connector on the power standard, and imperfections in the impedance standards used to calibrate the 6-port for measuring reflection coefficient were developed. These are the largest sources of error associated with the 6-port. For 7 mm connectors, all systematic errors which are associated with the 6-port add up to a worst-case uncertainty of ±0.00084 in measuring the ratio of the effective efficiency of a bolometric power sensor relative to that of a standard power sensor.
Zhang Le; Timbie, Peter; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, Paul M.; Wandelt, Benjamin D.; Bunn, Emory F.
2013-06-01
We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that do not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.
On systematic single asymmetric error-correcting codes
Bella Bose; Sulaiman Al-bassam
2000-01-01
It is proved that for all values of code length n, except when n=2, 4, and 8 and possibly when n=2^r and n=2^r+1, where r≥1, the Hamming codes are also optimal systematic single asymmetric error-correcting codes. For the cases n=2^r and n=2^r+1, r≥4, when not all information words are used, two efficient systematic 1-asymmetric codes are described
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
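The premise can be demonstrated on a synthetic problem whose Bayes rate is known in closed form; in the sketch below the "classifiers" are the true posterior plus independent noise, a crude stand-in for separately trained models, and averaging their outputs moves the ensemble's error toward the Bayes rate.

```python
import math
import numpy as np

# Two unit-variance Gaussian classes at +/-mu give a Bayes error of
# Phi(-mu) and a posterior P(1|x) = 1/(1 + exp(-2*mu*x)). Each ensemble
# member outputs that posterior corrupted by independent noise.
rng = np.random.default_rng(5)
mu, n, members = 1.0, 200_000, 9
labels = rng.integers(0, 2, size=n)
x = rng.normal(loc=np.where(labels == 1, mu, -mu))

true_post = 1.0 / (1.0 + np.exp(-2.0 * mu * x))
outputs = np.clip(true_post + rng.normal(scale=0.25, size=(members, n)),
                  0.0, 1.0)

bayes = 0.5 * (1.0 + math.erf(-mu / math.sqrt(2.0)))   # Phi(-mu)
print(f"analytic Bayes rate: {bayes:.4f}")
print(f"single classifier:   {np.mean((outputs[0] > 0.5) != labels):.4f}")
print(f"averaged ensemble:   "
      f"{np.mean((outputs.mean(axis=0) > 0.5) != labels):.4f}")
```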
Efficient error estimation in quantum key distribution
NASA Astrophysics Data System (ADS)
Li, Mo; Treeviriyanupab, Patcharapong; Zhang, Chun-Mei; Yin, Zhen-Qiang; Chen, Wei; Han, Zheng-Fu
2015-01-01
In a quantum key distribution (QKD) system, the error rate needs to be estimated for determining the joint probability distribution between legitimate parties, and for improving the performance of key reconciliation. We propose an efficient error estimation scheme for QKD, called the parity comparison method (PCM). In the proposed method, the parities of groups of sifted key bits are analysed to estimate the quantum bit error rate, instead of using the traditional key sampling. From the simulation results, the proposed method evidently improves the accuracy and decreases the revealed information in most realistic application situations. Project supported by the National Basic Research Program of China (Grant Nos. 2011CBA00200 and 2011CB921200) and the National Natural Science Foundation of China (Grant Nos. 61101137, 61201239, and 61205118).
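A minimal sketch of the parity-comparison idea, assuming i.i.d. bit errors: for a block of k bits with per-bit error rate e, the parities of Alice's and Bob's blocks disagree with probability p = (1 - (1 - 2e)^k)/2, which can be inverted to estimate e. The function name and block size are illustrative; the paper's PCM is more elaborate than this skeleton.

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_qber_by_parity(alice, bob, block_size=8):
    """Estimate the quantum bit error rate from block-parity comparisons.

    For i.i.d. bit errors with rate e, a block of k bits has mismatched
    parity with probability p = (1 - (1 - 2e)^k) / 2; we invert this.
    """
    k = block_size
    n_blocks = len(alice) // k
    a = alice[:n_blocks * k].reshape(n_blocks, k)
    b = bob[:n_blocks * k].reshape(n_blocks, k)
    p_hat = np.mean(a.sum(axis=1) % 2 != b.sum(axis=1) % 2)
    p_hat = min(p_hat, 0.5 - 1e-12)     # keep the inversion well defined
    return 0.5 * (1.0 - (1.0 - 2.0 * p_hat) ** (1.0 / k))

# Simulated sifted keys with a 3% error rate
n, true_e = 100_000, 0.03
alice = rng.integers(0, 2, n)
errors = rng.random(n) < true_e
bob = alice ^ errors

print(f"estimated QBER: {estimate_qber_by_parity(alice, bob):.4f}")
```

Only one parity bit per block is exchanged, so roughly 1/k as much information is revealed as when sampling and disclosing individual key bits.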
SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION
Lee, Khee-Gan
2012-07-10
Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that underestimates (overestimates) in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ -0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.
The Effect of Systematic Error in Forced Oscillation Testing
NASA Technical Reports Server (NTRS)
Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.
2012-01-01
One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angles of attack and/or high angular rates, the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.
Reducing Measurement Error in Student Achievement Estimation
ERIC Educational Resources Information Center
Battauz, Michela; Bellio, Ruggero; Gori, Enrico
2008-01-01
The achievement level is a variable measured with error that can be estimated by means of the Rasch model. Teacher grades also measure the achievement level, but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of the achievement level based on the theory developed…
Estimation of Error Rates in Discriminant Analysis
Peter A. Lachenbruch; M. Ray Mickey
1968-01-01
Several methods of estimating error rates in Discriminant Analysis are evaluated by sampling methods. Multivariate normal samples are generated on a computer which have various true probabilities of misclassification for different combinations of sample sizes and different numbers of parameters. The two methods in most common use are found to be significantly poorer than some new methods that are proposed.
Reducing systematic errors in measurements made by a SQUID magnetometer
NASA Astrophysics Data System (ADS)
Kiss, L. F.; Kaptás, D.; Balogh, J.
2014-11-01
A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. For an examined Co1.9Fe1.1Si Heusler alloy and for pure iron and nickel samples, the accuracy could be increased beyond the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors - radial displacement in particular - and not by instrumental or environmental noise.
STATISTICAL MODEL OF SYSTEMATIC ERRORS: AN ASSESSMENT OF THE BA-CU AND CU-Y PHASE DIAGRAM
Rudnyi, Evgenii B.
Mathematical statistics suggests random and mixed models [1], wherein deviations from the theoretical model are treated as experimental errors. The application of a method devised in mathematical statistics, i.e., the estimation…
STANDARD ERRORS IN SYSTEMATIC SAMPLING
Glasbey, Chris; Bierman, Stijn
How many acorns are under an oak tree? The standard estimator of the total is T̂ = (M/n) Σ_{i=1}^{n} y_i. For this problem the truth is known from a complete census: T = 85306 acorns. Design-based inference with the acorn data suggests that systematic sampling is unknowably most precise; in the absence of trend, other…
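The difficulty these slides point at can be made concrete with a small simulation, assuming a synthetic "quadrat count" population: enumerate every possible systematic sample (one per random start), compute the expansion estimator, and compare the true root-mean-square error across starts with the naive standard error computed as if the sample were a simple random sample.

```python
import numpy as np

rng = np.random.default_rng(2)

# A population with smooth trend plus noise, e.g. acorn counts in M quadrats
M = 1000
y = 50 + 30 * np.sin(np.arange(M) / 40.0) + rng.poisson(10, M)
T_true = y.sum()

n = 50                        # sample size; sampling interval is M // n
step = M // n
T_hats, se_srs = [], []
for start in range(step):     # every possible systematic sample (random start)
    s = y[start::step][:n]
    T_hats.append(M / n * s.sum())            # expansion estimator T_hat
    # Naive SE treating the systematic sample as a simple random sample
    se_srs.append(M * s.std(ddof=1) / np.sqrt(n))

rmse = np.sqrt(np.mean((np.array(T_hats) - T_true) ** 2))
print(f"true total            {T_true}")
print(f"true RMSE over starts {rmse:.1f}")
print(f"mean naive SRS SE     {np.mean(se_srs):.1f}")
```

For populations with smooth trend the two figures can differ substantially: the extra precision of systematic sampling is real, but hard to estimate from a single sample.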
Corrected Rasch asymptotic standard errors for person ability estimates.
Smith, R M
1998-01-01
Most calibration programs designed for the family of Rasch psychometric models report the asymptotic standard errors for person and item measure estimates resulting from the calibration process. Although these estimates are theoretically correct, they may be influenced by any number of factors, e.g., restrictions due to the loss of degrees of freedom in estimation, targeting of the instrument, i.e., the degree of offset between mean item difficulty and mean person ability, and the presence of misfit in the data. The effect of these factors on the standard errors reported for the person has not been previously reported. The purpose of this study was to investigate the effects of these three factors on the asymptotic standard errors for person measures using simulated data. The results indicate that asymptotic errors systematically underestimate the observed standard deviation of ability in simulated data, though this underestimation is usually small for targeted instruments with reasonable sample size. However, the underestimation can easily be corrected with a simple linear function. These simulations use only dichotomous data and the results may not generalize to the rating scale and partial credit models. PMID:9803720
Standard Errors of Mean, Variance, and Standard Deviation Estimators
Ahn, Sangtae; Fessler, Jeffrey A.
We often estimate the mean, variance, or standard deviation from a sample of elements and present the estimates with standard errors or error bars (in plots) as well. A standard error of a statistic (or estimator)…
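Under an approximate Gaussian assumption the standard errors of these three estimators have simple closed forms: SE(mean) = s/√n, SE(s²) = s²√(2/(n-1)) and SE(s) ≈ s/√(2(n-1)). A minimal sketch follows; the general, non-Gaussian case involves the sample fourth moment.

```python
import numpy as np

def standard_errors(x):
    """Standard errors of the sample mean, variance, and standard deviation.

    The variance/std formulas assume approximately Gaussian data; in the
    general case the sample fourth moment enters as well.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    s = x.std(ddof=1)
    return {
        "mean": s / np.sqrt(n),
        "variance": s**2 * np.sqrt(2.0 / (n - 1)),
        "std": s / np.sqrt(2.0 * (n - 1)),
    }

rng = np.random.default_rng(3)
print(standard_errors(rng.normal(10.0, 2.0, size=500)))
```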
Application of systematic error bounds to detection limits for practical counting.
Mayer, D; Dauer, L
1993-07-01
Overly optimistic estimates of detection limits can result in the use of unrealistic conservatism for decisions about the presence of activity. In some practical counting situations, overly conservative detection limits can result in economically impractical actions. To help preclude such actions, systematic error bounds, uncertainties, and confidence levels can be used when determining critical levels (Lc), detection limits (Ld), and minimum detectable concentrations. This note discusses the selection of such error bounds and the development of detection limit parameters for practical applications. These parameters are shown to be successfully employed in sample activity and measurement process capability decisions for typical counting instruments. PMID:8505234
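A hedged sketch of Currie-style counting limits follows, with an illustrative knob for folding a systematic error bound into the blank uncertainty. The note develops its own treatment of systematic bounds; this is only the generic Poisson-counting skeleton, using L_C = kσ₀ and L_D = k² + 2L_C.

```python
import numpy as np

def detection_limits(blank_counts, sys_frac=0.0, k=1.645):
    """Currie-style critical level L_C and detection limit L_D (net counts).

    blank_counts : expected blank (background) counts B
    sys_frac     : fractional systematic error bound folded into the blank
                   uncertainty (illustrative knob, not the note's formulation)
    k            : one-sided coverage factor (1.645 for 5% alpha and beta)
    """
    B = blank_counts
    # Paired-blank sigma, inflated by the systematic bound in quadrature
    sigma0 = np.sqrt(2.0 * B + (sys_frac * B) ** 2)
    L_C = k * sigma0
    L_D = k**2 + 2.0 * L_C        # Currie's L_D = k^2 + 2 L_C for counting
    return L_C, L_D

for sf in (0.0, 0.05, 0.10):
    Lc, Ld = detection_limits(blank_counts=400, sys_frac=sf)
    print(f"sys bound {sf:4.0%}:  L_C = {Lc:6.1f}   L_D = {Ld:6.1f}")
```

Inflating σ₀ with a systematic bound raises both L_C and L_D, which is the mechanism by which realistic error bounds guard against overly optimistic detection limits.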
Martin, D.L.
1992-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.
Statistical and systematic errors in redshift-space distortion measurements from large surveys
NASA Astrophysics Data System (ADS)
Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.
2012-12-01
We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(r_p, π) on scales larger than 3 h^-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique. This is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc^-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach, to quickly and accurately predict statistical errors on RSD expected from future surveys.
Ultraspectral Sounding Retrieval Error Budget and Estimation
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping
2011-01-01
The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and a demonstration is made with measurements from the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).
Error concealment using multiresolution motion estimation
NASA Astrophysics Data System (ADS)
Tsai, Augustine; Wiener, Stephen M.; Wilder, Joseph
1995-10-01
An error concealment scheme for MPEG video networking is presented. Cell loss occurs in the presence of network congestion and buffer overflow. This phenomenon of cell loss transforms into lost image blocks in the decoding process, which can severely degrade the viewing quality. The new method differs from the conventional concealment by its exploitation of spatial and temporal redundancies in large scale. The motion estimation is carried out by registering images within a multiresolution pyramid. The global motion is estimated in the lowest resolution level, and is then used to update and refine the local motion. The local motion is further refined iteratively at higher resolution levels. An affine transform is used to extract translation, scaling and rotation parameters. In many applications where there is significant camera motion (e.g., remote surveillance), the new method performs better than the conventional concealment.
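The coarse-to-fine idea can be sketched with a toy global-translation estimator in pure NumPy: build an image pyramid by block averaging, find the best integer shift at the coarsest level by exhaustive search, then double and locally refine the estimate at each finer level. The paper additionally fits an affine transform (translation, scaling, rotation) and iteratively refines local motion; this sketch recovers only a global shift.

```python
import numpy as np

def downsample(img):
    """One pyramid level down via 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])

def best_shift(ref, tgt, center, radius):
    """Exhaustive search for the integer shift minimizing SSD near `center`."""
    best, argbest = np.inf, center
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            shifted = np.roll(np.roll(tgt, dy, axis=0), dx, axis=1)
            ssd = np.sum((ref - shifted) ** 2)
            if ssd < best:
                best, argbest = ssd, (dy, dx)
    return argbest

def pyramid_motion(ref, tgt, levels=3, radius=2):
    """Estimate global translation coarse-to-fine through an image pyramid."""
    pyr = [(ref, tgt)]
    for _ in range(levels - 1):
        pyr.append((downsample(pyr[-1][0]), downsample(pyr[-1][1])))
    shift = (0, 0)
    for r, t in reversed(pyr):                   # coarsest level first
        shift = (2 * shift[0], 2 * shift[1])     # up-scale coarse estimate
        shift = best_shift(r, t, shift, radius)  # ...then refine locally
    return shift

rng = np.random.default_rng(4)
ref = rng.random((128, 128))
tgt = np.roll(np.roll(ref, -9, axis=0), 5, axis=1)  # known motion
print(pyramid_motion(ref, tgt))  # (9, -5): the shift mapping tgt back to ref
```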
Factoring Algebraic Error for Relative Pose Estimation
Lindstrom, P; Duchaineau, M
2009-03-09
We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can directly be determined, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation or rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
Volume estimation of biological objects by systematic sections.
Mattfeldt, T
1987-01-01
The absolute volume of biological objects is often estimated stereologically from an exhaustive set of systematic sections. The usual volume estimator V is the sum of the section contents times the distance between sections. For systematic sectioning with a random start, it has been recently shown that V is unbiased when m, the ratio between projected object length and section distance, is an integer number (Cruz-Orive 1985). As this quantity is no integer in the real world, we have explored the properties of V in the general and realistic situation m ∈ ℝ. The unbiasedness of V under appropriate sampling conditions is demonstrated for the arbitrary compact set in 3 dimensions by a rigorous proof. Exploration of further properties of V for the general triaxial ellipsoid leads to a new class of non-elementary real functions with common formal structure which we denote as np-functions. The relative mean square error (CE^2) of V in ellipsoids is an oscillating differentiable np-function, which reduces to the known result CE^2 = 1/(5m^4) for integer m. As a biological example the absolute volumes of 10 left cardiac ventricles and their internal cavities were estimated from systematic sections. Monte Carlo simulation of replicated systematic sectioning is shown to be improved by using m ∈ ℝ instead of m ∈ ℕ. In agreement with the geometric model of ellipsoids with some added shape irregularities, mean empirical CE was proportional to m^-1.36 and m^-1.73 in the cardiac ventricle and its cavity. The considerable variance reduction by systematic sectioning is shown to be a geometric realization of the principle of antithetic variates. PMID:3437233
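The estimator itself is simple; below is a sketch for a sphere sectioned systematically with a random start, together with the integer-m error formula quoted above (which strictly applies to ellipsoid-like objects).

```python
import numpy as np

def cavalieri_volume(areas, d):
    """Cavalieri estimator: sum of section areas times section spacing d."""
    return d * np.sum(areas)

# Example: systematic sections with a random start through a sphere (R = 1)
rng = np.random.default_rng(5)
R, d = 1.0, 0.3
start = rng.uniform(0.0, d) - R            # random start within one spacing
z = np.arange(start, R, d)                 # section positions
areas = np.pi * np.clip(R**2 - z**2, 0.0, None)   # circular section areas
V_hat = cavalieri_volume(areas, d)

m = 2 * R / d                  # projected length / section distance
ce2 = 1.0 / (5.0 * m**4)       # relative MSE for integer m (Cruz-Orive 1985)
print(f"V_hat = {V_hat:.4f}  (true {4/3*np.pi:.4f}),  CE ~ {np.sqrt(ce2):.4f}")
```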
Iraq War mortality estimates: A systematic review
Tapp, Christine; Burkle, Frederick M; Wilson, Kumanan; Takaro, Tim; Guyatt, Gordon H; Amad, Hani; Mills, Edward J
2008-01-01
Background In March 2003, the United States invaded Iraq. The subsequent number, rates, and causes of mortality in Iraq resulting from the war remain unclear, despite intense international attention. Understanding mortality estimates from modern warfare, where the majority of casualties are civilian, is of critical importance for public health and protection afforded under international humanitarian law. We aimed to review the studies, reports and counts on Iraqi deaths since the start of the war and assessed their methodological quality and results. Methods We performed a systematic search of 15 electronic databases from inception to January 2008. In addition, we conducted a non-structured search of 3 other databases, reviewed study reference lists and contacted subject matter experts. We included studies that provided estimates of Iraqi deaths based on primary research over a reported period of time since the invasion. We excluded studies that summarized mortality estimates and combined non-fatal injuries and also studies of specific sub-populations, e.g. under-5 mortality. We calculated crude and cause-specific mortality rates attributable to violence and average deaths per day for each study, where not already provided. Results Thirteen studies met the eligibility criteria. The studies used a wide range of methodologies, varying from sentinel-data collection to population-based surveys. Studies assessed as the highest quality, those using population-based methods, yielded the highest estimates. Average deaths per day ranged from 48 to 759. The cause-specific mortality rates attributable to violence ranged from 0.64 to 10.25 per 1,000 per year. Conclusion Our review indicates that, despite varying estimates, the mortality burden of the war and its sequelae on Iraq is large. The use of established epidemiological methods is rare. This review illustrates the pressing need to promote sound epidemiologic approaches to determining mortality estimates and to establish guidelines for policy-makers, the media and the public on how to interpret these estimates. PMID:18328100
Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)
NASA Technical Reports Server (NTRS)
Adler, Robert; Gu, Guojun; Huffman, George
2012-01-01
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
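A schematic version of the screening-and-spread procedure, assuming toy gridded fields: products are kept only where their zonal means fall within ±50% of the base product's zonal mean, and the standard deviation s of the surviving products is taken as the estimated bias error. The real procedure screens ocean and land separately; this sketch ignores that distinction.

```python
import numpy as np

def estimate_bias_error(base, others):
    """Bias (systematic) error estimate following the described procedure.

    base   : (lat, lon) mean precipitation from the base product (e.g. GPCP)
    others : list of (lat, lon) fields from other algorithms/satellites
    A product is included only where its zonal mean lies within +/-50% of
    the base product's zonal mean; the std dev of included products is s.
    """
    zonal = base.mean(axis=1, keepdims=True)       # zonal-mean screening
    stack = [base]
    for p in others:
        ok = np.abs(p.mean(axis=1, keepdims=True) - zonal) <= 0.5 * zonal
        stack.append(np.where(ok, p, np.nan))      # drop out-of-range zones
    stack = np.array(stack)
    s = np.nanstd(stack, axis=0, ddof=1)           # estimated bias error s
    return s, s / base                             # absolute and relative s/m

rng = np.random.default_rng(6)
gpcp = 3.0 + rng.random((90, 180))                 # toy monthly means, mm/day
others = [gpcp * rng.normal(1.0, 0.1, gpcp.shape) for _ in range(4)]
s, rel = estimate_bias_error(gpcp, others)
print(f"median relative bias error: {np.median(rel):.2%}")
```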
A posteriori pointwise error estimates for the boundary element method
Paulino, G.H.; Gray, L.J.; Zarikian, V.
1995-01-01
This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.
A Test for Large-Scale Systematic Errors in Maps of Galactic Reddening
Michael J. Hudson
1998-12-19
Accurate maps of Galactic reddening are important for a number of applications, such as mapping the peculiar velocity field in the nearby Universe. Of particular concern are systematic errors which vary slowly as a function of position on the sky, as these would induce spurious bulk flow. We have compared the reddenings of Burstein & Heiles (BH) and those of Schlegel, Finkbeiner & Davis (SFD) to independent estimates of the reddening, for Galactic latitudes |b| > 10°. Our primary source of Galactic reddening estimates comes from comparing the difference between the observed B-V colors of early-type galaxies, and the predicted B-V color determined from the B-V-Mg_2 relation. We have fitted a dipole to the residuals in order to look for large-scale systematic deviations. There is marginal evidence for a dipolar residual in the comparison between the SFD maps and the observed early-type galaxy reddenings. If this is due to an error in the SFD maps, then it can be corrected with a small (13%) multiplicative dipole term. We argue, however, that this difference is more likely to be due to a small (0.01 mag.) systematic error in the measured B-V colors of the early-type galaxies. This interpretation is supported by a smaller, independent data set (globular cluster and RR Lyrae stars), which yields a result inconsistent with the early-type galaxy residual dipole. BH reddenings are found to have no significant systematic residuals, apart from the known problem in the region 230° < l < 310°, -20° < b < 20°.
A study of systematic errors in the PMD CamBoard nano
NASA Astrophysics Data System (ADS)
Chow, Jacky C. K.; Lichti, Derek D.
2013-04-01
Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.
Systematic vertical error in UAV-derived topographic models: Origins and solutions
NASA Astrophysics Data System (ADS)
James, Mike R.; Robson, Stuart
2014-05-01
Unmanned aerial vehicles (UAVs) equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs). However, although such DEMs may achieve centimetric detail, they can also display broad-scale systematic deformation (usually a vertical 'doming') that restricts their wider use. This effect can be particularly apparent in DEMs derived by structure-from-motion (SfM) processing, especially when control point data have not been incorporated in the bundle adjustment process. We illustrate that doming error results from a combination of inaccurate description of radial lens distortion and the use of imagery captured in near-parallel viewing directions. With such imagery, enabling camera self-calibration within the processing inherently leads to erroneous radial distortion values and associated DEM error. Using a simulation approach, we illustrate how existing understanding of systematic DEM error in stereo-pairs (from unaccounted radial distortion) up-scales in typical multiple-image blocks of UAV surveys. For image sets with dominantly parallel viewing directions, self-calibrating bundle adjustment (as normally used with images taken using consumer cameras) will not be able to derive radial lens distortion accurately, and will give associated systematic 'doming' DEM deformation. In the presence of image measurement noise (at levels characteristic of SfM software), and in the absence of control measurements, our simulations display domed deformation with amplitude of ~2 m over horizontal distances of ~100 m. We illustrate the sensitivity of this effect to variations in camera angle and flight height. Deformation will be reduced if suitable control points can be included within the bundle adjustment, but residual systematic vertical error may remain, accommodated by the estimated precision of the control measurements. Doming bias can be minimised by the inclusion of inclined images within the image set, for example, images collected during gently banked turns of a fixed-wing UAV or, if camera inclination can be altered, by just a few more oblique images with a rotor-based UAV. We provide practical flight plan solutions that, in the absence of control points, demonstrate a reduction in systematic DEM error by more than two orders of magnitude. DEM generation is subject to this effect whether a traditional photogrammetry or newer structure-from-motion (SfM) processing approach is used, but errors will be typically more pronounced in SfM-based DEMs, for which use of control measurements is often more limited. Although focussed on UAV surveying, our results are also relevant to ground-based image capture for SfM-based modelling.
Prediction and simulation errors in parameter estimation for nonlinear systems
NASA Astrophysics Data System (ADS)
Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.
2010-11-01
This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results—obtained using different model classes—show that, in general the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of error-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
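The distinction between the two cost functions can be made concrete with a small input-driven model, assuming a known excitation signal u and an evolutionary optimizer standing in for those used in the article (SciPy's differential evolution here): one-step-ahead prediction feeds measured outputs back into the predictor, while free-run simulation feeds back the model's own outputs.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(7)

# Data from y_k = a*y_{k-1} + b*u_{k-1}^2 + e_k, with a known input u
a_true, b_true = 0.8, 0.5
u = rng.uniform(-1.0, 1.0, 300)
y = np.zeros(300)
for k in range(1, y.size):
    y[k] = a_true * y[k-1] + b_true * u[k-1]**2 + 0.05 * rng.normal()

def prediction_error(theta, y, u):
    """One-step-ahead cost: measured past feeds each prediction."""
    a, b = theta
    y_hat = a * y[:-1] + b * u[:-1]**2
    return np.sum((y[1:] - y_hat) ** 2)

def simulation_error(theta, y, u):
    """Free-run cost: the model feeds back its own output."""
    a, b = theta
    ys = np.zeros_like(y)
    ys[0] = y[0]
    for k in range(1, y.size):
        ys[k] = a * ys[k-1] + b * u[k-1]**2
    return np.sum((y - ys) ** 2)

bounds = [(-1.0, 1.0), (-1.0, 1.0)]
for cost in (prediction_error, simulation_error):
    res = differential_evolution(cost, bounds, args=(y, u), seed=0)
    print(f"{cost.__name__:17s} -> a = {res.x[0]:+.3f}, b = {res.x[1]:+.3f}")
```

In this equation-error setting (true model inside the model class) both costs recover the parameters; the article's point is that they diverge once the model structure or noise assumptions are imperfect.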
Standard Error of Empirical Bayes Estimate in NONMEM® VI.
Kang, Dongwoo; Bae, Kyun-Seop; Houk, Brett E; Savic, Radojka M; Karlsson, Mats O
2012-04-01
The pharmacokinetics/pharmacodynamics analysis software NONMEM® output provides model parameter estimates and associated standard errors. However, the standard error of empirical Bayes estimates of inter-subject variability is not available. A simple and direct method for estimating standard error of the empirical Bayes estimates of inter-subject variability using the NONMEM® VI internal matrix POSTV is developed and applied to several pharmacokinetic models using intensively or sparsely sampled data for demonstration and to evaluate performance. The computed standard error is in general similar to the results from other post-processing methods and the degree of difference, if any, depends on the employed estimation options. PMID:22563254
Systematic Errors in GNSS Radio Occultation Data - Part 2
NASA Astrophysics Data System (ADS)
Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc
2014-05-01
The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites show indeed an amazing consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since it does not need additional background information in its retrieval. Concurrent trends in water vapor could, however, pretend false trends in dry temperature. We analyzed this effect, and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights where specified values of differences between dry and physical temperature are encountered increase by about 150 m per decade, with little difference among the 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity to temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are not negligible anymore. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows one to compute sensitivities to changes in atmospheric composition, where we found that the effect of the CO2 increase is currently almost exactly balanced by the counteracting effect of the concurrent O2 decrease.
Systematic intensity errors caused by spectral truncation: origin and remedy.
Lenstra, A T; Van Loock, J F; Rousseau, B; Maes, S T
2001-11-01
The wavelength dispersion of graphite(002)-monochromated X-ray beams has been determined for a Cu, a Mo and an Rh tube. The observed values for Δλ/λ were 0.03, 0.14 and 0.16, respectively. The severe reduction in monochromaticity as a function of wavelength is determined by the absorption coefficient μ of the monochromator; μ(monochromator) varies with λ^3. For an Si monochromator with its much larger absorption coefficient, Δλ/λ values of 0.03 were found, regardless of the X-ray tube. This value matches a beam divergence defined by the size of the focus and of the crystal. This holds as long as the monochromator acts as a mirror, i.e. μ(monochromator) is large. In addition to monochromaticity, homogeneity of the X-ray beam is also an important factor. For this aspect the mosaicity of the monochromator is vital. In cases like Si, in which mosaicity is practically absent, the reflected X-ray beam shows an intensity distribution equal to the mass projection of the filament on the anode. Smearing by mosaicity generates a homogeneous beam. This makes a graphite monochromator attractive in spite of its poor performance as a monochromator for λ < 1 Å. This choice means that scan-angle-induced spectral truncation errors are here to stay. These systematic intensity errors can be taken into account after measurement by a software correction based on the real beam spectrum and the applied measuring mode. A spectral modeling routine is proposed, which is applied to the graphite-monochromated Mo Kα beam. Both elements in that spectrum, i.e. the characteristic α1 and α2 emission lines and the Bremsstrahlung, were analyzed using the 6,3,18 reflection of Al2O3 (s = 1.2 Å^-1). The spectral information obtained was used to calculate the truncation errors for intensities measured in an ω/2θ scan mode. The results underline the correctness of previous work on the structure of NiSO4·6H2O [Rousseau, Maes & Lenstra (2000). Acta Cryst. A56, 300-307]. PMID:11679692
CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes
NASA Technical Reports Server (NTRS)
Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.
2012-01-01
Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite-based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine the sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward-moving air masses are systematically different from variations in dry equatorward-moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional-scale flux errors enhanced, most notably in Europe (0.43 ± 0.35 PgC/yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.
A Note on Confidence Interval Estimation and Margin of Error
ERIC Educational Resources Information Center
Gilliland, Dennis; Melfi, Vince
2010-01-01
Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…
Systematic single limited magnitude asymmetric error correcting codes
T. Kløve; Bella Bose; Noha Elarief
2010-01-01
In the asymmetric error model, a symbol a over an alphabet Zq = {0,1,..., q - 1} may be modified during transmission into b, where b ≤ a (assuming that the dominant error type is the decreasing error). For some applications, the error magnitude a - b is not likely to exceed a certain threshold l. One such application is…
NASA Astrophysics Data System (ADS)
Gourrion, J.; Guimbard, S.; Sabia, R.; Portabella, M.; Gonzalez, V.; Turiel, A.; Ballabrera, J.; Gabarro, C.; Perez, F.; Martinez, J.
2012-04-01
The Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument onboard the Soil Moisture and Ocean Salinity (SMOS) mission was launched on November 2nd, 2009 with the aim of providing, over the oceans, synoptic sea surface salinity (SSS) measurements with spatial and temporal coverage adequate for large-scale oceanographic studies. For each single satellite overpass, SSS is retrieved after collecting, at fixed ground locations, a series of brightness temperatures from successive scenes corresponding to various geometrical and polarization conditions. SSS is inverted through minimization of the difference between reconstructed and modeled brightness temperatures. To meet the challenging mission requirements, retrieved SSS needs to achieve an accuracy of 0.1 psu after averaging over a 10- or 30-day period and 2°x2° or 1°x1° spatial boxes, respectively. It is expected that, at such scales, the high radiometric noise can be reduced to a level such that remaining errors and inconsistencies in the retrieved salinity fields can essentially be related to (1) systematic brightness temperature errors in the antenna reference frame, (2) systematic errors in the Geophysical Model Function (GMF, used to model the observations and retrieve salinity) for specific environmental conditions and/or particular auxiliary parameter values, and (3) errors in the auxiliary datasets used as input to the GMF. The present communication primarily aims at addressing point 1 above and possibly point 2 for the full polarimetric information, i.e. issued from both co-polar and cross-polar measurements. Several factors may potentially produce systematic errors in the antenna reference frame: the unavoidable fact that all antennas are not perfectly identical, the imperfect characterization of the instrument response (e.g. antenna patterns), the accounting for receiver temperatures in the reconstruction, calibration using flat sky scenes, and the implementation of ripple reduction algorithms at sharp boundaries such as the Sky-Earth boundary. Data acquired over the Ocean rather than over Land are preferred for characterizing such errors because the variability of the emissivity sensed over the oceanic domain is an order of magnitude smaller than over land. Nevertheless, characterizing such errors over the Ocean is not a trivial task. Even if the natural variability is small, it is larger than the errors to be characterized, and the characterization strategy must account for it; otherwise the estimated patterns will vary significantly with the selected dataset. The communication will present results on a systematic error characterization methodology allowing stable error pattern estimates. Particular focus will be given to the critical data selection strategy and the analysis of the X- and Y-pol patterns obtained over a wide range of SMOS subdatasets. The impact of some image reconstruction options will be evaluated. It will be shown how the methodology is also an interesting tool to diagnose specific error sources. The criticality of an accurate description of Faraday rotation effects will be evidenced, and the latest results on the possibility to infer such information from the full Stokes vector will be presented.
Runge-Kutta formula with cheap error estimate
Shampine, L.F.; Baca, L.S.
1984-12-01
A Runge-Kutta formula with a local error estimate derived from Simpson's rule has enjoyed some popularity in continuous simulation because the error estimate is free and because Hermite interpolation provides an approximate solution between steps. The procedure is analyzed here and compared to an alternative due to England. Interpolation can be justified for both procedures. The Simpson error estimate turns out to be badly flawed, but even if it were not, England's scheme would be substantially more efficient.
Small Sample Error Rate Estimation for k-NN Classifiers
Sholom M. Weiss
1991-01-01
Small sample error rate estimators for nearest-neighbor classifiers are examined and contrasted with the same estimators for three-nearest-neighbor classifiers. The performance of the bootstrap estimators, e0 and 0.632B, is considered relative to leaving-one-out and other cross-validation estimators. Monte Carlo simulations are used to measure the performance of the error-rate estimators. The experimental results are compared to previously reported simulations for
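A small-sample sketch of the estimators being compared, assuming scikit-learn: leave-one-out cross-validation, the e0 bootstrap (error on out-of-bag points), and the 0.632B combination of e0 with the apparent (resubstitution) error. Note the well-known quirk that the resubstitution error of a 1-NN classifier is zero, which is part of why these estimators behave differently for k = 1 and k = 3.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(8)

# Small-sample two-class Gaussian problem
n = 40
X = np.vstack([rng.normal(-1, 1, (n, 2)), rng.normal(1, 1, (n, 2))])
y = np.r_[np.zeros(n), np.ones(n)]

for k in (1, 3):
    clf = KNeighborsClassifier(n_neighbors=k)

    # Leave-one-out cross-validation error estimate
    loo = 1.0 - cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

    # e0 bootstrap: error on out-of-bag points, averaged over resamples
    B, e0_errs = 100, []
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))
        oob = np.setdiff1d(np.arange(len(y)), idx)
        if oob.size == 0:
            continue
        clf.fit(X[idx], y[idx])
        e0_errs.append(np.mean(clf.predict(X[oob]) != y[oob]))
    e0 = np.mean(e0_errs)

    # Apparent (resubstitution) error and the 0.632B combination
    app = np.mean(clf.fit(X, y).predict(X) != y)
    b632 = 0.368 * app + 0.632 * e0

    print(f"k={k}:  LOO {loo:.3f}   e0 {e0:.3f}   0.632B {b632:.3f}")
```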
CMB Beam Systematics: Impact on Lensing Parameter Estimation
N. J. Miller; M. Shimon; B. G. Keating
2009-03-20
The CMB's B-mode polarization provides a handle on several cosmological parameters most notably the tensor-to-scalar ratio, $r$, and is sensitive to parameters which govern the growth of large scale structure (LSS) and evolution of the gravitational potential. The primordial gravitational-wave- and secondary lensing-induced B-mode signals are very weak and therefore prone to various foregrounds and systematics. In this work we use Fisher-matrix-based estimations and apply, for the first time, Monte-Carlo Markov Chain (MCMC) simulations to determine the effect of beam systematics on the inferred cosmological parameters from five upcoming experiments: PLANCK, POLARBEAR, SPIDER, QUIET+CLOVER and CMBPOL. We consider beam systematics which couple the beam substructure to the gradient of temperature anisotropy and polarization (differential beamwidth, pointing and ellipticity) and beam systematics due to differential beam normalization (differential gain) and orientation (beam rotation) of the polarization-sensitive axes (the latter two effects are insensitive to the beam substructure). We determine allowable levels of beam systematics for given tolerances on the induced parameter errors and check for possible biases in the inferred parameters concomitant with potential increases in the statistical uncertainty. All our results are scaled to the 'worst case scenario'. In this case and for our tolerance levels, the beam rotation should not exceed the few-degree to sub-degree level, typical ellipticity is required to be 1% level, the differential gain allowed level is a few parts in $10^{3}$ to $10^{4}$, differential beamwidth upper limits are of the sub-percent level and differential pointing should not exceed the few- to sub-arcsec level.
Sample covariance based estimation of Capon algorithm error probabilities
Richmond, Christ D.
The method of interval estimation (MIE) provides a strategy for mean squared error (MSE) prediction of algorithm performance at low signal-to-noise ratios (SNR) below estimation threshold where asymptotic predictions fail. ...
Probabilistic state estimation in regimes of nonlinear error growth
Lawson, W. Gregory, 1975-
2005-01-01
State estimation, or data assimilation as it is often called, is a key component of numerical weather prediction (NWP). Nearly all implementable methods of state estimation suitable for NWP are forced to assume that errors ...
MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS
Hartmann, Ralf
…of uncertainty quantification, to estimate the error in the computed quantities. In recent years a posteriori error estimation has been developed for the accurate and efficient computation of single target quantities. The current approaches are based on computing…
Error Estimates for a Stochastic Impulse Control Problem
Bonnans, Frederic, E-mail: Frederic.Bonnas@inria.fr; Maroso, Stefania, E-mail: maroso@cmapx.polytechnique.fr; Zidani, Housnaa [CMAP, Ecole Polytechnique, INRIA-Futurs (France)], E-mail: Hasnaa.Zidani@ensta.fr
2007-05-15
We obtain error bounds for monotone approximation schemes of a stochastic impulse control problem. This is an extension of the theory of error estimates for the Hamilton-Jacobi-Bellman equation. We obtain almost the same estimate on the rate of convergence as for the equation without impulses.
Finite element error estimation and adaptivity based on projected stresses
Jung, J.
1990-08-01
This report investigates the behavior of a family of finite element error estimators based on projected stresses, i.e., continuous stresses that are a least-squares fit to the conventional Gauss point stresses. An error estimate based on element force equilibrium appears to be quite effective. Examples of adaptive mesh refinement for a one-dimensional problem are presented. Plans for two-dimensional adaptivity are discussed. 12 refs., 82 figs.
MEAN SQUARED ERROR ESTIMATION FOR SMALL AREAS WHEN THE SMALL AREA VARIANCES ARE ESTIMATED
Rivest, Louis-Paul
…for a small area, uncertainty concerning the estimation of the small area variances σ_i^2. The impact of adding… a generalization of Prasad and Rao's estimator for the mean squared errors of small area estimators. This new…
Formal Estimation of Errors in Computed Absolute Interaction Energies of Protein-ligand Complexes
Faver, John C.; Benson, Mark L.; He, Xiao; Roberts, Benjamin P.; Wang, Bing; Marshall, Michael S.; Kennedy, Matthew R.; Sherrill, C. David; Merz, Kenneth M.
2011-01-01
A largely unsolved problem in computational biochemistry is the accurate prediction of binding affinities of small ligands to protein receptors. We present a detailed analysis of the systematic and random errors present in computational methods through the use of error probability density functions, specifically for computed interaction energies between chemical fragments comprising a protein-ligand complex. An HIV-II protease crystal structure with a bound ligand (indinavir) was chosen as a model protein-ligand complex. The complex was decomposed into twenty-one (21) interacting fragment pairs, which were studied using a number of computational methods. The chemically accurate complete basis set coupled cluster theory (CCSD(T)/CBS) interaction energies were used as reference values to generate our error estimates. In our analysis we observed significant systematic and random errors in most methods, which was surprising especially for parameterized classical and semiempirical quantum mechanical calculations. After propagating these fragment-based error estimates over the entire protein-ligand complex, our total error estimates for many methods are large compared to the experimentally determined free energy of binding. Thus, we conclude that statistical error analysis is a necessary addition to any scoring function attempting to produce reliable binding affinity predictions. PMID:21666841
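The propagation step has a simple skeleton: per-fragment systematic errors (biases) accumulate linearly with the number of fragment pairs, while random errors accumulate in quadrature. The numbers below are illustrative only, not taken from the paper.

```python
import numpy as np

def propagate_fragment_errors(bias, sigma, n_fragments):
    """Propagate per-fragment systematic and random error estimates.

    bias  : mean signed error of the method per fragment pair (kcal/mol)
    sigma : std dev of the method's error per fragment pair (kcal/mol)
    Systematic errors add linearly; random errors add in quadrature.
    """
    total_bias = n_fragments * bias
    total_sigma = np.sqrt(n_fragments) * sigma
    return total_bias, total_sigma

# Illustrative numbers (not from the paper): even a small per-fragment
# bias accumulates a large total error over 21 fragment pairs
b, s = propagate_fragment_errors(bias=0.4, sigma=0.6, n_fragments=21)
print(f"total systematic error ~ {b:.1f} kcal/mol, random ~ +/-{s:.1f} kcal/mol")
```

Since experimental binding free energies are typically only a few kcal/mol, totals of this size illustrate why the authors argue error analysis must accompany any scoring function.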
Projecting the standard error of the Kaplan-Meier estimator.
Cantor, A B
2001-07-30
Clinical studies in which a major objective is to produce Kaplan-Meier estimates of survival probabilities should be designed to produce those estimates with a desired prespecified precision as measured by their standard errors. By considering the Peto and Greenwood formulae for the estimated standard error of the Kaplan-Meier estimate and replacing their constituents with expected values based on the study's design parameters, formulae for projected standard errors can be produced. These formulae are shown, through simulations, to be quite accurate. PMID:11439423
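For reference, the Greenwood formula underlying these projections is Var[Ŝ(t)] = Ŝ(t)² Σ_{t_i ≤ t} d_i/(n_i(n_i - d_i)). A minimal sketch with made-up survival data:

```python
import numpy as np

def kaplan_meier_greenwood(times, events):
    """Kaplan-Meier estimate with Greenwood standard errors.

    times  : follow-up times; events : 1 = event observed, 0 = censored.
    Greenwood: Var[S(t)] = S(t)^2 * sum_{t_i <= t} d_i / (n_i (n_i - d_i)).
    """
    times, events = np.asarray(times, float), np.asarray(events, int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    S, gsum, out = 1.0, 0.0, []
    for t in np.unique(times[events == 1]):
        n_i = np.sum(times >= t)                 # at risk just before t
        d_i = np.sum((times == t) & (events == 1))
        S *= 1.0 - d_i / n_i
        gsum += d_i / (n_i * (n_i - d_i))
        out.append((t, S, S * np.sqrt(gsum)))    # (time, S(t), SE)
    return out

t = [2, 3, 3, 5, 6, 7, 9, 11, 12, 15]
e = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
for row in kaplan_meier_greenwood(t, e):
    print("t = %4.1f   S = %.3f   SE = %.3f" % row)
```

The paper's design approach amounts to replacing the observed d_i and n_i in this formula with their expected values under the planned accrual and follow-up.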
Parameter estimation and error analysis in environmental modeling and computation
NASA Technical Reports Server (NTRS)
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-squares parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for the association of error with experimentally observed data.
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.
2003-01-01
We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.
Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada
NASA Astrophysics Data System (ADS)
Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.
2015-08-01
Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error and can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, it is possible for the posterior flux estimates to differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion model can help in understanding the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for the annual provincial emissions will likely work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.
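The CFM step can be illustrated with a toy linear-Gaussian inversion, assuming a random matrix H standing in for the transport model: the cost J(s) = (c - Hs)ᵀR⁻¹(c - Hs) + (s - s₀)ᵀB⁻¹(s - s₀) has an analytic minimizer, and its sensitivity to the assumed sizes of R and B is exactly the sensitivity the study reports.

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy setup: 3 sub-regions, 100 observations; H maps scaling factors to
# concentration signals (a stand-in for the transport model)
n_obs, n_reg = 100, 3
H = rng.random((n_obs, n_reg))
s_true = np.array([1.2, 0.8, 1.0])                  # true scaling factors
c = H @ s_true + 0.1 * rng.normal(size=n_obs)       # mock observations

s_prior = np.ones(n_reg)
B = 0.25**2 * np.eye(n_reg)                         # prior flux error covariance
R = 0.1**2 * np.eye(n_obs)                          # model-observation mismatch

# CFM: minimize J(s); for this linear-Gaussian case the minimizer is analytic
Rinv, Binv = np.linalg.inv(R), np.linalg.inv(B)
A = H.T @ Rinv @ H + Binv
s_post = np.linalg.solve(A, H.T @ Rinv @ c + Binv @ s_prior)

print("posterior scaling factors:", np.round(s_post, 3))
print("estimation error (%):     ", np.round(100 * (s_post / s_true - 1), 1))
```

An MCMC estimate would sample the same posterior rather than solving for its mode; in this linear-Gaussian toy the two coincide, whereas in the study's nonlinear setting they can differ.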
A review on the impact of systematic safety processes for the control of error in medicine.
Damiani, Gianfranco; Pinnarelli, Luigi; Scopelliti, Lucia; Sommella, Lorenzo; Ricciardi, Walter
2009-07-01
Among risk management initiatives, systematic safety processes (SSPs), implemented within health care organizations, could be useful in managing patient safety. The purpose of this article is to conduct a systematic literature review assessing the impact of SSPs on different error categories. Articles that investigated the relation between SSPs and clinical and organizational outcomes were selected from the scientific literature. The proportion and impact of proactive and reactive SSPs were calculated among five error categories. Proactive interventions had a more positive impact than reactive ones in reducing medication errors, technical errors and errors due to personnel. Proactive and reactive SSPs had similar effects in reducing errors related to a wrong procedure. A single reactive study had a non-positive influence on communication errors. Overall, a greater impact of proactive processes than reactive ones is reported. This article can help decision makers in identifying which SSP can be the most appropriate against specific error categories. PMID:19564841
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer problem with measurement error only.
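One simple way to make the covariance empirical, in the spirit described above, is to rescale the theoretical covariance by the average weighted residual variance, so that the actual residuals, which carry all error sources, set the overall scale. The sketch below assumes a linearised problem with an intentionally mismodelled noise level; it illustrates the idea, not necessarily the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 100, 3
A = rng.normal(size=(m, n))                 # linearised measurement matrix
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed = 0.1                         # assumed measurement sigma
sigma_actual = 0.25                         # actual (mismodelled) sigma
y = A @ x_true + rng.normal(0.0, sigma_actual, m)

W = np.eye(m) / sigma_assumed**2            # weights from the *assumed* errors
N = A.T @ W @ A
x_hat = np.linalg.solve(N, A.T @ W @ y)

P_theory = np.linalg.inv(N)                 # maps only assumed obs errors into state space
r = y - A @ x_hat                           # residuals carry *all* error sources
s2 = (r @ W @ r) / (m - n)                  # average weighted residual variance
P_empirical = s2 * P_theory                 # residual-scaled, empirical covariance

print("theoretical sigmas:", np.sqrt(np.diag(P_theory)))
print("empirical  sigmas:", np.sqrt(np.diag(P_empirical)))
```

With the mismodelled noise above, the theoretical sigmas are optimistic by the factor sigma_actual/sigma_assumed, while the empirical ones recover the correct scale.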
Brown, Richard J C; Roberts, Matthew R; Milton, Martin J T
2007-03-21
Calibrations involving the sequential addition of aliquots of a standard solution to a solution of unknown analyte content may exhibit a systematic error. We show that this systematic error is related to the ratio of the mass fractions in the standard and unknown solutions. This relationship is consistent with experimental results from the determination of lead in aqueous solution by anodic stripping voltammetry using 'Sequential' Standard Addition Calibration (S-SAC). The magnitude of this systematic error has been described mathematically and a correction calculated. These mathematical relationships form the basis for a proposal for best practice in the use of S-SAC. PMID:17386768
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
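For the linear case the key quantities reduce to a few matrix expressions: the parameter covariance C = s² (XᵀX)⁻¹, with s² the residual variance, and the standard error of the fitted function at x given by sqrt(g(x)ᵀ C g(x)), where g(x) is the vector of basis functions. A minimal sketch for a straight-line fit on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3, x.size)   # noisy line

X = np.vander(x, 2, increasing=True)               # design matrix [1, x]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = x.size - X.shape[1]
s2 = np.sum((y - X @ beta) ** 2) / dof             # residual variance estimate
C = s2 * np.linalg.inv(X.T @ X)                    # parameter covariance

print("parameter standard errors:", np.sqrt(np.diag(C)))
# standard error of the fitted function at each x: sqrt(g(x)^T C g(x))
se_fit = np.sqrt(np.einsum("ij,jk,ik->i", X, C, X))
print("se of fit at x=0 and x=10:", se_fit[0], se_fit[-1])
```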
NASA Astrophysics Data System (ADS)
Jung, Jaehoon; Kim, Sangpil; Hong, Sungchul; Kim, Kyoungmin; Kim, Eunsook; Im, Jungho; Heo, Joon
2013-07-01
This paper suggests simulation approaches for quantifying and reducing the effects of National Forest Inventory (NFI) plot location error on aboveground forest biomass and carbon stock estimation using the k-Nearest Neighbor (kNN) algorithm. Additionally, the effects of plot location error in pre-GPS and GPS NFI plots were compared. Two South Korean cities, Sejong and Daejeon, were chosen to represent the study area, for which four Landsat TM images were collected together with two NFI datasets established in both the pre-GPS and GPS eras. The effects of plot location error were investigated in two ways: systematic error simulation, and random error simulation. Systematic error simulation was conducted to determine the effect of plot location error due to mis-registration. All of the NFI plots were successively moved against the satellite image in 360° directions, and the systematic error patterns were analyzed on the basis of the changes of the Root Mean Square Error (RMSE) of the kNN estimation. In the random error simulation, the inherent random location errors in NFI plots were quantified by Monte Carlo simulation. After removal of both the estimated systematic and random location errors from the NFI plots, the RMSE% values were reduced by 11.7% and 17.7% for the two pre-GPS-era datasets, and by 5.5% and 8.0% for the two GPS-era datasets. The experimental results showed that the pre-GPS NFI plots were more subject to plot location error than were the GPS NFI plots. This study's findings demonstrate a potential remedy for reducing NFI plot location errors, which may improve the accuracy of carbon stock estimation in a practical manner, particularly in the case of pre-GPS NFI data.
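The systematic error simulation can be sketched as a grid search over a common shift applied to all plots. In the toy version below, a direct lookup of a synthetic biomass surface stands in for the kNN prediction from Landsat bands, and the grid size, plot count and hidden mis-registration are all invented; the minimum-RMSE shift recovers the (negated) registration offset.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic "image": smooth biomass surface on a 200 x 200 pixel grid
gx, gy = np.meshgrid(np.arange(200), np.arange(200))
surface = 100 + 40 * np.sin(gx / 15.0) + 30 * np.cos(gy / 20.0)

n_plots = 150
px = rng.uniform(5, 195, n_plots)
py = rng.uniform(5, 195, n_plots)
true_shift = np.array([3.0, -2.0])           # hidden mis-registration (pixels)
field_biomass = surface[py.astype(int), px.astype(int)]   # measured at true locations
# recorded coordinates are offset from where the biomass was actually measured
rx, ry = px + true_shift[0], py + true_shift[1]

def rmse_at_shift(dx, dy):
    """RMSE between plot biomass and the image value at shifted plot coordinates."""
    ix = np.clip((rx + dx).astype(int), 0, 199)
    iy = np.clip((ry + dy).astype(int), 0, 199)
    return np.sqrt(np.mean((field_biomass - surface[iy, ix]) ** 2))

# Systematic-error simulation: move all plots together and track the RMSE
shifts = [(dx, dy) for dx in range(-6, 7) for dy in range(-6, 7)]
best = min(shifts, key=lambda s: rmse_at_shift(*s))
print("estimated mis-registration:", best, "expected:", (-true_shift).tolist())
```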
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
Bias in parameter estimation of form errors
NASA Astrophysics Data System (ADS)
Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min
2014-09-01
The surface form qualities of precision components are critical to their functionalities. In precision instruments algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces will be over-weighted, making the fitted results biased and unstable. In this paper orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and the orthogonal assessment is analytically calculated and represented as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e., each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally, numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
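The difference between vertical (algebraic) and orthogonal assessment is easy to demonstrate on a one-dimensional profile. The sketch below uses scipy.odr for the orthogonal distance fit of a parabola whose deviations are generated along the surface normal; the profile, noise level and coefficient are invented, and the bias of the vertical fit grows with the slope of the profile.

```python
import numpy as np
from scipy import odr

rng = np.random.default_rng(5)
x = np.linspace(-1.0, 1.0, 80)
a_true = 2.0
y_clean = a_true * x**2                      # curved "surface" profile
# perturb points along the surface normal, as form deviations physically occur
dydx = 2 * a_true * x
nx, ny = -dydx, np.ones_like(x)
norm = np.hypot(nx, ny)
e = rng.normal(0.0, 0.05, x.size)
xd, yd = x + e * nx / norm, y_clean + e * ny / norm

# vertical (algebraic) fit: minimises z-direction deviations only
a_vert = np.sum(yd * xd**2) / np.sum(xd**4)

# orthogonal distance fit: minimises deviations along the normal direction
model = odr.Model(lambda beta, xv: beta[0] * xv**2)
out = odr.ODR(odr.Data(xd, yd), model, beta0=[1.0]).run()
print(f"vertical fit a = {a_vert:.4f}, orthogonal fit a = {out.beta[0]:.4f}, true a = {a_true}")
```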
Bootstrap Estimates of Standard Errors in Generalizability Theory
ERIC Educational Resources Information Center
Tong, Ye; Brennan, Robert L.
2007-01-01
Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
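A naive bootstrap for a one-facet person-by-item design resamples persons, re-estimates the ANOVA variance components, and takes the standard deviation over replicates; the bias problems mentioned above (and Brennan's corrections) are not addressed in this sketch, and all design sizes and variances are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
n_p, n_i = 100, 10                           # persons x items, one-facet design
var_p, var_res = 1.0, 2.0                    # true variance components
scores = (rng.normal(0, np.sqrt(var_p), (n_p, 1))        # person effects
          + rng.normal(0, np.sqrt(var_res), (n_p, n_i))) # residual (pi, e)

def variance_components(X):
    """ANOVA (expected mean squares) estimators for the p x i design."""
    np_, ni_ = X.shape
    grand = X.mean()
    pm = X.mean(axis=1, keepdims=True)       # person means
    im = X.mean(axis=0, keepdims=True)       # item means
    ms_res = ((X - pm - im + grand) ** 2).sum() / ((np_ - 1) * (ni_ - 1))
    ms_p = ni_ * ((pm - grand) ** 2).sum() / (np_ - 1)
    return (ms_p - ms_res) / ni_, ms_res     # sigma^2(p), sigma^2(pi,e)

# Bootstrap persons ("boot-p"): resample rows, re-estimate, take SDs
boot = np.array([variance_components(scores[rng.integers(0, n_p, n_p)])
                 for _ in range(1000)])
print("point estimates:", variance_components(scores))
print("boot-p standard errors:", boot.std(axis=0, ddof=1))
```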
Optimal, systematic q-ary codes correcting all asymmetric errors of limited magnitude
Noha Elarief; Bella Bose
2009-01-01
Systematic q-ary (q > 2) codes capable of correcting all asymmetric errors of maximum magnitude l, where l ≤ q - 2, are given. These codes are shown to be optimal. Further, simple encoding/decoding algorithms are described.
Radon measurements--discussion of error estimates for selected methods.
Zhukovsky, Michael; Onischenko, Alexandra; Bastrikov, Vladislav
2010-01-01
The main sources of uncertainties for grab sampling, short-term (charcoal canisters) and long-term (track detectors) measurements are: systematic bias of reference equipment; random Poisson and non-Poisson errors during calibration; random Poisson and non-Poisson errors during measurements. The origins of non-Poisson random errors during calibration are different for different kinds of instrumental measurements. The main sources of uncertainties for retrospective measurements conducted by surface trap techniques can be divided into two groups: errors of surface (210)Pb ((210)Po) activity measurements and uncertainties of the transfer from (210)Pb surface activity in glass objects to the average radon concentration during the object's exposure. It is shown that the total measurement error of the surface trap retrospective technique can be decreased to 35%. PMID:19822441
A posteriori error estimates for parabolic variational inequalities
Achdou, Yves; Pommier, David
2008-07-12
We study a posteriori error estimates in the energy norm for the spatial approximation of some parabolic variational inequalities; the error indicators involve the contact set. The case when the obstacle is piecewise affine is studied before the general case. We discuss the reliability and efficiency of the error indicators, as well as numerical results.
Color Estimation Error Trade-offs
Barnhöfer, Ulrich; DiCarlo, Jeffrey M.; Olding, Ben; Wandell, Brian A.
…be transformed to calibrated (human) color representations for display or print reproduction. Errors in these color rendering transformations can arise from a variety of sources, including (a) noise…
Error and tolerance estimates for the SSC CCL
Bhatia, T.S.; Garnett, R.W.; Neuschaefer, G.H.; Crandall, K.R.
1990-01-01
A new code, CCLTRACE, has been used to estimate error and tolerance limits for two possible examples of a 1284-MHz, 70-600 MeV CCL for the SSC Linac. By calculating the dynamics of the beam center as well as the beam ellipsoid, CCLTRACE can efficiently perform error studies using Monte Carlo techniques.
Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications
Garcia, Tanya
2012-10-19
Many statistical models, like measurement error models, a general class of survival models, and a mixture data model with random censoring, are semiparametric where interest lies in estimating finite-dimensional parameters ...
NASA Astrophysics Data System (ADS)
Huang, J.; Véronneau, M.; Mainville, A.
2008-10-01
The surface gravity data collected via traditional techniques such as ground-based, shipboard and airborne gravimetry describe the local gravity field precisely, but they are often biased by systematic errors. On the other hand, the spherical harmonic gravity models determined from satellite missions, in particular recent models from CHAMP and GRACE, homogeneously and accurately describe the low-degree components of the Earth's gravity field. However, they are subject to large omission errors. The surface and satellite gravity data are therefore complementary in terms of spectral composition. In this paper, we aim to assess the systematic errors of low spherical harmonic degrees in the surface gravity anomalies over North America using a GRACE gravity model. A prerequisite is the extraction of the low-degree components from the surface data to make them compatible with GRACE data. Three types of methods are tested using synthetic data: low-pass filtering, the inverse Stokes integral, and spherical harmonic analysis. The results demonstrate that the spherical harmonic analysis works best. Eighty-five per cent of the difference between the synthetic gravity anomalies generated from EGM96 and GGM02S from degrees 2 to 90 can be modelled for a region covering North America and neighbouring areas. Assuming EGM96 is developed solely from the surface gravity data with the same accuracy and GGM02S is errorless, one way to understand the 85 per cent difference is that it represents the systematic error from the region of study, while the remaining 15 per cent originates from the data outside of the region. To estimate systematic errors in the surface gravity data, Helmert gravity anomalies are generated from both surface and GRACE data on the geoid. Their differences are expanded into surface spherical harmonics. The results show that the systematic errors for degrees 2 to 90 range from about -6 to 13 mGal with an RMS value of 1.4 mGal over North America. A few significant data gaps can be identified from the resulting error map. The errors over oceans appear to be related to the sea surface topography. These systematic errors must be taken into consideration when the surface gravity data are used to validate future satellite gravity missions.
Using Doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)
2012-07-03
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Systematic errors in extracting nucleon properties from lattice QCD
Stefano Capitani; Michele Della Morte; Bastian Knippschild; Hartmut Wittig
2010-01-01
Form factors of the nucleon have been extracted from experiment with high precision. However, lattice calculations have failed so far to reproduce the observed dependence of form factors on the momentum transfer. We have embarked on a program to thoroughly investigate systematic effects in lattice calculation of the required three-point correlation functions. Here we focus on the possible contamination from higher excited states and present a method which is designed to suppress them.
NASA Astrophysics Data System (ADS)
Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.
2013-10-01
This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between two year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.
NASA Astrophysics Data System (ADS)
Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.
2014-04-01
This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between 2-year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors in the SCIAMACHY measurements are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.
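The role of the bias correction function can be illustrated in a few lines: fit candidate correction functions of increasing complexity to collocated satellite-minus-ground differences and compare the residual error. The sketch below is not the study's TCCON-based procedure; the latitude-dependent bias, noise levels and polynomial form are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(7)
# Collocated pairs: ground-based XCH4 (treated as truth) and satellite retrievals
lat = rng.uniform(-60, 60, 300)               # collocation latitudes (deg)
xch4_truth = 1780 + 0.2 * lat + rng.normal(0, 3, lat.size)   # ppb
bias_true = 10 - 0.005 * lat**2               # latitude-dependent retrieval bias (ppb)
xch4_sat = xch4_truth + bias_true + rng.normal(0, 15, lat.size)

# Candidate bias-correction functions of increasing complexity
for deg in (0, 1, 2):
    coef = np.polyfit(lat, xch4_sat - xch4_truth, deg)
    corrected = xch4_sat - np.polyval(coef, lat)
    rms = np.sqrt(np.mean((corrected - xch4_truth) ** 2))
    print(f"poly degree {deg}: residual RMS = {rms:.2f} ppb")
```

Because the synthetic bias is quadratic in latitude, the degree-2 correction performs best; the choice of functional form is exactly the sensitivity the abstract quantifies.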
An efficient method for the detection and elimination of systematic error in high-throughput screening
Bioinformatics, 2007
Makarenkov, Vladimir
…of the new method compared to other widely-used methods of data correction and hit selection in HTS. Various sources of systematic error have been discussed (Heuer et al. 2003; Makarenkov et al. 2006; Malo et al. 2006; Gagarin et al. 2006a), including systematic errors caused by ageing, reagent evaporation or cell decay…
Stress Recovery and Error Estimation for 3-D Shell Structures
NASA Technical Reports Server (NTRS)
Riggs, H. R.
2000-01-01
The stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachments: Stress recovery and error estimation for shell structures (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have approximately 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to estimate this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the error ec on the global temperature trend. In one path the entire error ec is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, which have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, which have about 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to estimate this error. We find it can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature and the other is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The diurnal-cycle error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The calibration-drift error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of these errors on the global temperature trend. In one path the entire error is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
Optimal, systematic, q-ary codes correcting all asymmetric and symmetric errors of limited magnitude
Noha Elarief; Bella Bose
2010-01-01
Systematic q-ary (q > 2) codes capable of correcting all asymmetric errors of maximum magnitude l, where l ≤ q - 2, are given. These codes are shown to be optimal. Further, simple encoding/decoding algorithms are described. The proposed code can be modified to design codes correcting all symmetric errors of maximum magnitude l, where l ≤ (q-2)/2.
Superconvergent lift estimates through adjoint error analysis
Pierce, Niles A.
…flux through a turbomachine, or total heat flux into a turbine blade. Adjoint error analysis shows that the order of accuracy of such integral estimates is usually doubled compared to the accuracy of the flow solution on which the estimate is based. The theory is presented for both linear and nonlinear problems.
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.
Period Error Estimation for the Kepler Eclipsing Binary Catalog
NASA Astrophysics Data System (ADS)
Mighell, Kenneth J.; Plavchan, Peter
2013-06-01
The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
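The quoted period error model is simple enough to implement directly; the function below encodes the fitted relation and the long-period floor exactly as stated in the abstract (the function name itself is ours).

```python
import numpy as np

def kebc_period_error(period_days):
    """Period error model from the PEC algorithm for KEBC eclipsing binaries:
    log10(sigma_P) ~ -5.8908 + 1.4425 * (1 + log10(P)) for P < 62.5 d,
    with a floor of ~0.0144 d for longer periods."""
    p = np.asarray(period_days, dtype=float)
    sigma = 10.0 ** (-5.8908 + 1.4425 * (1.0 + np.log10(p)))
    return np.where(p >= 62.5, 0.0144, sigma)

for p in (0.5, 5.0, 50.0, 100.0):
    print(f"P = {p:6.1f} d  ->  sigma_P ~ {kebc_period_error(p):.6f} d")
```

Note that the two branches join smoothly: evaluating the fitted relation at P = 62.5 d gives roughly 0.014 d, consistent with the stated floor.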
PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG
Mighell, Kenneth J.; Plavchan, Peter
2013-06-15
The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements.
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
Estimating linear functionals of the error distribution
Wefelmeyer, Wolfgang
…e.g. h(z) = z². Estimation of the residuals is particularly simple if the regression function r is known; extensions to explosive autoregressive models are in Koul and Leventhal (1989). Here we are concerned with expansions of the form

n^{-1/2} Σ_{i=1}^{n} h(ε̂_i) = n^{-1/2} Σ_{i=1}^{n} (h(ε_i) - E[h(ε)]) + o_p(1).   (1.1)

Simple sufficient conditions would…
Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis
ERIC Educational Resources Information Center
Sass, Daniel A.
2010-01-01
Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…
Estimation in Semiparametric Transition Measurement Error Models for Longitudinal Data
Pan, Wenqin; Zeng, Donglin; Lin, Xihong
2013-01-01
We consider semiparametric transition measurement error models for longitudinal data, where one of the covariates is measured with error in transition models, and no distributional assumption is made for the underlying unobserved covariate. An estimating equation approach based on the pseudo conditional score method is proposed. We show the resulting estimators of the regression coefficients are consistent and asymptotically normal. We also discuss the issue of efficiency loss. Simulation studies are conducted to examine the finite-sample performance of our estimators. The longitudinal AIDS Costs and Services Utilization Survey data are analyzed for illustration. PMID:19173696
Systematic errors in extracting nucleon properties from lattice QCD
Capitani, Stefano; Knippschild, Bastian; Wittig, Hartmut
2010-01-01
Form factors of the nucleon have been extracted from experiment with high precision. However, lattice calculations have failed so far to reproduce the observed dependence of form factors on the momentum transfer. We have embarked on a program to thoroughly investigate systematic effects in lattice calculation of the required three-point correlation functions. Here we focus on the possible contamination from higher excited states and present a method which is designed to suppress them. Its effectiveness is tested for several baryonic matrix elements, different lattice sizes and pion masses.
Systematic errors in extracting nucleon properties from lattice QCD
Stefano Capitani; Michele Della Morte; Bastian Knippschild; Hartmut Wittig
2010-11-05
Form factors of the nucleon have been extracted from experiment with high precision. However, lattice calculations have failed so far to reproduce the observed dependence of form factors on the momentum transfer. We have embarked on a program to thoroughly investigate systematic effects in lattice calculation of the required three-point correlation functions. Here we focus on the possible contamination from higher excited states and present a method which is designed to suppress them. Its effectiveness is tested for several baryonic matrix elements, different lattice sizes and pion masses.
Error Estimates for Raymond Effect Dating Using Adjoint Sensitivities
NASA Astrophysics Data System (ADS)
Hindmarsh, R. C. A.; Sergienko, O. V.
2012-04-01
Raymond Effect Dating is a tool for determining the age of the last disturbance of an ice rise. It works by timing the rate of creation of folds (Raymond Arches) that evolve underneath an ice divide; the rate is determined by the ice rheology, accumulation rate and thinning rate. The size and elevation of the fold give the age. In general, the thinning rate and formation time are inferred parameters from the observed fold architecture; sometimes rheology is included as an inversion parameter. Solutions are non-unique. So far, error estimation of the formation age and other inversion parameters has been restricted to sensitivity studies. We carry out a more formal study, quantifying the error in the inversion parameters both in relation to the data errors and to errors in assumed parameters such as the rheological index. Scaling analysis shows that velocity fields scale with ice thickness and thence with vertical velocity. We can therefore decouple the flow problem, a non-linear Stokes flow problem, from the evolution problem, deriving a simple first-order 1D equation for the evolution of arch amplitude with depth. We carry out an adjoint study of the evolution of Raymond bumps, using the Lagrange multiplier to examine the sensitivity of the solution to data errors. Errors scale with the advection time-scale, but are also dependent on the progress of the creation of Raymond Arches. We quantify the errors, with particular attention to the sensitivity of the error estimates to observation depth.
Mean square error optimal weighting for multitaper cepstrum estimation
NASA Astrophysics Data System (ADS)
Hansson-Sandsten, Maria
2013-12-01
The aim of this paper is to find a multitaper-based spectrum estimator that is mean square error optimal for cepstrum coefficient estimation. The multitaper spectrum estimator consists of windowed periodograms which are weighted together, where the weights are optimized using the Taylor expansion of the log-spectrum variance and a novel approximation for the log-spectrum bias. A thorough discussion and evaluation are also made for different bias approximations for the log-spectrum of multitaper estimators. The optimized weights are applied together with the sinusoidal tapers as the multitaper estimator. Comparisons of the cepstrum mean square error are made of some known multitaper methods as well as with the parametric autoregressive estimator for simulated speech signals.
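The basic pipeline, sine-tapered periodograms combined with weights and then a cepstrum from the inverse transform of the log-spectrum, can be sketched as follows. Uniform weights are used as a placeholder where the paper derives mean-square-error-optimal ones, and the signal and taper count are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
N, K = 512, 6
n = np.arange(N)
x = np.sin(2 * np.pi * 0.1 * n) + 0.5 * rng.normal(size=N)   # toy signal

# Sinusoidal (sine) tapers: h_k(n) = sqrt(2/(N+1)) * sin(pi*k*(n+1)/(N+1))
tapers = np.sqrt(2.0 / (N + 1)) * np.sin(
    np.pi * np.outer(np.arange(1, K + 1), n + 1) / (N + 1))
Pk = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2   # K windowed periodograms

weights = np.full(K, 1.0 / K)            # placeholder for the MSE-optimal weights
S_mt = weights @ Pk                      # weighted multitaper spectrum estimate
cepstrum = np.fft.irfft(np.log(S_mt))    # cepstrum = inverse FFT of the log-spectrum
print("first cepstrum coefficients:", np.round(cepstrum[:5], 3))
```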
Xiong, Kun; Jiang, Jie
2015-01-01
Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920
SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors
Kathuria, K; Siebers, J [University of Virginia Health System, Charlottesville, VA (United States)
2014-06-01
Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (same one leaf offset per segment per beam) and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to the PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf pairs) being simultaneously offset in many (5) of the control points (total 10-18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2-3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery. These two sources of error are "visually complementary" and uncorrelated (albeit not additive in the final error), and one can easily incorporate error resulting from machine delivery in an error model based purely on tumor motion.
foraminifera, which are subject to potentially large systematic errors.
Fields, Stan
…with the estimated trends in the production of radiocarbon by cosmic rays, a comparison that seems to demand… The low-14C periods coincide with the periods when atmospheric radiocarbon decreased and atmospheric CO2… is circumstantial. The results help to reconcile the reconstructed trends in atmospheric radiocarbon…
SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.
QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.
2007-08-25
Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) independent slide pitch test by use of a non-tilted reference beam, (3) non-tilted reference test combined with a tilted sample, (4) penta-prism scanning mode without reference beam correction, (5) non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.
The nature of the systematic radiometric error in the MGS TES spectra
NASA Astrophysics Data System (ADS)
Pankine, Alexey A.
2015-05-01
Several systematic radiometric errors are known to affect the data collected by the Thermal Emission Spectrometer (TES) onboard Mars Global Surveyor (MGS). The time-varying wavenumber dependent error that significantly increased in magnitude as the MGS mission progressed is discussed in detail. This error mostly affects spectra of cold (nighttime and polar caps) surfaces and atmospheric spectra in limb viewing geometry. It is proposed here that the source of the radiometric error is a periodic sampling error of the TES interferograms. A simple model of the error is developed that allows predicting its spectral shape for any viewing geometry based on the observed uncalibrated spectrum. Comparison of the radiometric errors observed in the TES spaceviews and those predicted by the model shows an excellent agreement. Spectral shapes of the errors for nadir and limb spectra are simulated based on representative TES spectra. In nighttime and limb spectra, and in spectra of cold polar regions, these radiometric errors can result in an error of ±3-5 K in the retrieved atmospheric and surface temperatures, and significant errors in retrieved opacities of atmospheric aerosols. The model of the TES radiometric error presented here can be used to improve the accuracy of the TES retrievals and increase scientific return from the MGS mission.
Geodynamo model and error parameter estimation using geomagnetic data assimilation
NASA Astrophysics Data System (ADS)
Tangborn, Andrew; Kuang, Weijia
2015-01-01
We have developed a new geomagnetic data assimilation approach which uses the 'minimum variance' estimate for the analysis state, and which models both the forecast (or model output) and observation errors using an empirical approach and parameter tuning. This system is used in a series of assimilation experiments using Gauss coefficients (hereafter referred to as observational data) from the GUFM1 and CM4 field models for the years 1590-1990. We show that this assimilation system could be used to improve our knowledge of model parameters, model errors and the dynamical consistency of observation errors, by comparing forecasts of the magnetic field with the observations every 20 yr. Statistics of differences between observation and forecast (O - F) are used to determine how forecast accuracy depends on the Rayleigh number, the forecast error correlation length scale and an observation error scale factor. Experiments have been carried out which demonstrate that a Rayleigh number of 30 times the critical Rayleigh number produces better geomagnetic forecasts than lower values, with an Ekman number of E = 1.25 × 10^-6, which produces a modified magnetic Reynolds number within the parameter domain with an 'Earth-like' geodynamo. The optimal forecast error correlation length scale is found to be around 90 per cent of the thickness of the outer core, indicating a significant bias in the forecasts. Geomagnetic forecasts are also found to be highly sensitive to estimates of modelled observation errors: errors that are too small do not lead to the gradual reduction in forecast error with time that is generally expected in a data assimilation system, while observation errors that are too large lead to model divergence. Finally, we show that assimilation of L ≤ 3 (large-scale) Gauss coefficients can help to improve forecasts of the L > 5 (smaller-scale) coefficients, and that these improvements are the result of corrections to the velocity field in the geodynamo model.
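The central diagnostic, comparing the statistics of observation-minus-forecast differences against the assumed error budget, can be shown with a scalar toy model: for uncorrelated errors, Var(O - F) = σ_o² + σ_f², so the O - F spread constrains the tuning of modelled error parameters. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(9)
cycles = 200
truth = np.cumsum(rng.normal(0, 0.1, cycles))   # slowly varying true coefficient
sigma_f, sigma_o = 0.5, 0.2                     # actual forecast / observation errors
forecast = truth + rng.normal(0, sigma_f, cycles)
obs = truth + rng.normal(0, sigma_o, cycles)

omf = obs - forecast                            # O - F differences
print(f"O-F mean (bias): {omf.mean():+.3f}")
# For uncorrelated errors, Var(O-F) = sigma_o^2 + sigma_f^2, so the O-F
# statistics constrain the combined error budget used to tune the system.
print(f"sqrt(Var(O-F)) = {omf.std(ddof=1):.3f}, "
      f"expected {np.hypot(sigma_o, sigma_f):.3f}")
```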
Verification of unfold error estimates in the unfold operator code
Fehl, D.L.; Biggs, F. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)]
1997-01-01
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
Verification of unfold error estimates in the unfold operator code
NASA Astrophysics Data System (ADS)
Fehl, D. L.; Biggs, F.
1997-01-01
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
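The verification procedure itself is generic and can be sketched on a linear surrogate for the unfold: compare the built-in least-squares error matrix with the spread of unfolds over many data sets perturbed by prescribed Gaussian deviates. The response matrix and spectrum below are invented, and UFO's actual algorithm is not this simple least-squares inversion.

```python
import numpy as np

rng = np.random.default_rng(10)
n_chan, n_bins = 12, 5
R = np.abs(rng.normal(1.0, 0.4, (n_chan, n_bins)))   # overlapping response functions
spectrum_true = np.exp(-np.arange(n_bins) / 2.0)     # toy source spectrum
data0 = R @ spectrum_true
sigma = 0.05 * data0                                 # 5% (1 s.d.) data imprecision

W = np.diag(1.0 / sigma**2)
N = R.T @ W @ R
cov_builtin = np.linalg.inv(N)                       # "error matrix" estimate

# Monte Carlo check: unfold 100 data sets with prescribed Gaussian deviates
unfolds = []
for _ in range(100):
    d = data0 + rng.normal(0.0, sigma)
    unfolds.append(np.linalg.solve(N, R.T @ W @ d))
cov_mc = np.cov(np.array(unfolds), rowvar=False)

print("built-in sigmas:", np.sqrt(np.diag(cov_builtin)))
print("MC       sigmas:", np.sqrt(np.diag(cov_mc)))
```

With only 100 replicates, the two sets of sigmas agree to within the sampling resolution, mirroring the comparison reported in the abstract.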
Error estimates for CCMP ocean surface wind data sets
NASA Astrophysics Data System (ADS)
Atlas, R. M.; Hoffman, R. N.; Ardizzone, J.; Leidner, S.; Jusem, J.; Smith, D. K.; Gombos, D.
2011-12-01
The cross-calibrated, multi-platform (CCMP) ocean surface wind data sets are now available at the Physical Oceanography Distributed Active Archive Center from July 1987 through December 2010. These data support wide-ranging air-sea research and applications. The main Level 3.0 data set has global ocean coverage (within 78S-78N) with 25-kilometer resolution every 6 hours. An enhanced variational analysis method (VAM) quality controls and optimally combines multiple input data sources to create the Level 3.0 data set. Data included are all available RSS DISCOVER wind observations, in situ buoys and ships, and ECMWF analyses. The VAM is set up to use the ECMWF analyses to fill in areas of no data and to provide an initial estimate of wind direction. As described in an article in the Feb. 2011 BAMS, when compared to conventional analyses and reanalyses, the CCMP winds are significantly different in some synoptic cases, result in different storm statistics, and provide enhanced high-spatial resolution time averages of ocean surface wind. We plan enhancements to produce estimated uncertainties for the CCMP data. We will apply the method of Desroziers et al. for the diagnosis of error statistics in observation space to the VAM O-B, O-A, and B-A increments. To isolate particular error statistics we will stratify the results by which individual instruments were used to create the increments. Then we will use cross-validation studies to estimate other error statistics. For example, comparisons in regions of overlap for VAM analyses based on SSMI and QuikSCAT separately and together will enable estimating the VAM directional error when using SSMI alone. Level 3.0 error estimates will enable construction of error estimates for the time averaged data sets.
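The Desroziers et al. diagnostics referred to above rest on two identities that hold for an optimal analysis: E[(O-A)(O-B)] ≈ R and E[(A-B)(O-B)] ≈ HBHᵀ. A scalar toy version, with all error magnitudes invented, recovers the prescribed observation and background errors:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 50000
sig_b, sig_o = 1.5, 0.7                       # true background / observation errors
truth = rng.normal(0, 3, n)
xb = truth + rng.normal(0, sig_b, n)          # background (B) in observation space
yo = truth + rng.normal(0, sig_o, n)          # observations (O)

# Optimal scalar analysis: xa = xb + k*(yo - xb), k = sig_b^2/(sig_b^2 + sig_o^2)
k = sig_b**2 / (sig_b**2 + sig_o**2)
xa = xb + k * (yo - xb)

d_ob, d_oa, d_ab = yo - xb, yo - xa, xa - xb  # O-B, O-A, A-B increments
# Desroziers diagnostics: E[d_oa*d_ob] ~ sig_o^2 and E[d_ab*d_ob] ~ sig_b^2
print(f"diagnosed sig_o = {np.sqrt(np.mean(d_oa * d_ob)):.3f} (true {sig_o})")
print(f"diagnosed sig_b = {np.sqrt(np.mean(d_ab * d_ob)):.3f} (true {sig_b})")
```

Note the identities assume the analysis weights are already near-optimal, which is why the planned cross-validation against withheld instruments is a useful complement.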
Drug treatment of inborn errors of metabolism: a systematic review
Alfadhel, Majid; Al-Thihli, Khalid; Moubayed, Hiba; Eyaid, Wafaa; Al-Jeraisy, Majed
2013-01-01
Background The treatment of inborn errors of metabolism (IEM) has seen significant advances over the last decade. Many medicines have been developed and the survival rates of some patients with IEM have improved. Dosages of drugs used for the treatment of various IEM can be obtained from a range of sources but tend to vary among these sources. Moreover, the published dosages are not usually supported by the level of existing evidence, and they are commonly based on personal experience. Methods A literature search was conducted to identify key material published in English in relation to the dosages of medicines used for specific IEM. Textbooks, peer reviewed articles, papers and other journal items were identified. The PubMed and Embase databases were searched for material published since 1947 and 1974, respectively. The medications found and their respective dosages were graded according to their level of evidence, using the grading system of the Oxford Centre for Evidence-Based Medicine. Results 83 medicines used in various IEM were identified. The dosages of 17 medications (21%) had grade 1 level of evidence, 61 (74%) had grade 4, two medications had grade 2 and grade 3 evidence respectively, and three had grade 5. Conclusions To the best of our knowledge, this is the first review to address this matter and the authors hope that it will serve as a quickly accessible reference for medications used in this important clinical field. PMID:23532493
NASA Astrophysics Data System (ADS)
Gutierrez, Mauricio; Brown, Kenneth
2015-03-01
Classical simulations of noisy stabilizer circuits are often used to estimate the threshold of a quantum error-correcting code (QECC). It is common to model the noise as a depolarizing Pauli channel. However, it is not clear how sensitive a code's threshold is to the noise model, and whether or not a depolarizing channel is a good approximation for realistic errors. We have shown that, at the physical single-qubit level, efficient and more accurate approximations can be obtained. We now examine the feasibility of employing these approximations to obtain better estimates of a QECC's threshold. We calculate the level-1 pseudo-threshold for the Steane [[7,1,3]] code.
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation of the weighted least-squares problem was presented in earlier work. This formulation allows one to employ errors-in-variables models that completely describe the covariance matrices of the observables. However, an important question, namely what accuracy level can be achieved, has yet to be satisfactorily answered by this traditional formulation. One of the main reasons is the use of incorrect stochastic models in the adjustment, which motivates improving the stochastic models of the measurement noise. The determination of the stochastic model of the observables in a combined adjustment of heterogeneous height types is therefore the main focus of this paper. First, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least-squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Second, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables model as a priori covariance matrices, and the second analyzes the biases of the variance components and proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in a combined adjustment for calibrating geoid error models. PMID:26306296
Estimating errors in IceBridge freeboard at ICESat Scales
NASA Astrophysics Data System (ADS)
Prado, D. W.; Xie, H.; Ackley, S. F.; Wang, X.
2014-12-01
The Airborne Topographic Mapper (ATM) system flown on NASA Operation IceBridge allows for estimation of sea ice thickness from surface elevations in the Bellingshausen-Amundsen Seas. The estimation of total freeboard depends on the accuracy of local sea level estimates and on the footprint size. We used the high density of ATM L1B (~1 m footprint) observations at varying spatial resolutions to assess the errors associated with averaging over larger footprints and with the deviation of local sea level from the WGS-84 geoid over longer segment lengths. The ATM data sets allow for a comparison between IceBridge (2009-2014) and ICESat (2003-2009) derived freeboards by using the ATM L2 (~70 m footprint) data, similar to the ICESat footprint. While the average freeboard estimates for the L2 data in 2009 underestimate total freeboard by only 5 cm at 5 km segment lengths, the error increases to 49 cm at the 50 km segment lengths typical of ICESat analyses. Since the error in freeboard estimation greatly increases at the segment lengths used for ICESat analyses, some caution may be required in comparing ICESat thickness estimates with later IceBridge estimates over the same region.
NASA Astrophysics Data System (ADS)
Lamoreaux, Steve; Wong, Douglas
2015-06-01
The basic theory of systematic errors induced by temporal mechanical fluctuations in Casimir force experiments is developed, and applications of this theory to several experiments are reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness, for which an exact result requires an electromagnetic mode analysis, time-dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero-point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data are presented.
Wu Yan; Shannon, Mark A. [Department of Mechanical and Industrial Engineering, University of Illinois at Urbana-Champaign, 1206 West Green Street, Urbana, Illinois 61801 (United States)
2006-04-15
The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in the scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and that it is common to all tip-sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus the reciprocal of the ac driving amplitude. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM, and CPD measurement results for two systems, platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon, are discussed.
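Under the model stated above, the measured CPD is linear in the reciprocal of the ac driving amplitude, so the intercept of a straight-line fit recovers the true CPD. A minimal sketch of that extrapolation on synthetic data (the coefficient values here are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed model: CPD_meas = CPD_true + k / V_ac + noise, so the intercept
# of CPD_meas versus 1/V_ac recovers the true contact potential difference.
cpd_true, k = 0.45, 0.12           # volts; hypothetical values
v_ac = np.linspace(0.5, 5.0, 10)   # ac driving amplitudes (V)
cpd_meas = cpd_true + k / v_ac + 0.002 * rng.standard_normal(v_ac.size)

slope, intercept = np.polyfit(1.0 / v_ac, cpd_meas, 1)
print(f"extrapolated true CPD = {intercept:.3f} V (model value {cpd_true} V)")
```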
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Non-Systematic Errors of Monthly Oceanic Rainfall Derived From TMI
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Chang, Alfred T.-C.
2000-01-01
A major objective of the Tropical Rainfall Measuring Mission (TRMM) is to produce a multi-year time series of monthly rainfall over 5° latitude by 5° longitude boxes with an uncertainty of 1 mm/day for low rain rates and 10% for high rain rates. Based on some simple assumptions about the error structure, we compute the non-systematic errors of monthly oceanic rainfall over the same space/time domain derived from data taken by the Special Sensor Microwave Imager (SSM/I) on board the Defense Meteorological Satellite Program (DMSP) satellites and the TRMM Microwave Imager (TMI). The mean rain rates over a two-year period (1998-1999) are calculated to be 3.0, 2.85 and 2.94 mm/day for the SSM/I onboard DMSP F-13, F-14 and for TMI, respectively. Assuming that the non-systematic errors for each sensor are independent, the errors are calculated to be 22.2%, 22.4% and 19.7% for F-13, F-14 and TMI, respectively. The non-systematic error for the TMI is smaller than that for either F-13 or F-14 SSM/I at low rain rates but is comparable at rain rates higher than about 5 mm/day. The TRMM objective of 1 mm/day for non-systematic error is met by TMI for rain rates up to 5-6 mm/day. For higher rain rates, the non-systematic error is in the 15% range. The goal of a 10% error for high rain rates may be realized by a combination of sensor measurements from multiple satellites, such as that advocated by the Global Precipitation Mission (GPM).
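The closing sentence invokes the standard independent-error argument: combining estimates from several sensors with independent, zero-mean random errors shrinks the combined fractional error. A back-of-the-envelope sketch using the percentages quoted above (an illustration of the principle, not the mission's actual error budget):

```python
import numpy as np

# Fractional non-systematic errors quoted above for the three sensors.
frac_err = np.array([0.222, 0.224, 0.197])   # F-13, F-14, TMI

# For independent zero-mean errors, an inverse-variance-weighted average
# of the three rain-rate estimates has fractional error:
combined = 1.0 / np.sqrt(np.sum(1.0 / frac_err**2))
print(f"combined fractional error ~ {combined:.1%}")   # ~12%, approaching the 10% goal
```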
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
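The key construction in the abstract, replacing a p-dimensional health parameter vector with an m-dimensional tuner when only m sensors are available, can be illustrated with a static linear-Gaussian toy problem. The sketch below (all matrices synthetic; this is not NASA's tool or the paper's exact selection routine) compares the theoretical mean squared error of a conventional subset tuner against a tuner spanning the column space of P H^T, which in this simplified setting attains the accuracy of estimating all health parameters directly:

```python
import numpy as np

rng = np.random.default_rng(2)
p, m = 8, 3                              # health parameters vs. sensors (p > m)

P = np.diag(rng.uniform(0.5, 2.0, p))    # prior covariance of health parameters
H = rng.standard_normal((m, p))          # sensor sensitivity matrix
R = 0.05 * np.eye(m)                     # measurement noise covariance
Cyy = H @ P @ H.T + R                    # measurement covariance
Chy = P @ H.T                            # cross-covariance of h and y

def mse(V):
    """Theoretical MSE of the best linear estimate of h constrained to
    the tuner subspace, h_hat = V @ (A @ y), for a p-by-m basis V."""
    A = np.linalg.solve(V.T @ V, V.T @ Chy) @ np.linalg.inv(Cyy)
    M = V @ A
    return np.trace(P) - 2 * np.trace(M @ Chy.T) + np.trace(M @ Cyy @ M.T)

V_subset = np.eye(p)[:, :m]              # conventional: estimate a parameter subset
V_opt = Chy                              # spans the range of the full LMMSE estimator

full_lmmse = np.trace(P - Chy @ np.linalg.inv(Cyy) @ Chy.T)
print(f"subset tuner MSE : {mse(V_subset):.4f}")
print(f"P H^T tuner MSE  : {mse(V_opt):.4f}")
print(f"all-parameter MSE: {full_lmmse:.4f}")   # matches the P H^T tuner
```

The printed values illustrate the abstract's finding: a well-chosen reduced-order tuner can approximate the accuracy of estimating all health parameters directly, while a naive subset generally cannot.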
Error propagation and scaling for tropical forest biomass estimates.
Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando
2004-01-01
The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
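Of the four error sources listed, the first two lend themselves to a simple Monte Carlo treatment. The sketch below propagates diameter measurement error and allometric residual error into a plot-level AGB total; the allometric coefficients and error magnitudes are illustrative only, not the values fitted in the paper, and the model-choice error is simplified here to random residual scatter:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trees, n_mc = 200, 5000

# Illustrative stand: trunk diameters (cm), log-normally distributed.
dbh = rng.lognormal(mean=3.0, sigma=0.5, size=n_trees)

# Illustrative allometry: ln(AGB_kg) = a + b*ln(D), with residual sd s.
a, b, s_allom = -2.0, 2.4, 0.3
sd_meas = 0.05  # ~5% diameter measurement error

agb_totals = np.empty(n_mc)
for i in range(n_mc):
    d = dbh * (1 + sd_meas * rng.standard_normal(n_trees))            # (i) measurement
    ln_agb = a + b * np.log(d) + s_allom * rng.standard_normal(n_trees)  # (ii) allometry
    agb_totals[i] = np.exp(ln_agb).sum() / 1000.0                     # plot total, Mg

print(f"plot AGB = {agb_totals.mean():.1f} +/- {agb_totals.std():.1f} Mg")
```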
Bayes Error Estimation Using Parzen and k-NN Procedures
Keinosuke Fukunaga; Donald M. Hummels
1987-01-01
The use of k-nearest neighbor (k-NN) and Parzen density estimates to obtain estimates of the Bayes error is investigated under limited design set conditions. By drawing analogies between the k-NN and Parzen procedures, new procedures are suggested, and experimental results are given which indicate that these procedures yield a significant improvement over the conventional k-NN and Parzen procedures.
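To make the object of study concrete: the Bayes error is the expectation of the smaller class-posterior probability, and a k-NN vote gives a crude plug-in estimate of that posterior. A minimal sketch on a 1-D two-Gaussian problem with a known Bayes error (an illustration of the idea, not the bias-corrected procedures of the paper):

```python
import math
import numpy as np

rng = np.random.default_rng(4)
n, k = 4000, 25

# Two equiprobable unit-variance Gaussian classes centred at -1 and +1;
# the true Bayes error is Phi(-1) ~ 0.159.
labels = rng.integers(0, 2, n)
x = rng.standard_normal(n) + (2 * labels - 1)
order = np.argsort(x)
x, labels = x[order], labels[order]

# k-NN posterior at each sample (data are sorted, so in 1-D the k nearest
# neighbours are approximately a window of k consecutive points).
risks = []
for i in range(n):
    lo = min(max(i - k // 2, 0), n - k)
    p1 = labels[lo:lo + k].mean()        # estimated P(class 1 | x)
    risks.append(min(p1, 1.0 - p1))      # plug-in conditional Bayes risk

bayes_true = 0.5 * (1 + math.erf(-1 / math.sqrt(2)))
print(f"k-NN plug-in estimate ~ {np.mean(risks):.3f}")
print(f"true Bayes error      = {bayes_true:.3f}")
```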
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2015-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
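The dominance of lateral and elevational errors reported above can be reproduced with a small Monte Carlo experiment: for a long core aimed at the centre of a spherical tumour, axial error mostly slides the core along its own axis, so the hit probability is governed by the transverse miss distance. A sketch under those simplifying assumptions (spherical tumour, hypothetical error magnitudes, and ignoring the 50% core-involvement criterion):

```python
import numpy as np

rng = np.random.default_rng(5)
n_mc = 200_000

# Spherical tumour of volume 1.9 cm^3 -> radius in mm.
r = (3 * 1.9e3 / (4 * np.pi)) ** (1 / 3)      # ~7.7 mm

# Anisotropic Gaussian targeting errors (mm); illustrative magnitudes.
sd_lateral, sd_elev, sd_axial = 3.5, 3.5, 1.0

dx = sd_lateral * rng.standard_normal(n_mc)   # lateral miss
dy = sd_elev * rng.standard_normal(n_mc)      # elevational miss
# Axial error is ignored here: it shifts a long core along its own axis.

p_hit = np.mean(np.hypot(dx, dy) < r)
print(f"single-core tumour sampling probability ~ {p_hit:.2f}")  # below 0.95
```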
Concise Formulas for the Standard Errors of Component Loading Estimates.
ERIC Educational Resources Information Center
Ogasawara, Haruhiko
2002-01-01
Derived formulas for the asymptotic standard errors of component loading estimates to cover the cases of principal component analysis for unstandardized and standardized variables with orthogonal and oblique rotations. Used the formulas with a real correlation matrix of 355 subjects who took 12 psychological tests. (SLD)
MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS
Hartmann, Ralf
Abstract: Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients. ... in the field of uncertainty quantification, to estimate the error in the computed quantities. In recent years ... efficient computation of single target quantities. The current approaches are based on computing ...
ESTIMATING THE NUMBER OF UNDETECTED ERRORS: BAYESIAN MODEL SELECTION
Basu, Sanjib
ESTIMATING THE NUMBER OF UNDETECTED ERRORS: BAYESIAN MODEL SELECTION. Sanjib Basu and Nader Ebrahimi. ... in the requirements and design stages of the development process. To improve the reliability of a software ... and Basu (1997) developed different Bayesian models. These different Bayesian models yield different ...
Transient L1 error estimates for well-balanced schemes on non-resonant scalar balance laws
Debora ... (sezione di Roma), Via dei Taurini, 19 - 00185 Rome, Italy. Abstract: The ability of Well-Balanced (WB) schemes to capture very accurately steady-state regimes of non-resonant hyperbolic systems of balance laws ...
Modeling Radar Rainfall Estimation Uncertainties: Random Error Model
AghaKouchak, Amir
Modeling Radar Rainfall Estimation Uncertainties: Random Error Model. A. AghaKouchak, E. Habib, and A. Bárdossy. Abstract: Precipitation is a major input in hydrological models. Radar rainfall data, compared with rain gauge measurements, provide higher spatial and temporal resolutions. However, radar data ...
Lower Bounds on Estimator Error and the Threshold Effect
Intrator, Nathan
... parameter estimation. The Cramér-Rao bound is simple to compute and approximates the actual error for high signal-to-noise ratio conditions, but the bounds discussed are significantly tighter than the Cramér-Rao bound ... (contents include: The Threshold Effect; The Cramér-Rao Bound).
Error estimates for orthogonal matching pursuit and random dictionaries
Wojtaszczyk, Przemyslaw
Error estimates for orthogonal matching pursuit and random dictionaries. Pawel Bechler (Warsaw). ... for random dictionaries. We concentrate on dictionaries satisfying the Restricted Isometry Property ... with overwhelming probability for random dictionaries used in compressed sensing. For these dictionaries we obtain ...
Ridge Regression Estimation Approach to Measurement Error Model
Shalabh
... the estimators for multicollinear data is an important problem in the literature. Several approaches have been ... coefficient becomes biased as well as inconsistent in the presence of measurement errors in the data ... is violated, the problem of multicollinearity enters into the data and it inflates the variance of ordinary ...
Error estimation and adaptive mesh refinement for aerodynamic flows
Hartmann, Ralf
Error estimation and adaptive mesh refinement for aerodynamic flows. Ralf Hartmann, Joachim Held. Goal-oriented mesh refinement for single and multiple aerodynamic force coefficients, as well as residual-based mesh refinement, applied to various three-dimensional laminar and turbulent aerodynamic test cases defined ...
Estimation of Discretization Errors Using the Method of Nearby Problems
Roy, Chris
Notation: ~ denotes the numerical solution; ^ denotes the estimated exact solution to the differential equation. I. Introduction. Sources of error ... which is recommended for second-order differential equations. This approach relies on the generation of ... equation, as well as a form of Burgers's equation with a nonlinear viscosity variation.
Robust estimation in the linear model with asymmetric error distributions
J. R. Collins; J. N. Sheahan; Z. Zheng
1986-01-01
In the linear model X_{n×1} = C_{n×p} θ_{p×1} + E_{n×1}, Huber's theory of robust estimation of the regression vector θ_{p×1} is adapted for two models for the partially specified common distribution F of the i.i.d. components of the error vector E_{n×1}. In the first model considered, the restriction of ...
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2009-02-20
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
Error Estimates for the Approximation of the Effective Hamiltonian
Camilli, Fabio [Univ. dell'Aquila, Dip. di Matematica Pura e Applicata (Italy)], E-mail: camilli@ing.univaq.it; Capuzzo Dolcetta, Italo [Univ. di Roma 'La Sapienza', Dip. di Matematica (Italy)], E-mail: capuzzo@mat.uniroma1.it; Gomes, Diogo A. [Instituto Superior Tecnico, Departamento de Matematica (Portugal)], E-mail: dgomes@math.ist.utl.pt
2008-02-15
We study approximation schemes for the cell problem arising in homogenization of Hamilton-Jacobi equations. We prove several error estimates concerning the rate of convergence of the approximation scheme to the effective Hamiltonian, both in the optimal control setting and in the calculus of variations setting.
Pointwise Error Estimates For Relaxation Approximations to Conservation Laws
Eitan ... The relaxation approximation to nonlinear systems of conservation laws was first studied by Liu [6], who justified some nonlinear stability ... was analyzed for the nonlinear systems by Chen, Levermore and Liu [1]. Consult [12] for a bird's-eye view ...
Error estimates for universal back-projection-based photoacoustic tomography
NASA Astrophysics Data System (ADS)
Pandey, Prabodh K.; Naik, Naren; Munshi, Prabhat; Pradhan, Asima
2015-07-01
Photoacoustic tomography is a hybrid imaging modality that combines the advantages of optical and ultrasound imaging techniques to produce images with high resolution and good contrast at high penetration depths. The choice of reconstruction algorithm, as well as the experimental and computational parameters, plays a major role in governing the accuracy of a tomographic technique; error estimates under variation of these parameters are therefore of great importance. Because a photoacoustic source has finite support, the pressure signals are not band-limited, but in practice our detection system is. Hence the reconstruction from ideal, noiseless band-limited forward data (hereafter the band-limited reconstruction) is the best approximation that we have for the unknown object. In the present study, we report the error that arises in universal back-projection (UBP) based photoacoustic reconstruction for planar detection geometry due to sampling and filtering of the forward data. Computational validation of the error estimates has been carried out for synthetic phantoms. Validation with noisy forward data has also been carried out to study the effect of noise on the error estimates derived in our work. Although we derive the estimates for planar detection geometry, the derivations for spherical and cylindrical geometries follow accordingly.
Gross error detection and stage efficiency estimation in a separation process
Serth, R.W.; Srikanth, B. (Dept. of Chemical and Natural Gas Engineering); Maronga, S.J. (Dept. of Chemical and Process Engineering)
1993-10-01
Accurate process models are required for optimization and control in chemical plants and petroleum refineries. These models involve various equipment parameters, such as stage efficiencies in distillation columns, whose values must be determined by fitting the models to process data. Since the data contain random and systematic measurement errors, some of which may be large (gross errors), they must be reconciled to obtain reliable estimates of equipment parameters. The problem thus involves parameter estimation coupled with gross error detection and data reconciliation. MacDonald and Howat (1988) studied the above problem for a single-stage flash distillation process. Their analysis was based on the definition of stage efficiency due to Hausen, which has some significant disadvantages in this context, as discussed below. In addition, they considered only data sets which contained no gross errors. The purpose of this article is to extend the above work by considering alternative definitions of stage efficiency and efficiency estimation in the presence of gross errors.
Systematic errors in optical-flow velocimetry for turbulent flows and flames
Long, Marshall B.
Joseph Fielding ... 1. Introduction. The measurement of inhomogeneity should not present difficulties in turbulent flows as long as the measured scalar field exhibits ...
Analysis of systematic errors of the ASM/RXTE monitor and GT-48 ?-ray telescope
NASA Astrophysics Data System (ADS)
Fidelis, V. V.
2011-06-01
The observational data concerning variations of the light curves of supernova remnants (the Crab Nebula, Cassiopeia A, Tycho Brahe) and the pulsar Vela on a 14-day timescale that may be attributed to systematic errors of the ASM/RXTE monitor are presented. The experimental systematic errors of the GT-48 γ-ray telescope in the mono mode of operation were also determined. For this, the observational data of TeV J2032+4130 (Cyg γ-2, according to the Crimean version) were used, and the stationary nature of its γ-ray emission was confirmed by long-term observations performed with HEGRA and MAGIC. The results of this research allow us to draw the following conclusions: (1) light curves of supernova remnants averaged over long observing periods have false statistically significant flux variations, (2) the level of systematic errors is proportional to the registered flux and decreases with increasing temporal scale of averaging, (3) the light curves of sources may be modulated with a one-year period, and (4) the systematic errors of the GT-48 γ-ray telescope, combining those caused by observations in the mono mode and by data processing with the stereo algorithm, amount to 0.12 min^-1.
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
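For contrast with the single-grid MNP estimate, the Richardson extrapolation baseline mentioned above needs solutions on two systematically refined grids. A minimal sketch of that baseline on a generic second-order discretization (a composite trapezoid rule standing in for the report's PDE solvers):

```python
import numpy as np

# Composite-trapezoid estimate of the integral of sin on [0, pi] (exact = 2),
# a stand-in for any second-order (p = 2) discretization.
def trap(n):
    x = np.linspace(0.0, np.pi, n + 1)
    return np.trapz(np.sin(x), x)

f_h, f_2h, p = trap(128), trap(64), 2

# Richardson extrapolation: discretization error of the fine-grid solution.
err_est = (f_h - f_2h) / (2**p - 1)
print(f"estimated error = {err_est:.2e}, actual = {2 - f_h:.2e}")
```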
Error estimates and specification parameters for functional renormalization
Schnoerr, David; Boettcher, Igor (I.Boettcher@thphys.uni-heidelberg.de); Pawlowski, Jan M.; Wetterich, Christof (Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg, Germany; ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt, Germany)
2013-07-15
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by means of truncations depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS-BEC crossover in ultracold atoms. Within a simple truncation, the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.
Model error estimation in composite impact response prediction using hierarchical Bayes networks
NASA Astrophysics Data System (ADS)
Salas Mendez, Pablo Antonio
Predicting the failure response of complex systems often requires computational models that can capture the nonlinear response of the material and structure across multiple scales. Typically, the output response is a direct result of the complex interactions of different phenomena at different scales of the hierarchical system. Therefore, computed model errors correspond to accumulated model errors that have been propagated across several levels of the system. The objective of the current work is to identify and quantify the errors introduced by computer analytical models at different scales in the ballistic impact response simulation of a composite laminate. To that end, a Bayesian network based framework was implemented to systematically estimate the model contribution of uncertainty to the response prediction at each sub-scale of the composite problem. The developed method can be used for optimal allocation of validation resources by determining the type and number of experimental tests needed to reduce uncertainty at different subsystems levels of large engineering systems.
Hall, William L; Larkin, Gregory L; Trujillo, Mauricio J; Hinds, Jackie L; Delaney, Kathleen A
2004-10-01
To examine biases in weight estimation by Emergency Department (ED) providers and patients, a convenience sample of ED providers (faculty, residents, interns, nurses, medical students, paramedics) and patients was studied. Providers (n = 33), blinded to study hypothesis and patient data, estimated their own weight as well as the weight of 11-20 patients each. An independent sample of patients (n = 95) was used to assess biases in patients' estimation of their own weight. Data are represented as over, under, or within +/- 5 kg, the dose tolerance standard for thrombolytics. Logistic regression analysis revealed that patients are almost nine times more likely to accurately estimate their own weight than providers; yet 22% of patients were unable to estimate their own weight within 5 kg. Of all providers, paramedics were significantly worse estimators of patient weight than other providers. Providers were no better at guessing their own weight than were patients. Though there was no systematic estimate bias by weight, experience level (except paramedic), or gender for providers, those providers under 30 years of age were significantly better estimators of patient weight than older providers. Although patient gender did not create a bias in provider estimation accuracy, providers were more likely to underestimate women's weights than men's. In conclusion, patient self-estimates of weight are significantly better than estimates by providers. Inaccurate estimates by both groups could potentially contribute to medication dosing errors in the ED. PMID:15388205
Random and systematic beam modulator errors in dynamic intensity modulated radiotherapy
NASA Astrophysics Data System (ADS)
Parsai, Homayon; Cho, Paul S.; Phillips, Mark H.; Giansiracusa, Robert S.; Axen, David
2003-05-01
This paper reports on the dosimetric effects of random and systematic modulator errors in the delivery of dynamic intensity modulated beams. A sliding-window type delivery that utilizes a combination of multileaf collimators (MLCs) and backup diaphragms was examined. Gaussian functions with standard deviations ranging from 0.5 to 1.5 mm were used to simulate random positioning errors. A clinical example involving a clival meningioma was chosen, with the optic chiasm and brain stem as limiting critical structures in the vicinity of the tumour. Dose calculations for different modulator fluctuations were performed, and a quantitative analysis was carried out based on cumulative and differential dose volume histograms for the gross target volume and surrounding critical structures. The study indicated that random modulator errors have a strong tendency to reduce minimum target dose and homogeneity. Furthermore, it was shown that random perturbation of both MLCs and backup diaphragms of the order of σ = 1 mm can lead to 5% errors in the prescribed dose. In comparison, when the MLCs or backup diaphragms alone were perturbed, the system was more robust, and modulator errors of at least σ = 1.5 mm were required to cause dose discrepancies greater than 5%. For systematic perturbations, even errors of the order of ±0.5 mm were shown to result in significant dosimetric deviations.
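The mechanism is easy to demonstrate in one dimension: in a sliding-window delivery the fluence at a point is proportional to the time the leaf gap is open over it, so Gaussian leaf-position noise directly perturbs the delivered profile. A toy sketch (uniform intended fluence, hypothetical gap and noise values; not a treatment-planning calculation):

```python
import numpy as np

rng = np.random.default_rng(6)
n_x, n_ctrl, n_mc = 200, 100, 300
x = np.linspace(0, 10, n_x)        # cm across the field

# Sliding window: leading and trailing leaf positions at each control point,
# a 2 cm gap sweeping left to right (uniform intended fluence).
lead = np.linspace(2, 12, n_ctrl)
trail = lead - 2.0

def fluence(lead_pos, trail_pos):
    # Fraction of control points during which each x is exposed.
    open_ = (x[None, :] > trail_pos[:, None]) & (x[None, :] < lead_pos[:, None])
    return open_.mean(axis=0)

ideal = fluence(lead, trail)
sigma = 0.1                        # cm: random leaf-position error, sigma = 1 mm
errs = [fluence(lead + sigma * rng.standard_normal(n_ctrl),
                trail + sigma * rng.standard_normal(n_ctrl)) - ideal
        for _ in range(n_mc)]
rms = np.sqrt(np.mean(np.square(errs), axis=0))
print(f"peak RMS fluence error: {rms.max() / ideal.max():.1%} of max fluence")
```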
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.
Motion estimation performance models with application to hardware error tolerance
NASA Astrophysics Data System (ADS)
Cheong, Hye-Yeon; Ortega, Antonio
2007-01-01
The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology, has created a growing impact of hardware defects and fabrication process variability, which lead to reductions in yield rate. To address these problems, a new approach, system-level error tolerance (ET), has recently been introduced. Considering that a significant percentage of the entire chip production is discarded due to minor imperfections, this approach is based on accepting imperfect chips that introduce imperceptible/acceptable system-level degradation; this leads to increases in overall effective yield. In this paper, we investigate the impact of hardware faults on video compression performance, with a focus on the motion estimation (ME) process. More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at faults within the ME computation. We further present a model for estimating the system-level performance degradation due to such faults, which can be used for the error tolerance based decision strategy of accepting a given faulty chip. We also show how different faults and ME search algorithms compare in terms of error tolerance, and define the characteristics of search algorithms that lead to increased error tolerance. Finally, we show that different hardware architectures performing the same metric computation have different error tolerance characteristics, and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on which choice minimizes an additive metric of interest.
NASA Astrophysics Data System (ADS)
Maddox, S. J.; Efstathiou, G.; Sutherland, W. J.
1995-01-01
We present measurements of the angular two-point galaxy correlation function, ω(θ), from the APM Galaxy Survey. The performance of various estimators of ω(θ) is assessed by analyzing simulated galaxy catalogues. We use these tests, and analytic arguments, to select estimators which are least affected by large-scale gradients in the galaxy counts correlated with the survey boundaries. An error analysis of the plate-matching procedure in the APM Galaxy Survey shows that residual plate-to-plate errors do not bias our estimates of ω(θ) by more than ~1 × 10^-3. A direct comparison between our photometry and external CCD photometry of over 13,000 galaxies from the Las Campanas Deep Redshift Survey shows that the rms error in the APM plate zero points lies in the range 0.04-0.05 magnitudes, in agreement with our previous estimates. The comparison with the CCD photometry sets tight limits on any variation of the magnitude scale with right ascension. We find no evidence for any systematic errors in the survey correlated with the date of scanning and exposure. We estimate the effects on ω(θ) of atmospheric extinction and obscuration by dust in our Galaxy and conclude that these are negligible. There is no evidence for any correlations between the errors in the survey and limiting magnitude except at the faintest magnitudes of the survey, b_J > 20, where star-galaxy classification begins to break down, introducing plate-to-plate variations in the completeness of the survey. We use our best estimates of the systematic errors in the survey to calculate corrected estimates of ω(θ). Deep redshift surveys are used to determine the selection function of the APM Galaxy Survey, i.e. the probability that a galaxy at redshift z is included in the sample at a given magnitude limit. The selection function is applied in Limber's equation to compute how ω(θ) scales as a function of limiting magnitude. Our estimates of ω(θ) are in excellent agreement with the scaling relation, providing further evidence that systematic errors in the APM survey are small. We explicitly remove large-scale structure by applying filters to the APM galaxy maps and conclude that there is still strong evidence for more clustering at large scales than predicted by the standard scale-invariant Cold Dark Matter (CDM) model. We compare the APM ω(θ), and the three-dimensional power spectrum derived by inverting ω(θ), with the predictions of scale-invariant CDM models. We show that the observations require Γ = Ω₀h in the range 0.2-0.3 and are incompatible with the value Γ = 0.5 of the standard CDM model.
Use of refractive direct ophthalmoscopy for estimation of refractive error.
Sodhi, P K; Pandey, R M; Ratan, S K
2005-04-01
The purpose of this study was to determine whether direct ophthalmoscopy, a simple technique, could be used to give an approximate value of the refractive correction for a patient. This would shorten the time and lessen the effort expended during the subsequent retinoscopic examination performed to find the patient's refractive correction. The use of direct ophthalmoscopy for this purpose is especially desirable where retinoscopic examination is tedious, e.g. in uncooperative patients such as children, bed-ridden patients and mentally retarded subjects, in patients with a large central corneal opacity, and in patients having a large refractive error. The study was divided into two phases. In phase I, refractive direct ophthalmoscopy followed by classical retinoscopy was performed on 92 subjects (184 eyes) in the age group of 11-35 years. Regression analysis was used to find a regression equation relating the readings of refractive error determined by the two techniques. In phase II, the refractive correction needed for 50 other subjects in the same age group was estimated from this regression equation by inserting their respective direct ophthalmoscopy readings. These estimated values were then compared with values from classical retinoscopic examination. The refractive error determined by retinoscopy and that derived from the regression equation (incorporating direct ophthalmoscopy readings) were statistically comparable (t = 0.52, p = 0.60). The correlation coefficient (r value) between the two methods was 0.37. The direct ophthalmoscope lens reading can thus be used to give a fairly accurate estimate of the refractive error in a patient's eye by using a linear regression equation relating the two examination techniques. The magnitude of astigmatic error, however, cannot be obtained. PMID:15853857
Biases in the estimation of transfer function prediction errors
NASA Astrophysics Data System (ADS)
Telford, R. J.; Andersson, C.; Birks, H. J. B.; Juggins, S.
2004-12-01
In the quest for more precise sea-surface temperature reconstructions from microfossil assemblages, large modern training sets and new transfer function methods have been developed. Realistic estimates of the predictive power of a transfer function can only be calculated from an independent test set. If the test set is not fully independent, the error estimate will be artificially low. We show that the modern analogue technique using a similarity index (SIMMAX) and the revised analogue method (RAM), both derived from the modern analogue technique, achieve apparently lower root mean square error of prediction (RMSEP) by failing to ensure statistical independence of samples during cross validation. We also show that when cross validation is used to select the best artificial neural network or modern analogue model, the RMSEP based on cross validation is lower than that for a fully independent test set.
GPS/DR Error Estimation for Autonomous Vehicle Localization.
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-01-01
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997
Internal errors: a valid alternative for clustering estimates?
NASA Astrophysics Data System (ADS)
Arnalte-Mur, Pablo; Norberg, Peder
2014-05-01
We investigate the validity of internal methods to estimate the uncertainty of the galaxy two-point correlation function. We consider the jackknife and bootstrap methods, which are based on re-sampling sub-regions of the original data. These are computationally cheap and do not depend on the accuracy of external simulations. We test the different methods over a large range of scales using a set of 160 mock catalogues from the LasDamas set of simulations. Our results show that the standard bootstrap method significantly overestimates the true uncertainty at all scales. We try two possible generalisations of the bootstrap, but find them not to be robust. Regarding the jackknife, we find that this method provides an unbiased estimation of the error at small and intermediate scales, up to ~40 h^-1 Mpc. At larger scales, it typically overestimates the error by ~13%.
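The delete-one jackknife over sub-regions referred to above is mechanically simple. A minimal sketch on synthetic data, using a plain mean as a stand-in statistic for ω(θ) so the jackknife error can be checked against the analytic answer:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sub = 32                      # sub-regions the survey is split into
data = [rng.standard_normal(500) for _ in range(n_sub)]  # per-region samples

# Statistic of interest computed on all data (stand-in for w(theta)).
full = np.mean(np.concatenate(data))

# Delete-one jackknife: recompute the statistic omitting one region at a time.
jk = np.array([np.mean(np.concatenate(data[:i] + data[i + 1:]))
               for i in range(n_sub)])
var_jk = (n_sub - 1) / n_sub * np.sum((jk - jk.mean())**2)
print(f"jackknife error = {np.sqrt(var_jk):.4f}")
print(f"analytic error  = {1 / np.sqrt(n_sub * 500):.4f}")
```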
Stress Recovery and Error Estimation for Shell Structures
NASA Technical Reports Server (NTRS)
Yazdani, A. A.; Riggs, H. R.; Tessler, A.
2000-01-01
The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.
NASA Technical Reports Server (NTRS)
Esbensen, S. K.; Chelton, D. B.; Vickers, D.; Sun, J.
1993-01-01
The method proposed by Liu (1984) is used to estimate monthly averaged evaporation over the global oceans from 1 yr of Special Sensor Microwave Imager (SSM/I) data. Intercomparisons involving SSM/I and in situ data are made over a wide range of oceanic conditions during August 1987 and February 1988 to determine the source of errors in the evaporation estimates. The most significant spatially coherent evaporation errors are found to come from estimates of near-surface specific humidity, q. Systematic discrepancies of over 2 g/kg are found in the tropics, as well as in the middle and high latitudes. The q errors are partitioned into contributions from the parameterization of q in terms of the columnar water vapor, i.e., the Liu q/W relationship, and from the retrieval algorithm for W. The effects of W retrieval errors are found to be smaller over most of the global oceans and due primarily to the implicitly assumed vertical structures of temperature and specific humidity on which the physically based SSM/I retrievals of W are based.
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
NASA Astrophysics Data System (ADS)
La Brecque, D. J.
2006-12-01
For decades, resistivity and induced polarization (IP) measurements have been important tools for near-surface geophysical investigations. Recently, sophisticated multi-channel, multi-electrode acquisition systems have displaced older, simpler systems, allowing collection of large, complex, three-dimensional data series. Generally, these new digital acquisition systems are better than their analog ancestors at dealing with noise from external sources. However, they are prone to a number of systematic errors. Since these errors are non-random and repeatable, the field geophysicist may be blissfully unaware that while his/her field data may be very precise, they may not be particularly accurate. We have begun the second phase of a research project to improve our understanding of these types of errors. The objective of the research is not to indict any particular manufacturer's instrument but to understand the magnitude of systematic errors in typical, modern data acquisition. One important source of noise results from the tendency of these systems both to send the source current and to monitor potentials through common multiplexer circuits and along the same cable bundle. Often, the source current is transmitted at hundreds of volts and the potentials measured are a few tens of millivolts. Thus, even tiny amounts of leakage from the transmitter wires/circuits to the receiver wires/circuits can corrupt or overwhelm the data. For example, in a recent survey, we found that a number of substantial anomalies correlated better with the multi-conductor cable used than with the subsurface. Leakage errors in cables are roughly proportional to the length of the cable and the contact impedance of the electrodes, but vary dramatically with the construction and type of wire insulation. Polyvinyl chloride (PVC) insulation, the type used in most inexpensive wire and cables, is extremely noisy. Not only does PVC tend to leak current from conductor to conductor, but the leakage currents tend to have large phase shifts/time lags that mimic IP effects. A second source of substantial systematic errors is the tendency of these systems to use the same simple metal electrodes as current sources for some data and receiver points at other times. Using an electrode as a current source leaves the electrode retaining a substantial voltage (often hundreds of millivolts) that decays over time. The form of this decay voltage can be fairly complex, making it difficult to remove even with long periods of signal averaging. Finally, there are a number of other, smaller but potentially significant systematic errors, such as errors due to the limited common-mode rejection of the multi-channel receivers, and even leakage of potential from receiver to receiver when electrodes are shared between adjacent measurement channels.
NASA Technical Reports Server (NTRS)
Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.
2007-01-01
Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections indicate that the random variations in the albedo follow a log-normal distribution quite well. In addition, this distribution appears to be independent of object size over a considerable range in size. Note that this relation appears to hold for debris only, where the shapes and other properties are not primarily the result of human manufacture, but of random processes. With this information in hand, it now becomes possible to estimate the actual size distribution we are sampling from. We have identified two characteristics of the space debris population that make this process tractable and by extension have developed a methodology for performing the transformation.
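The log-normal albedo finding suggests a simple statistical demonstration: an individual size inferred from absolute magnitude with an assumed albedo carries a multiplicative error of sqrt(A_true/A_assumed), yet population statistics remain recoverable. A hedged sketch with illustrative parameter values (not the LMT calibration):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

# Illustrative true debris diameters (m) and log-normal scatter of the
# geometric albedo about the assumed value (parameters hypothetical).
d_true = rng.lognormal(mean=np.log(0.3), sigma=0.6, size=n)
albedo_assumed = 0.13
albedo_true = albedo_assumed * rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Reflected flux ~ albedo * d^2, so inverting the photometry with the
# assumed albedo multiplies each size by sqrt(albedo_true / albedo_assumed).
d_est = d_true * np.sqrt(albedo_true / albedo_assumed)

scatter = np.exp(np.std(np.log(d_est / d_true)))
print(f"per-object size uncertainty: factor ~{scatter:.2f}")
print(f"median size: true {np.median(d_true):.3f} m, "
      f"estimated {np.median(d_est):.3f} m")   # population median survives
```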
Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco
2014-01-01
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs, which is a priority in order to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Station (RS) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
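The estimator under study is the standard LMMSE filter applied to spatially correlated DCs. A compact sketch under a Gauss-Markov (exponential) correlation model with synthetic reference-station geometry (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
n_rs, n_mc = 10, 2000
d_c = 40.0          # correlation distance (km); illustrative
sigma_dc = 0.5      # true DC standard deviation (m)
sigma_n = 0.3       # measurement noise (m)

# Reference-station positions along a 100 km stretch of track.
pos = np.sort(rng.uniform(0, 100, n_rs))
dist = np.abs(pos[:, None] - pos[None, :])
C = sigma_dc**2 * np.exp(-dist / d_c)   # Gauss-Markov prior covariance
R = sigma_n**2 * np.eye(n_rs)

# LMMSE estimator for zero-mean x observed as y = x + n:  x_hat = C (C+R)^-1 y
W = C @ np.linalg.inv(C + R)

L = np.linalg.cholesky(C + 1e-9 * np.eye(n_rs))
x = L @ rng.standard_normal((n_rs, n_mc))            # correlated true DCs
y = x + sigma_n * rng.standard_normal((n_rs, n_mc))  # noisy measurements
x_hat = W @ y

print(f"raw measurement RMSE : {np.sqrt(np.mean((y - x)**2)):.3f} m")
print(f"LMMSE estimate RMSE  : {np.sqrt(np.mean((x_hat - x)**2)):.3f} m")
```

With the station spacing well below the correlation distance, the LMMSE RMSE comes out below the raw noise level, matching the abstract's observation that the gain depends on the correlation-distance to station-separation ratio.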
NASA Astrophysics Data System (ADS)
Wakisaka, Ryo; Saruwatari, Hiroshi; Shikano, Kiyohiro; Takatani, Tomoya
In this paper, we introduce a generalized minimum mean-square error short-time spectral amplitude estimator with a new prior estimation of the speech probability density function based on moment-cumulant transformation. From the objective and subjective evaluation experiments, we show the improved noise reduction performance of the proposed method.
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte Carlo methods.
Systematic error sources in a measurement of G using a cryogenic torsion pendulum
NASA Astrophysics Data System (ADS)
Cross, William Daniel
This dissertation attempts to explore and quantify systematic errors that arise in a measurement of G (the gravitational constant from Newton's Law of Gravitation) using a cryogenic torsion pendulum. It begins by exploring the techniques frequently used to measure G with a torsion pendulum, features of the particular method used at UC Irvine, and the motivations behind those features. It proceeds to describe the particular apparatus used in the UCI G measurement, and the formalism involved in a gravitational torsion pendulum experiment. It then describes and quantifies the systematic errors that have arisen, particularly those that arise from the torsion fiber and from the influence of ambient background gravitational, electrostatic, and magnetic fields. The dissertation concludes by presenting the value of G that the lab has reported.
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References: [1] The CADNA library, URL address: http://www.lip6.fr/cadna. [2] J.-M. Chesneaux, L'arithmétique stochastique et le logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. [3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. [4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
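The random-rounding idea behind Discrete Stochastic Arithmetic can be illustrated outside Fortran. The toy Python sketch below is not CADNA itself; the one-ulp perturbation scheme and the cancellation example are our assumptions, used only to show how the spread over randomly rounded runs reveals the number of exact significant digits.

```python
import numpy as np

# Toy illustration of the probabilistic principle behind CADNA: perturb
# each arithmetic result by one random ulp to mimic a random rounding
# mode, then estimate significant digits from the spread over runs.
rng = np.random.default_rng(2)

def r(x):
    """Randomly round: perturb x by +/- one unit in the last place."""
    return x + rng.choice([-1.0, 1.0]) * np.spacing(x)

def unstable(n_runs=3):
    """A cancellation-prone expression, evaluated with random rounding."""
    results = []
    for _ in range(n_runs):
        a, b = r(1.0e8 + 0.1), r(1.0e8)
        results.append(r(a - b))        # catastrophic cancellation
    return np.array(results)

res = unstable()
mean, std = res.mean(), res.std(ddof=1)
# Estimated number of exact significant decimal digits in the result
digits = np.log10(abs(mean) / std) if std > 0 else 15
print(f"mean {mean:.6g}, ~{max(digits, 0):.1f} exact significant digits")
```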
NASA Technical Reports Server (NTRS)
Larson, T. J.; Ehernberger, L. J.
1985-01-01
The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.
The effect of systematic errors on the hybridization of optical critical dimension measurements
NASA Astrophysics Data System (ADS)
Henn, Mark-Alexander; Barnes, Bryan M.; Zhang, Nien Fan; Zhou, Hui; Silver, Richard M.
2015-06-01
In hybrid metrology two or more measurements of the same measurand are combined to provide a more reliable result that ideally incorporates the individual strengths of each of the measurement methods. While these multiple measurements may come from dissimilar metrology methods such as optical critical dimension microscopy (OCD) and scanning electron microscopy (SEM), we investigated the hybridization of similar OCD methods featuring a focus-resolved simulation study of systematic errors performed at orthogonal polarizations. Specifically, errors due to line edge and line width roughness (LER, LWR) and their superposition (LEWR) are known to contribute a systematic bias with inherent correlated errors. In order to investigate the sensitivity of the measurement to LEWR, we follow a modeling approach proposed by Kato et al. who studied the effect of LEWR on extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Similar to their findings, we have observed that LEWR leads to a systematic bias in the simulated data. Since the critical dimensions (CDs) are determined by fitting the respective model data to the measurement data by minimizing the difference measure or chi square function, a proper description of the systematic bias is crucial to obtaining reliable results and to successful hybridization. In scatterometry, an analytical expression for the influence of LEWR on the measured orders can be derived, and accounting for this effect leads to a modification of the model function that not only depends on the critical dimensions but also on the magnitude of the roughness. For finite arrayed structures however, such an analytical expression cannot be derived. We demonstrate how to account for the systematic bias and that, if certain conditions are met, a significant improvement of the reliability of hybrid metrology for combining both dissimilar and similar measurement tools can be achieved.
Treatment of systematic errors II: Fusion of ultrasound and visual sensor data
Beckerman, M.; Farkas, L.A.; Johnston, S.E.
1990-06-01
In this work we present a methodology for the fusion of ultrasound and visual sensor data as acquired by a mobile robot. The objective of the methodology was the reduction of systematic errors which arise in the processing of the data in the individual sensor domains. In the initial processing of the ultrasound scan, rectilinear (Cartesian map) and polar (strings) data structures were built. In the initial processing of the CCD camera image, vertical edge segments were identified and labeled according to their connectivity. The systematic errors treated included ultrasound distortions in size, and visual ambiguities in discriminating depth discontinuities from intensity gradients generated by other details in the image. These systematic errors were first flagged by comparing the ultrasound strings and visual vertical edges to one another. The ranges, spatial orientation of the camera, and geometric information extracted from least-squares fits were then used in the fusion stage processing of the visual image. Vertical edge information was used in the subsequent fusion stage processing of the ultrasound data. By the end of this feedback-like fusion process the data structures in each sensor domain carried some information from the other domain. We had identified the vertical edges of interest, tagged them with range information, and removed the distortions from the Cartesian navigation maps. 32 refs., 11 figs.
Estimating the coverage of mental health programmes: a systematic review
De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram
2014-01-01
Background The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results We identified 15 735 unique records of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys. PMID:24760874
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
Effects of measurement error on estimating biological half-life
Caudill, S.P.; Pirkle, J.L.; Michalek, J.E. (Centers for Disease Control, Atlanta, GA (United States))
1992-10-01
Direct computation of the observed biological half-life of a toxic compound in a person can lead to an undefined estimate when subsequent concentration measurements are greater than or equal to previous measurements. The likelihood of such an occurrence depends upon the length of time between measurements and the variance (intra-subject biological and inter-sample analytical) associated with the measurements. If the compound is lipophilic the subject's percentage of body fat at the times of measurement can also affect this likelihood. We present formulas for computing a model-predicted half-life estimate and its variance; and we derive expressions for the effect of sample size, measurement error, time between measurements, and any relevant covariates on the variability in model-predicted half-life estimates. We also use statistical modeling to estimate the probability of obtaining an undefined half-life estimate and to compute the expected number of undefined half-life estimates for a sample from a study population. Finally, we illustrate our methods using data from a study of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure among 36 members of Operation Ranch Hand, the Air Force unit responsible for the aerial spraying of Agent Orange in Vietnam.
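For the two-point case, the undefined-estimate problem the abstract describes is easy to see. A minimal sketch with invented concentration values, assuming first-order elimination so that t_half = (t2 − t1) · ln(2) / ln(C1/C2):

```python
import math

# Direct (two-point) half-life computation: undefined (or negative)
# whenever the later measurement c2 is >= the earlier measurement c1,
# which measurement error can easily produce.
def observed_half_life(t1, c1, t2, c2):
    if c2 >= c1:
        return None          # measurement error made the estimate undefined
    return (t2 - t1) * math.log(2.0) / math.log(c1 / c2)

print(observed_half_life(0.0, 10.0, 5.0, 7.1))   # ~10.1 (time units of t)
print(observed_half_life(0.0, 10.0, 5.0, 10.4))  # None: c2 > c1
```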
Improved Soundings and Error Estimates using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in the minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.
Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long
2001-01-01
This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in spectral spaces whose bases are empirical orthogonal functions (EOFs). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto the EOFs of the predictand and decreases linearly with the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area-factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme renders reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
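The dependence of the optimal ensemble weights on each member's mean square error can be sketched with the classic inverse-variance rule for independent, unbiased forecasts; this is a simplification for illustration, and the memorandum's CCA/EOF machinery is not reproduced here.

```python
import numpy as np

# For independent, unbiased forecasts, the minimum-MSE linear combination
# weights each member by the inverse of its (estimated) MSE.
def optimal_weights(mse):
    w = 1.0 / np.asarray(mse, dtype=float)
    return w / w.sum()

forecasts = np.array([1.2, 0.8, 1.5])   # member forecasts (invented)
mse = np.array([0.4, 0.2, 0.9])         # estimated member MSEs (invented)
w = optimal_weights(mse)
print(w, "ensemble forecast:", w @ forecasts)
```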
Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2006-01-01
Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
Verification of unfold error estimates in the UFO code
Fehl, D.L.; Biggs, F.
1996-07-01
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation), and 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
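The Monte Carlo error estimate described here is generic and easy to sketch. In the toy below, the response matrix and spectrum are invented, and a ridge-regularized least-squares unfold stands in for UFO's algorithm, which is not reproduced.

```python
import numpy as np

# Monte Carlo unfold uncertainty: perturb the data with its assumed 5%
# Gaussian imprecision, unfold each replica, and take the spread.
rng = np.random.default_rng(3)
R = np.array([[1.0, 0.6, 0.2],     # invented overlapping response functions
              [0.3, 1.0, 0.6],
              [0.1, 0.4, 1.0]])
true_spectrum = np.array([5.0, 3.0, 1.0])
data = R @ true_spectrum

def unfold(d, lam=1e-3):
    """Ridge-regularized least-squares unfold (stand-in for UFO)."""
    A = R.T @ R + lam * np.eye(3)
    return np.linalg.solve(A, R.T @ d)

replicas = [unfold(data * (1 + 0.05 * rng.normal(size=3)))
            for _ in range(100)]
spread = np.std(replicas, axis=0, ddof=1)
print("unfolded:", unfold(data).round(3), "MC std:", spread.round(3))
```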
Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models
NASA Astrophysics Data System (ADS)
Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin
2014-05-01
Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentation of physical processes, which can be amplified by feedbacks among climate components, especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors, with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for the discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention to the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and the location of strong SST gradients. Next the presentation turns to a global perspective on the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere, with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic Bottom Water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. The results suggest that improving the simulation of regional processes may not suffice for a more successful CGCM performance, as the effects of remote biases may override them. Therefore, efforts to reduce CGCM errors cannot be narrowly focused on particular regions.
NASA Astrophysics Data System (ADS)
Xie, J.; Zhu, J.; Yan, C.
2006-07-01
The Array for Real-time Geostrophic Oceanography (ARGO) project creates a unique opportunity to estimate the absolute velocity at mid-depths of the global oceans. However, the estimation can only be made based on float surface trajectories. The diving and resurfacing positions of the float are not available in its trajectory file. This surface drifting effect makes it difficult to estimate the mid-depth current. Moreover, the vertical shear during descent or ascent between the parking depth and the surface is another major error source. In this presentation, we first quantify the contributions of the two major error sources using the current estimates from Estimating the Climate and Circulation of the Ocean (ECCO) and find that the surface drifting is the primary error source. Then, a sequential surface trajectory prediction/estimation scheme based on the Kalman filter is introduced and implemented to reduce the surface drifting error in the Pacific during November 2001 to October 2004. On average, the error of the estimated velocities is greatly reduced from 2.7 to 0.2 cm/s if the vertical shear is neglected. These velocities, with relative error less than 25%, are analyzed and compared with previous studies of mid-depth currents. The current system derived from ARGO floats in the Pacific at 1000 and 2000 dbar is comparable to that measured by ADCP (Reid, 1997; Firing et al., 1998). This presentation is based on two submitted manuscripts by the same authors (Xie and Zhu, 2006; Zhu et al., 2006). More detailed results can be found in the two manuscripts.
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented and compared to those from an alternative sensor selection strategy.
Interventions to reduce dosing errors in children: a systematic review of the literature.
Conroy, Sharon; Sweis, Dimah; Planner, Claire; Yeung, Vincent; Collier, Jacqueline; Haines, Linda; Wong, Ian C K
2007-01-01
Children are a particularly challenging group of patients when trying to ensure the safe use of medicines. The increased need for calculations, dilutions and manipulations of paediatric medicines, together with a need to dose on an individual patient basis using age, gestational age, weight and surface area, means that they are more prone to medication errors at each stage of the medicines management process. It is already known that dose calculation errors are the most common type of medication error in neonatal and paediatric patients. Interventions to reduce the risk of dose calculation errors are therefore urgently needed. A systematic literature review was conducted to identify published articles reporting interventions; 28 studies were found to be relevant. The main interventions found were computerised physician order entry (CPOE) and computer-aided prescribing. Most CPOE and computer-aided prescribing studies showed some degree of reduction in medication errors, with some claiming no errors occurring after implementation of the intervention. However, one study showed a significant increase in mortality after the implementation of CPOE. Further research is needed to investigate outcomes such as mortality and economics. Unit dose dispensing systems and educational/risk management programmes were also shown to reduce medication errors in children. Although it is suggested that 'smart' intravenous pumps can potentially reduce infusion errors in children, there is insufficient information to draw a conclusion because of a lack of research. Most interventions identified were US based, and since medicine management processes are currently different in different countries, there is a need to interpret the information carefully when considering implementing interventions elsewhere. PMID:18035864
NASA Astrophysics Data System (ADS)
Coakley, K. J.; Dewey, M. S.; Yue, A. T.; Laptev, A. B.
2009-12-01
Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.
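The propagation of small geometric errors into the Bragg-derived wavelength can be sketched by Monte Carlo. Here the misalignment is modeled crudely as a Gaussian offset on the Bragg angle (the full rocking-axis and plane-normal geometry studied in the paper is not reproduced), with an approximate Si(111) plane spacing; the nominal angle and alignment uncertainty are invented.

```python
import numpy as np

# Monte Carlo propagation of an angular misalignment into the Bragg
# wavelength, lambda = 2 * d * sin(theta). A simple additive angle error
# stands in for the geometric errors analyzed in the paper.
rng = np.random.default_rng(4)
d_111 = 0.313560e-9            # Si(111) plane spacing [m] (approximate)
theta = np.radians(30.0)       # nominal Bragg angle (invented)
sigma_align = np.radians(0.01) # assumed 0.01 deg alignment uncertainty

delta = rng.normal(0.0, sigma_align, size=100000)
lam = 2.0 * d_111 * np.sin(theta + delta)
lam0 = 2.0 * d_111 * np.sin(theta)
print(f"relative wavelength uncertainty: {lam.std() / lam0:.2e}")
```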
Effect of the first day correction on systematic setup error reduction
Wu Qiuwen; Lockman, David; Wong, John; Yan Di [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, Michigan, 48073 (United States)
2007-05-15
Treatment simulation is usually performed with a conventional simulator using kV X-rays or with a computed tomography (CT) simulator before the treatment course begins. The purpose is to verify patient setup under the same conditions as for treatment planning. Systematic (preparation) setup errors can be introduced by this process. The purpose of this study is to characterize the setup errors using electronic portal image (EPI) analyses and to propose a method to reduce the systematic component by performing simulation and patient preparation on the treatment machine. In this study, the EPIs from the first four or five treatment days were analyzed for a total of 533 prostate cancer patients who were simulated on conventional simulators. We characterized setup errors using four parameters, (M(μ_i), σ(μ_i), RMS(σ_i), σ(σ_i)), where μ_i and σ_i are the individual patient mean and standard deviation, and M, σ, and RMS are the mean, standard deviation, and root-mean-square of the underlying variables (μ_i and σ_i). We performed a simulation of removing systematic components by correcting the first-day setup error. As a comparison, we also carried out similar analyses for patients simulated on a CT simulator and patients treated on a linac with an on-board kV CT imaging system, although a limited number of patients were available in these two samples. We found that σ(μ_i) = (2.6, 3.4, 2.4) mm and RMS(σ_i) = (1.5, 1.9, 1.0) mm in the lateral, anterior/posterior, and cranial/caudal directions, indicating that systematic errors are much larger than random errors. Strong correlations were found between the measurement on the first day and μ_i, implying that the first day's measurement is a good predictor for μ_i. The same parameters were also computed for days 2-4, with and without the first-day correction. Without correction, M(μ_i)_2-4 = (0.7, 1.6, -1.0) mm and σ(μ_i)_2-4 = (2.6, 3.5, 2.4) mm. With correction, M(μ_i)_2-4 = (0.0, 0.4, 0.4) mm, much closer to zero, and σ(μ_i)_2-4 = (1.8, 2.2, 1.2) mm, also much smaller. While the use of a CT simulator can reduce the systematic errors, the benefits of first-day correction can still be observed, although at a smaller magnitude. Therefore, the systematic setup error can be significantly reduced if the patient is marked and fields are verified on the treatment machine on the first fraction, preferably with an on-board kV imaging system.
Zhu, Fangqiang; Hummer, Gerhard
2012-02-01
The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this article, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimally allocating the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here, we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354
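For reference, the standard fixed-point iteration the authors improve upon can be written compactly. This is a generic textbook sketch in reduced (kT) units with invented toy data, not the authors' likelihood-maximization code.

```python
import numpy as np

# Standard WHAM fixed-point iteration for umbrella sampling (kT units).
def wham(hist, bias, n_samples, tol=1e-10, max_iter=10000):
    """hist: (K, M) histogram counts per window; bias: (K, M) bias energy
    of each bin in each window; n_samples: (K,) samples per window."""
    K, M = hist.shape
    f = np.zeros(K)                      # window free energies
    for _ in range(max_iter):
        # unbiased bin probabilities (the first WHAM equation)
        denom = (n_samples[:, None] * np.exp(f[:, None] - bias)).sum(axis=0)
        p = hist.sum(axis=0) / denom
        p /= p.sum()
        # self-consistent window free energies (the second WHAM equation)
        f_new = -np.log((p[None, :] * np.exp(-bias)).sum(axis=1))
        f_new -= f_new[0]                # fix the arbitrary gauge
        if np.max(np.abs(f_new - f)) < tol:
            break
        f = f_new
    return p, f

# Toy call with two overlapping windows and made-up counts/biases:
hist = np.array([[40, 50, 10, 0], [0, 15, 55, 30]], dtype=float)
bias = np.array([[0.0, 0.5, 2.0, 4.5], [4.5, 2.0, 0.5, 0.0]])
p, f = wham(hist, bias, n_samples=hist.sum(axis=1))
print("free energy profile (kT):", (-np.log(p)).round(2))
```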
Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations
NASA Astrophysics Data System (ADS)
Cartwright, Keith
2014-10-01
To have a high degree of confidence in simulations one needs code verification, validation, solution verification and uncertainty qualification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method and how it impacts code verification and validation. A technique is developed to determine the fully converged solution with error bounds from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g., Δt, Δx, and macroparticle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, the relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
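For a single convergence parameter, the Richardson-extrapolation error estimate at the heart of this approach reads as follows. This is a deterministic sketch with invented toy values; the talk's multi-parameter regression and bootstrap over stochastic PIC outputs are not shown.

```python
def richardson(u_h, u_h2, p):
    """Error estimate for the fine-grid solution u_h2, from solutions on
    grids of size h and h/2 and the (assumed known) convergence order p:
    error(u_h2) ~ (u_h - u_h2) / (2**p - 1)."""
    err = (u_h - u_h2) / (2.0**p - 1.0)   # remaining error in u_h2
    return u_h2 - err, err                # extrapolated value, estimate

# Toy data: a quantity converging at second order, u(h) = 1 + 0.5 * h**2
u_h, u_h2 = 1.0 + 0.5 * 0.1**2, 1.0 + 0.5 * 0.05**2
u_star, err = richardson(u_h, u_h2, p=2)
print(f"extrapolated {u_star:.6f}, estimated error {err:.2e}")
```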
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time-domain data, or every 3 s using frequency-domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
NASA Technical Reports Server (NTRS)
Parrott, T. L.; Smith, C. D.
1977-01-01
The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.
The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors
Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada; Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4, Canada; Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada]
2010-07-15
Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
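The gamma index used as a criterion above combines a dose-difference tolerance with a distance-to-agreement tolerance. A minimal 1-D sketch with invented dose profiles (clinical tools operate on 2-D or 3-D dose grids):

```python
import numpy as np

# 1-D gamma index: for each measured point, gamma is the minimum over
# calculation points of the combined dose/distance metric; gamma <= 1
# counts as a pass under the chosen (dd, dta) criterion.
def gamma_index(x, d_meas, d_calc, dta=2.0, dd=0.02):
    """dta in mm, dd as a fraction of the global maximum dose."""
    d_tol = dd * d_calc.max()
    dist2 = ((x[:, None] - x[None, :]) / dta) ** 2
    dose2 = ((d_meas[:, None] - d_calc[None, :]) / d_tol) ** 2
    return np.sqrt(dist2 + dose2).min(axis=1)

x = np.arange(0.0, 50.0, 1.0)                  # positions [mm]
d_calc = np.exp(-((x - 25.0) / 10.0) ** 2)     # calculated profile
d_meas = np.exp(-((x - 25.5) / 10.0) ** 2)     # "measurement", 0.5 mm shift
g = gamma_index(x, d_meas, d_calc)
print(f"passing rate (gamma <= 1): {100.0 * np.mean(g <= 1.0):.1f}%")
```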
Error estimators for advection-reaction-diffusion equations based on the solution of local problems
NASA Astrophysics Data System (ADS)
Araya, Rodolfo; Behrens, Edwin; Rodriguez, Rodolfo
2007-09-01
This paper deals with a posteriori error estimates for advection-reaction-diffusion equations. In particular, error estimators based on the solution of local problems are derived for a stabilized finite element method. These estimators are proved to be equivalent to the error, with equivalence constants eventually depending on the physical parameters. Numerical experiments illustrating the performance of this approach are reported.
ANALYTICAL BIOCHEMISTRY 191, 110-118 (1990) Estimation of Protein Secondary Structure and Error
van Stokkum, Ivo
1990-01-01
to the estimation of the error in the secondary structure estimates. It is shown that the linear model is only of the estimate whose fractions of secondary structure summate to approximately one. Comparing the estimation from
Castle, R.O.; Brown, B.W., Jr.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.
1983-01-01
Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicated any analysis of the test results. If the fewer than one-third of the sections that failed to meet second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction-contaminated long-sight-length survey and the less contaminated short-sight-length survey. -Authors
NASA Astrophysics Data System (ADS)
Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric
2015-05-01
We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is “flagged” by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar population synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.
NASA Astrophysics Data System (ADS)
Ellerbroek, B.
2013-04-01
Context. The wavefront aberrations due to optical surface errors in adaptive optics systems and science instruments can be a significant error source for high precision astrometry. Aims: This report derives formulas for evaluating these errors which may be useful in developing astrometry error budgets and optical surface quality specifications. Methods: A Fourier domain approach is used, and the errors on each optical surface are modeled as "phase screens" with stationary statistics at one or several conjugate ranges from the optical system pupil. Three classes of error are considered: (i) errors in initially calibrating the effects of static surface errors; (ii) the effects of beam translation, or "wander," across optical surfaces due to (for example) instrument boresighting error; and (iii) quasistatic surface errors which change from one observation to the next. Results: For each of these effects, we develop formulas describing the position estimation errors in a single observation of a science field, as well as the differential error between two separate observations. Sample numerical results are presented for the three classes of error, including some sample computations for the Thirty Meter Telescope and the NFIRAOS first-light adaptive optics system.
An estimate of error for the CCAMLR 2000 survey estimate of krill biomass
NASA Astrophysics Data System (ADS)
Demer, David A.
2004-06-01
Combined sampling and measurement error is estimated for the CCAMLR 2000 acoustic estimate of krill abundance in the Scotia Sea. First, some potential sources of uncertainty in generic echo-integration surveys are reviewed. Then, specific to the CCAMLR 2000 survey, some of the primary sources of measurement error are explored. The error in system calibration is evaluated in relation to the effects of variations in water temperature and salinity on sound speed, sound absorption, and acoustic-beam characteristics. Variation in krill target strength is estimated using a distorted-wave Born approximation model fitted with measured distributions of animal lengths and orientations. The variable effectiveness of two-frequency species classification methods is also investigated using the same scattering model. Most of these components of measurement uncertainty are frequency-dependent and covariant. Ultimately, the total random error in the CCAMLR 2000 acoustic estimate of krill abundance is estimated from a Monte Carlo simulation which assumes independent estimates of krill biomass are derived from acoustic backscatter measurements at three frequencies (38, 120, and 200 kHz). The overall coefficient of variation (10.2% ≤ CV ≤ 11.6%; 95% CI) is not significantly different from the sampling variance alone (CV = 11.4%). That is, the measurement variance is negligible relative to the sampling variance, due to the large number of measurements averaged to derive the ultimate biomass estimate. Some potential sources of bias (e.g., stemming from uncertainties in the target strength model, the krill length-to-weight model, the species classification method, bubble attenuation, signal thresholding, and survey area definition) may be more appreciable components of measurement uncertainty.
Error estimation for CFD aeroheating prediction under rarefied flow condition
NASA Astrophysics Data System (ADS)
Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian
2014-12-01
Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of this parameter, compared with two other parameters, Kn∞ and Ma∞Kn∞.
Sampling errors in rainfall estimates by multiple satellites
NASA Technical Reports Server (NTRS)
North, Gerald R.; Shen, Samuel S. P.; Upson, Robert
1993-01-01
This paper examines the sampling characteristics of combining data collected by several low-orbiting satellites attempting to estimate the space-time average of rain rates. The several satellites can have different orbital and swath-width parameters. The satellite overpasses are allowed to make partial coverage snapshots of the grid box with each overpass. Such partial visits are considered in an approximate way, letting each intersection area fraction of the grid box by a particular satellite swath be a random variable with mean and variance parameters computed from exact orbit calculations. The derivation procedure is based upon the spectral minimum mean-square error formalism introduced by North and Nakamoto. By using a simple parametric form for the spacetime spectral density, simple formulas are derived for a large number of examples, including the combination of the Tropical Rainfall Measuring Mission with an operational sun-synchronous orbiter. The approximations and results are discussed and directions for future research are summarized.
Effects of errors-in-variables on weighted least squares estimation
NASA Astrophysics Data System (ADS)
Xu, Peiliang; Liu, Jingnan; Zeng, Wenxian; Shen, Yunzhong
2014-07-01
Although total least squares (TLS) is more rigorous than the weighted least squares (LS) method to estimate the parameters in an errors-in-variables (EIV) model, it is computationally much more complicated than the weighted LS method. For some EIV problems, the TLS and weighted LS methods have been shown to produce practically negligible differences in the estimated parameters. To understand under what conditions we can safely use the usual weighted LS method, we systematically investigate the effects of the random errors of the design matrix on weighted LS adjustment. We derive the effects of EIV on the estimated quantities of geodetic interest, in particular, the model parameters, the variance-covariance matrix of the estimated parameters and the variance of unit weight. By simplifying our bias formulae, we can readily show that the corresponding statistical results obtained by Hodges and Moore (Appl Stat 21:185-195, 1972) and Davies and Hutton (Biometrika 62:383-391, 1975) are actually the special cases of our study. The theoretical analysis of bias has shown that the effect of random matrix on adjustment depends on the design matrix itself, the variance-covariance matrix of its elements and the model parameters. Using the derived formulae of bias, we can remove the effect of the random matrix from the weighted LS estimate and accordingly obtain the bias-corrected weighted LS estimate for the EIV model. We derive the bias of the weighted LS estimate of the variance of unit weight. The random errors of the design matrix can significantly affect the weighted LS estimate of the variance of unit weight. The theoretical analysis successfully explains all the anomalously large estimates of the variance of unit weight reported in the geodetic literature. We propose bias-corrected estimates for the variance of unit weight. Finally, we analyze two examples of coordinate transformation and climate change, which have shown that the bias-corrected weighted LS method can perform numerically as well as the weighted TLS method.
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Mitigating Systematic Errors in Angular Correlation Function Measurements from Wide Field Surveys
Morrison, Christopher Brian
2015-01-01
We present an investigation into the effects of survey systematics such as varying depth, point spread function (PSF) size, and extinction on galaxy selection and correlation in photometric, multi-epoch, wide-area surveys. We take the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) as an example. Variations in galaxy selection due to systematics are found to cause density fluctuations of up to 10% for some small fraction of the area for most galaxy redshift slices, and as much as 50% for some extreme cases of faint high-redshift samples. This results in correlations of galaxies against survey systematics of order ~1% when averaged over the survey area. We present an empirical method for mitigating these systematic correlations from measurements of angular correlation functions using weighted random points. These weighted random catalogs are estimated from the observed galaxy overdensities by mapping these to survey parameters. We are able to model and mitigate the effect of systematic correl...
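The weighted-randoms mitigation can be sketched as follows; the "depth" map, the injected density trend, and the binning below are invented placeholders for the survey-parameter mapping described above.

```python
import numpy as np

# Sketch: map the observed galaxy overdensity as a function of a survey
# systematic (a fake per-pixel depth), then weight random points by that
# relation so the systematic imprint divides out of the correlation.
rng = np.random.default_rng(5)
depth = rng.normal(24.5, 0.3, size=5000)          # per-pixel survey depth
true_density = 1.0 + 0.2 * (depth - 24.5)          # injected 20%/mag trend
n_gal = rng.poisson(100 * true_density)            # galaxy counts per pixel

# Empirical density-vs-systematic relation from binned averages
bins = np.quantile(depth, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(depth, bins) - 1, 0, 9)
mean_counts = np.array([n_gal[idx == i].mean() for i in range(10)])
relation = mean_counts / n_gal.mean()

# Random points inherit a weight equal to the predicted relative density
# at their position, mimicking the systematic in the random catalog.
random_depth = rng.normal(24.5, 0.3, size=20000)
w_random = relation[np.clip(np.digitize(random_depth, bins) - 1, 0, 9)]
print(f"weight range: {w_random.min():.2f}-{w_random.max():.2f}")
```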
A Posteriori Error Estimation for a Nodal Method in Neutron Transport Calculations
Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.
1999-11-03
An a posteriori error analysis of the spatial approximation is developed for the one-dimensional Arbitrarily High Order Transport-Nodal method. The error estimator preserves the order of convergence of the method when the mesh size tends to zero with respect to the L² norm. It is based on the difference between two discrete solutions that are available from the analysis. The proposed estimator is decomposed into error indicators to allow the quantification of local errors. Some test problems with isotropic scattering are solved to compare the behavior of the true error to that of the estimated error.
CTER—Rapid estimation of CTF parameters with error assessment
Penczek, Pawel A.; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M.T.
2014-01-01
In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. PMID:24562077
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg
2015-07-01
In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces best results on short forecast horizons that are typical for online applications.
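For concreteness, here is a minimal sketch of the external bias description in its simplest form: observed discharge is simulator output plus an autocorrelated stochastic bias plus observation noise. The AR(1) bias process and all numbers are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal EBD sketch, assuming an AR(1) process for the systematic deviation.
# Inference over the bias and model parameters (MCMC in the paper) is omitted.
import numpy as np

rng = np.random.default_rng(1)
n, phi, sigma_b, sigma_e = 200, 0.95, 0.2, 0.05

model_output = np.sin(np.linspace(0, 8 * np.pi, n))   # deterministic simulator

bias = np.zeros(n)                                    # systematic deviation B(t)
for t in range(1, n):
    bias[t] = phi * bias[t - 1] + rng.normal(0, sigma_b * np.sqrt(1 - phi**2))

observed = model_output + bias + rng.normal(0, sigma_e, n)  # y = M + B + E
```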
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. The usual accuracy parameter is the standard deviation of the measured distance, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m), because it is generally known that this accuracy cannot be increased by simply repeating the measurement: a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
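The shape of such a per-instrument correction function is not given in the abstract; a common choice in EDM calibration combines an additive constant, a scale term, and a cyclic term tied to the instrument's modulation unit length. The sketch below fits such a function to synthetic residuals; the unit length U and all coefficients are assumed purely for illustration.

```python
# Illustrative fit of an EDM correction function to residuals against a
# reference baseline, then its application to measured distances.
import numpy as np
from scipy.optimize import curve_fit

U = 3.0  # assumed modulation unit length of the distance meter, in metres

def correction(d, a, b, c, phase):
    # offset + scale + cyclic term with period U
    return a + b * d + c * np.sin(2 * np.pi * d / U + phase)

rng = np.random.default_rng(2)
d = np.linspace(5, 50, 40)                         # measured distances (m)
residual = correction(d, 0.4e-3, 1e-5, 0.3e-3, 0.7) + rng.normal(0, 0.1e-3, d.size)

popt, _ = curve_fit(correction, d, residual, p0=[0, 0, 1e-3, 0])
corrected = d - correction(d, *popt)               # apply calibration function
```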
Sojka, Falko; Meissner, Matthias; Zwick, Christian; Forker, Roman; Fritz, Torsten
2013-01-01
We developed and implemented an algorithm to determine and correct systematic distortions in low-energy electron diffraction (LEED) images. The procedure is in principle independent of the design of the apparatus (spherical or planar phosphorescent screen vs. channeltron detector) and is therefore applicable to all device variants, known as conventional LEED, micro-channel plate LEED, and spot profile analysis LEED. The essential prerequisite is a calibration image of a sample with a well-known structure and a suitably high number of diffraction spots, e.g., a Si(111)-7×7 reconstructed surface. The algorithm provides a formalism which can be used to rectify all further measurements generated with the same device. In detail, one needs to distinguish between radial and asymmetric distortion. Additionally, it is necessary to know the primary energy of the electrons precisely to derive accurate lattice constants. Often, there will be a deviation between the true kinetic energy and the value set in the LEED control. Here, we introduce a method to determine this energy error more accurately than in previous studies. Following the correction of the systematic errors, a relative accuracy of better than 1% can be achieved for the determination of the lattice parameters of unknown samples. PMID:23387699
Systematic intensity errors and model imperfection as the consequence of spectral truncation
Rousseau; Maes; Lenstra
2000-05-01
The wavelength dispersion Δλ/λ in a graphite (002) monochromated Mo Kα beam was analyzed. A wavelength window was found with 0.68 < λ < 0.79 Å, i.e. Δλ/λ = 0.14. The very large dispersion leads to systematic errors in I_observed(H) caused by scan-angle-induced spectral truncation. A limit on the scan angle during data collection is unavoidable, in order that an ω/2θ measurement should not encompass neighboring reflections. The systematic intensity errors increase with the Bragg angle. Therefore they influence the refined X-ray structure by adding a truncational component to the temperature factor: B(X-ray) = B(true) + B(truncation). For an Mo tube at 50 kV, we find B(truncation) = 0.05 Å², whereas a value of 0.22 Å² applies to the same tube but operated at 25 kV. The values of B(truncation) are temperature independent. The model bias was verified via a series of experimental data collections on spherical crystals of nickel sulfate hexahydrate and ammonium hydrogen tartrate. Monochromatic reference structures were obtained via a synchrotron experiment and via a 'balanced' tube experiment. PMID:10851594
Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction
NASA Technical Reports Server (NTRS)
Kolodziejczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.
2013-01-01
The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
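A minimal sketch of the core operation described above: fitting each pixel time series to a set of co-trending basis vectors by linear least squares. The CBVs and pixel data here are synthetic, and the actual pipeline adds robust fitting and priors.

```python
# Sketch: per-pixel least-squares fit to co-trending basis vectors (CBVs).
import numpy as np

n_cad, n_pix, n_cbv = 500, 25, 4
rng = np.random.default_rng(3)

cbv = rng.normal(size=(n_cad, n_cbv))          # synthetic co-trending vectors
pixels = cbv @ rng.normal(size=(n_cbv, n_pix)) \
         + rng.normal(0, 0.1, (n_cad, n_pix))  # pixel time series with noise

coeffs, *_ = np.linalg.lstsq(cbv, pixels, rcond=None)   # shape (n_cbv, n_pix)
systematics = cbv @ coeffs                     # pixel-by-pixel trend model
residual_pixels = pixels - systematics         # systematics-removed pixels
```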
On GPS Water Vapour estimation and related errors
NASA Astrophysics Data System (ADS)
Antonini, Andrea; Ortolani, Alberto; Rovai, Luca; Benedetti, Riccardo; Melani, Samantha
2010-05-01
Water vapour (WV) is one of the most important constituents of the atmosphere: it plays a crucial role in the Earth's radiation budget through the absorption of both incoming shortwave and outgoing longwave radiation, and it is one of the main greenhouse gases of the atmosphere, by far the one with the highest concentration. In addition, moisture and latent heat are transported through the WV phase, which is one of the driving factors of weather dynamics, feeding the evolution of cloud systems. Accurate, dense and frequent sampling of WV at different scales is consequently of great importance for climatology and meteorology research as well as for operational weather forecasting. Since the development of satellite positioning systems, it has been clear that the troposphere and its WV content are a source of delay in the positioning signal, in other words a source of error in the positioning process or, in turn, a source of information in meteorology. The use of the GPS (Global Positioning System) signal for WV estimation has increased in recent years, starting from measurements collected by ground-fixed dual-frequency GPS geodetic stations. This technique for processing the GPS data is based on measuring the signal travel time along the satellite-receiver path and then processing the signal to filter out all delay contributions except the tropospheric one. Once the tropospheric delay is computed, the wet and dry parts are decoupled under some hypotheses on the tropospheric structure and/or through ancillary information on pressure and temperature. The processing chain normally aims at producing a vertical Integrated Water Vapour (IWV) value. The other, non-tropospheric delays are due to ionospheric free electrons, relativistic effects, multipath effects, transmitter and receiver instrumental biases, and signal bending. The total effect is a delay in the signal travel time with respect to the geometrical straight path. The GPS signal has the advantage of being nearly costless and practically continuous (every second) with respect to atmospheric dynamics. The spatial resolution is correlated with the number and spacing (i.e. density) of ground-fixed stations and in principle can be very high (and is certainly increasing). Problems can arise from the errors made in decoupling the various delay components and from the approximations assumed in computing the IWV from the wet delay component. Such errors are often "masked" by the use of the available software packages for GPS data processing; as a consequence, the errors associated with the final WV products are more often obtained from a posteriori validation than derived from rigorous error propagation analyses. In this work we present a technique to compute the different components necessary to retrieve WV measurements from the GPS signal, with a critical analysis of all approximations and errors made in the processing procedure, also in view of the great opportunities that the European Galileo system will bring to this field.
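For reference, the standard processing chain sketched above can be written in a few lines using the widely used Saastamoinen hydrostatic delay and a Bevis-style conversion factor. The constants are the commonly quoted ones and the inputs are illustrative; this is not the authors' own processing.

```python
# Back-of-envelope ZTD -> ZWD -> IWV chain with standard constants.
import numpy as np

def zhd_saastamoinen(p_hpa, lat_rad, h_km):
    # Zenith hydrostatic delay (m) from surface pressure (Saastamoinen form).
    return 0.0022768 * p_hpa / (1 - 0.00266 * np.cos(2 * lat_rad) - 0.00028 * h_km)

def iwv_from_zwd(zwd_m, tm_kelvin):
    # Bevis-style conversion; k2' [K/hPa], k3 [K^2/hPa], Rv [J/(kg K)].
    k2p, k3, rv, rho = 22.1, 3.739e5, 461.5, 1000.0
    pi = 1e6 / (rho * rv * (k2p + k3 / tm_kelvin) / 100)  # /100: hPa -> Pa
    return pi * zwd_m * 1000.0                            # kg/m^2

ztd = 2.40                                  # total zenith delay from GPS (m)
zwd = ztd - zhd_saastamoinen(1013.0, np.deg2rad(45.0), 0.05)
print(f"IWV = {iwv_from_zwd(zwd, 270.0):.1f} kg/m^2")
```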
A posteriori error estimators for the Stokes equations II non-conforming discretizations
Verfürth, R.
1991-01-01
We present an a posteriori error estimator for the non-conforming Crouzeix-Raviart discretization of the Stokes equations which is based on the local evaluation of residuals with respect to the strong form of the differential equation. The error estimator yields global upper and local lower bounds for the error of the finite element solution. It can easily be generalized to
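A 1D analogue may help make the construction concrete: for -u'' = f with a piecewise-linear approximation, the element indicator combines a weighted element residual with the jumps of the discrete flux across element boundaries. The sketch below is this scalar analogue, not the Stokes estimator itself.

```python
# Illustrative 1D residual-based a posteriori indicator for -u'' = f.
# For piecewise-linear u_h, u_h'' = 0 on each element, so the element
# residual reduces to h_K^2 ||f||^2 plus flux-jump terms at interfaces.
import numpy as np

n = 16
x = np.linspace(0.0, 1.0, n + 1)                # mesh nodes
f = lambda s: np.pi**2 * np.sin(np.pi * s)      # right-hand side
u_h = np.sin(np.pi * x)                         # nodal values of u_h

du = np.diff(u_h) / np.diff(x)                  # elementwise gradient
jumps = np.abs(np.diff(du))                     # flux jumps at interior nodes

h = np.diff(x)
eta_sq = h**2 * f(0.5 * (x[:-1] + x[1:]))**2 * h    # element residual part
eta_sq[:-1] += 0.5 * h[:-1] * jumps**2              # split jump contribution
eta_sq[1:] += 0.5 * h[1:] * jumps**2
print(np.sqrt(eta_sq.sum()))                    # global error estimate
```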
NASA Astrophysics Data System (ADS)
Hacker, Joshua; Lee, Jared; Lei, Lili
2014-05-01
Numerical weather prediction (NWP) models have deficiencies in surface and boundary layer parameterizations, which may be particularly acute over complex terrain. Structural and physical model deficiencies are often poorly understood, and can be difficult to identify. Uncertain model parameters can lead to one class of model deficiencies when they are mis-specified. Augmenting the model state variables with parameters, data assimilation can be used to estimate the parameter distributions as long as the forecasts for observed variables are linearly dependent on the parameters. Reduced forecast (background) error shows that the parameter is accounting for some component of model error. Ensemble data assimilation has the favorable characteristic of providing ensemble-mean parameter estimates, eliminating some noise in the estimates when additional constraints on the error dynamics are unknown. This study focuses on coupling the Weather Research and Forecasting (WRF) NWP model with the Data Assimilation Research Testbed (DART) to estimate the Zilitinkevich parameter (CZIL). CZIL controls the thermal 'roughness length' for a given momentum roughness, thereby controlling heat and moisture fluxes through the surface layer by specifying the (unobservable) aerodynamic surface temperature. Month-long data assimilation experiments with 96 ensemble members, and grid spacing down to 3.3 km, provide a data set for interpreting parametric model errors in complex terrain. Experiments are during fall 2012 over the western U.S., and radiosonde, aircraft, satellite wind, surface, and mesonet observations are assimilated every 3 hours. One ensemble has a globally constant value of CZIL=0.1 (the WRF default value), while a second ensemble allows CZIL to vary over the range [0.01, 0.99], with distributions updated via the assimilation. Results show that the CZIL estimates do vary in time and space. Most often, forecasts are more skillful with the updated parameter values, compared to the fixed default values, suggesting that the parameters account for some systematic errors. Because the parameters can account for multiple sources of errors, the importance of terrain in determining surface-layer errors can be deduced from parameter estimates in complex terrain; parameter estimates with spatial scales similar to the terrain indicate that terrain is responsible for surface-layer model errors. We will also comment on whether residual errors in the state estimates and predictions appear to suggest further parametric model error, or some other source of error that may arise from incorrect similarity functions in the surface-layer schemes.
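The state-augmentation mechanism described above reduces to a few lines: the parameter is appended to the state vector and updated through its ensemble cross-covariance with the observed variable. This toy sketch uses a scalar state and a perturbed-observation update; DART's filter and the WRF coupling are far richer, and all numbers are illustrative.

```python
# Toy ensemble-filter update of an augmented (state, parameter) vector.
import numpy as np

rng = np.random.default_rng(4)
n_ens = 96

state = rng.normal(285.0, 1.0, n_ens)        # e.g. a 2 m temperature forecast
param = rng.uniform(0.01, 0.99, n_ens)       # uncertain parameter ensemble
obs, obs_var = 286.0, 0.5**2                 # one observation of the state

cov_ss = np.cov(state)                       # forecast error variance
cov_ps = np.cov(param, state)[0, 1]          # parameter-state cross-covariance

k_state = cov_ss / (cov_ss + obs_var)        # Kalman gain for the state
k_param = cov_ps / (cov_ss + obs_var)        # gain for the parameter

innov = obs + rng.normal(0, np.sqrt(obs_var), n_ens) - state  # perturbed obs
state += k_state * innov
param = np.clip(param + k_param * innov, 0.01, 0.99)
```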
Adaptive error covariances estimation methods for ensemble Kalman filters
NASA Astrophysics Data System (ADS)
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive computational cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of a recently proposed method by Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger and the Berry-Sauer schemes are shown for various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of more accurate estimates than the Berry-Sauer method on the L-96 example.
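The following sketch illustrates only the underlying innovation-statistics identity that such methods exploit, not Belanger's recursion or the proposed modification: for lag zero, the innovation covariance equals H P^f H^T + R, so given the ensemble-predicted part, R is recovered from sampled innovations.

```python
# Recovering the observation noise covariance R from innovation statistics,
# assuming the ensemble-predicted part H P^f H^T is known.
import numpy as np

rng = np.random.default_rng(5)
n_obs, n_steps = 3, 2000

r_true = np.diag([0.2, 0.5, 1.0])            # unknown observation noise
hpht = np.eye(n_obs)                         # assumed known from the ensemble

innovations = rng.multivariate_normal(np.zeros(n_obs), hpht + r_true, n_steps)
r_est = np.cov(innovations.T) - hpht         # lag-0 identity: E[dd^T] - HPH^T
print(np.round(np.diag(r_est), 2))           # close to [0.2, 0.5, 1.0]
```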
Kriging regression of PIV data using a local error estimate
NASA Astrophysics Data System (ADS)
de Baar, Jouke H. S.; Percin, Mustafa; Dwight, Richard P.; van Oudheusden, Bas W.; Bijl, Hester
2014-01-01
The objective of the method described in this work is to provide an improved reconstruction of an original flow field from experimental velocity data obtained with the particle image velocimetry (PIV) technique, by incorporating the local accuracy of the PIV data. The postprocessing method we propose is Kriging regression using a local error estimate (Kriging LE). In Kriging LE, each velocity vector must be accompanied by an estimated measurement uncertainty. The performance of Kriging LE is first tested on synthetically generated PIV images of a two-dimensional flow of four counter-rotating vortices with various seeding and illumination conditions. Kriging LE is found to dramatically increase the accuracy of interpolation to a finer grid under severe reflection and low seeding conditions. We subsequently apply Kriging LE for spatial regression of stereo-PIV data to reconstruct the three-dimensional wake of a flapping-wing micro air vehicle. By qualitatively comparing the large-scale vortical structures, we show that Kriging LE performs better than cubic spline interpolation. By quantitatively comparing the interpolated vorticity to unused measurement data at intermediate planes, we show that Kriging LE outperforms conventional Kriging as well as cubic spline interpolation.
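A minimal sketch of the Kriging LE idea using scikit-learn, where the per-vector measurement variances enter the Gaussian-process fit through the `alpha` noise term so that uncertain vectors are down-weighted; the flow field, uncertainties, and kernel length scale are all illustrative assumptions, not the authors' implementation.

```python
# Kriging with heteroscedastic (per-point) noise variances.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(6)
x = rng.uniform(0, 1, (40, 1))                      # measurement locations
sigma = rng.uniform(0.01, 0.3, 40)                  # local PIV uncertainty
u = np.sin(2 * np.pi * x[:, 0]) + rng.normal(0, sigma)  # noisy velocity comp.

gp = GaussianProcessRegressor(kernel=RBF(0.2), alpha=sigma**2)
gp.fit(x, u)
x_fine = np.linspace(0, 1, 200)[:, None]            # finer reconstruction grid
u_fine, u_std = gp.predict(x_fine, return_std=True)
```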
Errors of Remapping of Radar Estimates onto Cartesian Coordinates
NASA Astrophysics Data System (ADS)
Sharif, H. O.; Ogden, F. L.
2014-12-01
Recent upgrades to operational radar rainfall products in terms of quality and resolution call for re-examination of the factors that contribute to the uncertainty of radar rainfall estimation. Remapping or gridding of radar polar observations onto Cartesian coordinates is implemented using various methods, and is often applied when radar estimates are compared against rain gauge observations, in hydrologic applications, or for merging data from different radars. However, assuming perfect radar observations, many of the widely used remapping methodologies do not conserve mass for the rainfall rate field. Research has suggested that optimal remapping should select all polar bins falling within or intersecting a Cartesian grid and assign them weights based on the proportion of each individual bin's area falling within the grid. However, to reduce computational demand practitioners use a variety of approximate remapping approaches. The most popular approximate approaches used are those based on extracting information from radar bins whose centers fall within a certain distance from the center of the Cartesian grid. This paper introduces a mass-conserving method for remapping, which we call "precise remapping", and evaluates it by comparing against two other commonly used remapping methods based on areal weighting and distance. Results show that the choice of the remapping method can lead to large errors in grid-averaged rainfall accumulations.
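The areal-weighting rule at the heart of the mass-conserving "precise remapping" idea reduces to a weighted mean over overlapping bins; the toy sketch below assumes the bin-cell overlap fractions have already been computed geometrically.

```python
# Area-weighted remapping of polar radar bins onto one Cartesian cell,
# with overlap fractions assumed precomputed from the bin/cell geometry.
import numpy as np

bin_rain = np.array([4.0, 6.0, 10.0])    # rain rates of three polar bins
overlap = np.array([0.5, 0.3, 0.2])      # fraction of the cell each bin covers

cell_value = np.sum(overlap * bin_rain) / np.sum(overlap)
print(cell_value)   # 5.8; any single-bin assignment would give 4, 6, or 10
```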
Model Error Estimation for the CPTEC Eta Model
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; da Silva, Arlindo
1999-01-01
Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
NASA Astrophysics Data System (ADS)
Müller, C.; Voss, K.-O.; Lang, M.; Neumann, R.
2003-12-01
Scanning force microscopy (SFM) is capable of imaging surfaces with resolution on a nanometer scale. This method therefore plays an important role in characterizing radiation-induced defects in solids, complementing methods like transmission electron microscopy, small-angle X-ray scattering and optical spectroscopy, to name a few. In particular, the SFM inspection of ionic single crystals irradiated with energetic heavy ions revealed minute hillocks. The aim of determining the size and shape of these ion tracks as a function of parameters such as energy loss makes it necessary to critically analyze the interaction between SFM probe tip and sample in order to recognize and take into account systematic errors. Such errors originate especially from the finite size of the sensor tip. This work presents both an uncomplicated model of the SFM imaging process and its experimental verification, allowing one to quantify the influence of the tip geometry on the recorded micrographs and correct the resulting data accordingly. For this purpose, a computer program was developed which first determines the tip geometry by means of the known geometry of a calibration standard and then, using this tip geometry, reproduces the original sample topography containing the radiation damage structures under study. This is illustrated representatively for artificially generated images and also for a sample micrograph recorded on the surface of U-irradiated CaF2 to prove the efficiency of the suggested procedures. Afterwards, an existing set of images showing the calibration standard 2D200 (NANOSENSORS) is used to classify the average tip shape. Because no large variations in this shape occur, the procedure of imaging the calibration standard for each measurement can be replaced by using this average tip for reconstruction. The article concludes with the elimination of systematic errors in existing data sets of hillock diameters recorded on LiF, CaF2 and LaF3 after irradiation with swift heavy ions.
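The dilation/erosion picture of tip-sample convolution that underlies such corrections can be sketched with grey-scale morphology. This is the classical certainty-map reconstruction idea, not the authors' program, and the surface and tip profiles are synthetic.

```python
# Tip-broadening and its morphological correction on a synthetic profile.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

x = np.linspace(-1, 1, 201)
surface = np.exp(-(x / 0.05) ** 2)[None, :] * 5.0       # a 5 nm hillock row

tip = -(np.linspace(-1, 1, 21) ** 2)[None, :] * 40.0    # parabolic tip, apex 0

image = grey_dilation(surface, structure=tip)           # what the SFM records
reconstructed = grey_erosion(image, structure=tip)      # morphological estimate

# Broadening inflates the apparent width of an isolated hillock, not its height:
print(image.max() - surface.max())
```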
Systematic bearing errors of HF signals observed over a North-South propagation path
NASA Astrophysics Data System (ADS)
Jones, T. B.; Warrington, E. M.; Perry, J. E.
1992-11-01
Strong gradients in electron density associated with sunrise and sunset are a well known feature of the ionosphere and will produce off-great-circle bearings, particularly if the propagation path is parallel to the dawn or dusk terminator. Measurements were undertaken of the direction of arrival of signals in the range 3 to 23 MHz radiated by a transmitter located at Clyde River in the Canadian Arctic (70 deg N, 70 deg W). The path from this transmitter to the receiving site at Boston, USA, is parallel to the dawn-dusk line, and consequently systematic changes in bearings are expected to occur. Bearings measured during January 1989 indicate a positive error of a few degrees around sunrise. As the day progressed, the error decreased, becoming zero at local noon at the path midpoint. As dusk approached, the tilts in the ionosphere reversed in gradient and there was a smaller negative swing in the mean bearing. The bearing error at dusk is smaller than at sunrise since the ionospheric gradients at this time are less steep. The diurnal swing in the bearing occurs during the winter and equinox periods but is absent (or very small) during summer. This is because the ionospheric gradients in summer are smaller than those at other seasons, owing to the relatively low values of the F-region critical frequency (foF2) during the summer daytime.
An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors
NASA Technical Reports Server (NTRS)
Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg
2011-01-01
The industrial period and modern age are characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be accounted for in the retrieval algorithms to create a data set closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.
Systematic Uncertainties in Stellar Mass Estimation for Distinct Galaxy Populations
Sheila J. Kannappan; Eric Gawiser
2007-01-26
We show that different stellar-mass estimation methods yield overall mass scales that disagree by factors up to ~2 for the z=0 galaxy population, and more importantly, relative mass scales that sometimes disagree by factors >~3 between distinct classes of galaxies (spiral/irregular types, classical E/S0s, and E/S0s whose colors reflect recent star formation). This comparison considers stellar mass estimates based on (a) two different calibrations of the correlation between K-band mass-to-light ratio and B-R color (Bell et al., Portinari et al.) and (b) detailed fitting of UBRJHK photometry and optical spectrophotometry using two different population synthesis models (Bruzual-Charlot, Maraston), with the same initial mass function in all cases. We also compare stellar+gas masses with dynamical masses. This analysis offers only weak arguments for preferring a particular stellar-mass estimation method, given the plausibility of real variations in dynamical properties and dark matter content. These results help to calibrate the systematic uncertainties inherent in mass-based evolutionary studies of galaxies, including comparisons of low and high redshift galaxies.
Identify challenges in addressing measurement error when modeling multivariate dietary variables such as diet quality indices. Describe statistical modeling techniques to correct for measurement error in estimating multivariate dietary variables.
ERIC Educational Resources Information Center
Tawney, D. A.
1972-01-01
Suggests the level at which errors could be treated in sixth form physics (England) and then describes two design exercises in which rough estimations of errors are used in the making of design decisions. (Author/PR)
mBEEF: an accurate semi-local Bayesian error estimation density functional.
Wellendorff, Jess; Lundgaard, Keld T; Jacobsen, Karsten W; Bligaard, Thomas
2014-04-14
We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations. PMID:24735288
An error estimation procedure for plate bending elements
NASA Technical Reports Server (NTRS)
Dow, John O.; Byrd, Doyle E.
1988-01-01
Procedures for identifying and eliminating errors inherent in individual finite elements and those due to the discretization of the continuum are presented. The elemental errors are identified through the use of an element formulation procedure based on physically interpretable strain gradient interpolation functions. The use of physically interpretable notation allows these errors to be eliminated using rational arguments. The discretization errors are identified by comparing the finite-element solution with a smoothed superconvergent solution. The errors thus identified are used to guide an adaptive mesh refinement procedure which produces improved results.
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to arrive directly at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
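One way to make such a residual-driven covariance concrete is a sandwich-style estimate, in which the actual post-fit residuals are propagated through the weighted least squares mapping in place of the assumed noise model. The sketch below takes this route and may differ in detail from the paper's own construction.

```python
# Formal vs. residual-driven (sandwich) covariance in weighted least squares.
import numpy as np

rng = np.random.default_rng(7)
a = rng.normal(size=(50, 2))                 # design matrix, 2 state params
w = np.eye(50)                               # assumed observation weights
x_true = np.array([1.0, -0.5])
y = a @ x_true + rng.normal(0, 0.3, 50)      # noisier than the weights assume

n = np.linalg.inv(a.T @ w @ a)               # normal-matrix inverse
x_hat = n @ a.T @ w @ y
r = y - a @ x_hat                            # post-fit residuals

p_formal = n                                 # formal covariance (sigma0 = 1)
p_emp = n @ a.T @ w @ np.diag(r**2) @ w @ a @ n   # driven by actual residuals
print(np.trace(p_formal), np.trace(p_emp))   # empirical one reflects real noise
```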
RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION
Gardner, C. J.; Lee, Y. Y.; Weng, W. T.
1998-06-22
The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV proton linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.
NASA Technical Reports Server (NTRS)
Lang, Christapher G.; Bey, Kim S. (Technical Monitor)
2002-01-01
This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra
NASA Astrophysics Data System (ADS)
Whitmore, Jonathan B.; Murphy, Michael T.
2015-02-01
We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ˜10 m s-1 precision over the entire optical wavelength range on scales of both echelle orders (˜50-100 Å) and entire spectrograph arms (˜1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s-1 per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.
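Applying the distortion model to simulated spectra amounts to little more than the sketch below: a long-range linear velocity shift across one spectrograph arm perturbing the wavelength scale, here with an illustrative 200 m/s per 1000 Å slope.

```python
# Long-range linear wavelength-scale distortion applied to a simulated arm.
import numpy as np

c = 299792.458                                   # speed of light, km/s
wave = np.linspace(5000.0, 6000.0, 2 ** 12)      # one arm, in Angstroms
slope = 0.2 / 1000.0                             # 200 m/s per 1000 A, in km/s/A

v_shift = slope * (wave - wave.mean())           # velocity distortion (km/s)
wave_distorted = wave * (1.0 + v_shift / c)      # distorted wavelength scale
```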
NASA Technical Reports Server (NTRS)
De Lapparent, V.; Kurtz, M. J.; Geller, M. J.
1986-01-01
Residual errors in the Seldner et al. (SSGP) map which caused a break in both the correlation function (CF) and the filamentary appearance of the Shane-Wirtanen map are examined. These errors, causing a residual rms fluctuation of 11 percent in the SSGP-corrected counts and a systematic rms offset of 8 percent in the mean count per plate, can be attributed to counting pattern and plate vignetting. Techniques for CF reconstruction in catalogs affected by plate-related systematic biases are examined, and it is concluded that accurate restoration may not be possible. Surveys designed to measure the CF at the depth of the SW counts on a scale of 2.5 deg must have systematic errors of less than or about 0.04 mag.
NASA Astrophysics Data System (ADS)
Miller, B.; O'Shaughnessy, R.; Littenberg, T. B.; Farr, B.
2015-08-01
Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic follow-up facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the trade-off between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (stf2), in parameter estimation with lalinference_mcmc. Though efficient, the stf2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, we find that the stf2 approximation estimates compact binary parameters with biases comparable to systematic uncertainties in the waveform. We illustrate by example the effect these systematic errors have on posterior probabilities most relevant to low-latency electromagnetic follow-up: whether the secondary has a mass consistent with a neutron star (NS); whether the masses, spins, and orbit are consistent with that neutron star's tidal disruption; and whether the binary's angular momentum axis is oriented along the line of sight.
X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors
Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.
2010-07-09
Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low-emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super-high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance comes from the inadequate present level of optical and at-wavelength metrology and insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 µrad have become routine. We present recent results from the ALS of temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.
Witte, Marnix G.; van der Geer, Joris; Schneider, Christoph; Lebesque, Joos V.; Alber, Markus; van Herk, Marcel
2007-09-15
The purpose of this work was the development of a probabilistic planning method with biological cost functions that does not require the definition of margins. Geometrical uncertainties were integrated in tumor control probability (TCP) and normal tissue complication probability (NTCP) objective functions for inverse planning. For efficiency reasons random errors were included by blurring the dose distribution and systematic errors by shifting structures with respect to the dose. Treatment plans were made for 19 prostate patients following four inverse strategies: Conformal with homogeneous dose to the planning target volume (PTV), a simultaneous integrated boost using a second PTV, optimization using TCP and NTCP functions together with a PTV, and probabilistic TCP and NTCP optimization for the clinical target volume without PTV. The resulting plans were evaluated by independent Monte Carlo simulation of many possible treatment histories including geometrical uncertainties. The results showed that the probabilistic optimization technique reduced the rectal wall volume receiving high dose, while at the same time increasing the dose to the clinical target volume. Without sacrificing the expected local control rate, the expected rectum toxicity could be reduced by 50% relative to the boost technique. The improvement over the conformal technique was larger yet. The margin based biological technique led to toxicity in between the boost and probabilistic techniques, but its control rates were very variable and relatively low. During evaluations, the sensitivity of the local control probability to variations in biological parameters appeared similar for all four strategies. The sensitivity to variations of the geometrical error distributions was strongest for the probabilistic technique. It is concluded that probabilistic optimization based on tumor control probability and normal tissue complication probability is feasible. It results in robust prostate treatment plans with an improved balance between local control and rectum toxicity, compared to conventional techniques.
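The efficiency device mentioned above (blurring the dose for random errors, shifting structures for systematic ones) is easy to picture in one dimension; the sketch below uses SciPy filters on a toy dose profile, with all values illustrative.

```python
# Random errors as dose blurring, systematic errors as a sampled shift (1D toy).
import numpy as np
from scipy.ndimage import gaussian_filter1d, shift

dose = np.zeros(200)
dose[80:120] = 2.0                                # idealized dose profile (Gy)

dose_random = gaussian_filter1d(dose, sigma=3)    # random-error blurring
dose_systematic = shift(dose_random, 4, order=1)  # one sampled systematic shift
```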
Shirasaki, Masato
2013-01-01
Measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in measurement of weak lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurement, photometric redshift errors, and shear calibration correction. We first generate mock weak lensing catalogues that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform the Fisher analysis using the large set of mock catalogues for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. Subaru Hyper Suprime-Cam (HSC) survey with the sky coverage of ~1400 deg2 will constrain the dark energy equation of state parameter with an error of Delta w_0 ~ 0.25 by the lensing MFs alone, but biases induced by the syst...
Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors
NASA Astrophysics Data System (ADS)
Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.
2013-05-01
GNSS Radio Occultation (RO) data are very well suited for climate applications, since they do not require external calibration and only short-term measurement stability over the occultation event duration (1 - 2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05 % and 0.05 K in almost any case (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already allows one to see statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, like the influence of the quality control and the high altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity with atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are not negligible anymore. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the night time bending angle bias stays constant over the whole period of 11 years, while the day time bias increases from low to high solar activity. As a result, the difference between night and day time bias increases from -0.05 µrad to -0.4 µrad. This behavior paves the way to correct the (small) solar cycle dependent bias of large ensembles of day time RO profiles.
NASA Astrophysics Data System (ADS)
Park, Young-Ho; Lee, Soo Heyong; Park, Sang Eon; Lee, Jong Koo; Kang, Hoonsoo; Lee, Ho Seong; Kwon, Taeg Yong
2009-06-01
Systematic errors due to evanescent field inside an E-plane type Ramsey cavity are corrected by introducing an extended square pulse profile in our Ramsey spectral analysis. This procedure is crucial for the proper operation of thermal beam based atomic frequency standards with short cavities. By accounting for the spatial extent of the field inside the cavity, the method enables a correct calculation of transition probability for atoms at the interaction region. The authors demonstrate that the systematic errors in quadratic Doppler and end-to-end cavity phase shifts can be reduced by an order of magnitude.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2013-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
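A minimal sketch of the multiplicative-error setting, assuming noise proportional to the true value and using iteratively reweighted least squares as one simple estimator; the paper's three LS adjustments and variance-of-unit-weight estimators are more elaborate.

```python
# Multiplicative errors (sigma_i proportional to the signal) handled by
# iteratively reweighted least squares on a one-parameter regression.
import numpy as np

rng = np.random.default_rng(8)
c = 0.05                                       # 5 % proportional error
x = np.linspace(1, 10, 100)
y = 2.0 * x * (1 + c * rng.normal(size=x.size))  # multiplicative noise

slope = np.sum(y * x) / np.sum(x * x)          # ordinary LS as a start
for _ in range(5):                             # reweight by fitted values
    w = 1.0 / (c * slope * x) ** 2
    slope = np.sum(w * y * x) / np.sum(w * x * x)
print(slope)                                   # close to the true value 2.0
```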
FOR UNKNOWN--BUT--BOUNDED ERRORS, INTERVAL ESTIMATES ARE OFTEN BETTER THAN AVERAGING
Kreinovich, Vladik
When measurement errors are known only to be bounded, and no information is available about the probabilities of the different error values, interval estimates are often better than averaging.
Fluctuations of refractivity as a systematic error source in radio occultations
NASA Astrophysics Data System (ADS)
Gorbunov, Michael E.; Vorob'ev, Valery V.; Lauritsen, Kent B.
2015-07-01
The fact that fluctuations of refractivity may result in a systematic negative shift of the phase of waves propagating in a random medium has been known for a long time. Tatarskii was the first to reveal it, and von Eshleman put this into the context of the radio occultation sounding of planetary atmospheres. In this paper, we show that this effect may also be one of the causes of the negative bias of refractivity retrieved for radio occultation observations of the Earth's atmosphere. We perform theoretical estimates of this effect based on the Rytov approximation. These estimates, however, do not consider the regular refraction, which may significantly change the magnitude of this effect. We perform numerical simulations of radio occultations, based on the Kolmogorov-von Kármán isotropic spectrum of refractivity fluctuations, with the internal and external scales and magnitude tuned so as to reproduce the realistic level of the variance of retrieved refractivity and the amplitude fluctuations of the modeled signals. The model of the regular atmosphere is based on analyses of the European Centre for Medium-Range Weather Forecasts. We show that it is possible to set up a vertical profile of the structural constant of the fluctuation spectrum such that it will result in a systematic shift and variances of the retrieved refractivity consistent with those observed for COSMIC measurements.
NASA Technical Reports Server (NTRS)
Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.
2004-01-01
One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when multivariate correction is used, as evident from the analyses of the rms differences of these fields and independent observations. The MvOI assimilation is found to improve upon the control run in generating the water masses with properties close to the observed, while the UOI failed to maintain the temperature and salinity structure.
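The multivariate update that distinguishes the MvOI can be sketched in a few lines: a temperature innovation also corrects salinity through an ensemble-estimated cross-covariance. The T-S coupling and all numbers here are synthetic, not the scheme's actual statistics.

```python
# Toy multivariate OI: one temperature observation updates both T and S.
import numpy as np

rng = np.random.default_rng(9)
n_ens = 100
temp = rng.normal(20.0, 0.8, n_ens)
salt = 35.0 + 0.5 * (temp - 20.0) + rng.normal(0, 0.1, n_ens)  # T-S coupled

obs_t, obs_var = 20.9, 0.2**2
cov_tt = np.var(temp, ddof=1)
cov_st = np.cov(salt, temp, ddof=1)[0, 1]      # ensemble cross-covariance

gain_t = cov_tt / (cov_tt + obs_var)
gain_s = cov_st / (cov_tt + obs_var)           # the multivariate part

dt = obs_t - temp.mean()
t_anal = temp.mean() + gain_t * dt
s_anal = salt.mean() + gain_s * dt             # salinity adjusted consistently
```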
NASA Astrophysics Data System (ADS)
Ammons, S. M.; Neichel, Benoit; Lu, Jessica; Gavel, Donald T.; Srinath, Srikar; McGurk, Rosalie; Rudy, Alex; Rockosi, Connie; Marois, Christian; Macintosh, Bruce; Savransky, Dmitry; Galicher, Raphael; Bendek, Eduardo; Guyon, Olivier; Marin, Eduardo; Garrel, Vincent; Sivo, Gaetano
2014-08-01
We measure the long-term systematic component of the astrometric error in the GeMS MCAO system as a function of field radius and Ks magnitude. The experiment uses two epochs of observations of NGC 1851 separated by one month. The systematic component is estimated for each of three field-of-view cases (15'' radius, 30'' radius, and full field) and each of three distortion correction schemes: 8 DOF/chip + local distortion correction (LDC), 8 DOF/chip with no LDC, and 4 DOF/chip with no LDC. For bright, unsaturated stars with 13 < Ks < 16, the systematic component is < 0.2, 0.3, and 0.4 mas, respectively, for the 15'' radius, 30'' radius, and full-field cases, provided that an 8 DOF/chip distortion correction with LDC (for the full-field case) is used to correct distortions. An 8 DOF/chip distortion-correction model always outperforms a 4 DOF/chip model, at all field positions and magnitudes and for all field-of-view cases, indicating the presence of high-order distortion changes. Given the order of the models needed to correct these distortions (~8 DOF/chip, or 32 degrees of freedom in total), it is expected that at least 25 stars per square arcminute would be needed to keep systematic errors at less than 0.3 mas for multi-year programs. We also estimate the short-term astrometric precision of the newly upgraded Shane AO system with undithered M92 observations. Using a 6-parameter linear transformation to register images, the system delivers ~0.3 mas astrometric error over short-term observations of 2-3 minutes.
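The 6-parameter linear transformation used to register frames is a plain affine least-squares fit; a minimal sketch (assuming (N, 2) arrays of matched star positions, with hypothetical names) is:

```python
import numpy as np

def fit_affine(ref, obs):
    """Least-squares 6-parameter (affine) transformation mapping observed
    star positions onto reference positions; the post-fit residual scatter
    is one measure of the short-term astrometric precision."""
    A = np.column_stack([obs, np.ones(len(obs))])    # design matrix [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, ref, rcond=None)   # (3, 2) -> 6 parameters
    resid = ref - A @ coef                           # registration residuals
    return coef, resid
```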
Processor Allocation for Tasks that is Robust Against Errors in Computation Time Estimates
Maciejewski, Anthony A. "Tony"
A key requirement is that a resource allocation (mapping) be robust against errors in task computation time estimates. The problem addressed is the allocation of resources to tasks so as to maximize the robustness of the allocation; this study focuses on that aspect.
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
On Global Error Estimation and Control of Finite Difference Solutions for Parabolic Equations
Kristian Debrabant; Jens Lang
2009-01-01
The aim of this paper is to extend the global error estimation and control addressed in Lang and Verwer [SIAM J. Sci. Comput. 29, 2007] for initial value problems to finite difference solutions of parabolic partial differential equations. The classical ODE approach based on the first variational equation is combined with an estimation of the PDE spatial truncation error to…
AN ERROR ESTIMATE FOR VISCOUS APPROXIMATE SOLUTIONS OF DEGENERATE PARABOLIC EQUATIONS
Soatto, Stefano
Steinar Evje. For (strongly) degenerate parabolic equations, we present a direct proof of an L^1 error estimate for viscous approximate solutions, which are weak solutions of the initial value problem for the uniformly parabolic equation ∂_t w^ε + div V(x, …
ERIC Educational Resources Information Center
Kim, ChangHwan; Tamborini, Christopher R.
2012-01-01
Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…
Autonomous error bounding of position estimates from GPS and Galileo
Temple, Thomas J. (Thomas John)
2006-01-01
In safety-of-life applications of satellite-based navigation, such as the guided approach and landing of an aircraft, the most important question is whether the navigation error is tolerable. Although differentially corrected ...
Anderson, K.K.
1994-05-01
Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporating measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
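A minimal sketch of the idea follows, in which an experiment-wide "constant" c enters the likelihood with its own prior uncertainty rather than being fixed. The toy model y = a*x + b*c, the noise levels and the variable names are all illustrative assumptions; the full algorithm in the paper additionally iterates and reports sensitivities.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model y = a*x + b*c, where c is an experiment-wide constant known
# only as c_obs +/- sigma_c from a prior experiment (illustrative values).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 50)
y = 1.5 * x + 0.8 * 2.0 + rng.normal(0.0, 0.3, x.size)
c_obs, sigma_c, sigma_y = 2.1, 0.1, 0.3

def neg_log_lik(theta):
    """Joint negative log-likelihood: data misfit plus a term treating
    c_obs as an error-prone measurement of the unknown c."""
    a, b, c = theta
    resid = y - (a * x + b * c)
    return (0.5 * np.sum(resid**2) / sigma_y**2
            + 0.5 * (c - c_obs)**2 / sigma_c**2)

fit = minimize(neg_log_lik, x0=[1.0, 1.0, c_obs])
cov = fit.hess_inv    # approximate covariance matrix of (a, b, c)
```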
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
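The two-estimate adaptive strategy can be caricatured in a few lines. The sketch below is not the paper's DG method: it uses a crude polynomial-fit indicator and a made-up decay test to decide between h-refinement and p-enrichment, purely to illustrate the control flow.

```python
import numpy as np

def indicator(f, a, b, p, n=64):
    """Crude local error indicator: RMS misfit of degree-p and degree-(p+1)
    least-squares fits on [a, b] (a stand-in for the element residual)."""
    x = np.linspace(a, b, n)
    err = lambda q: np.sqrt(np.mean((f(x) - np.polyval(np.polyfit(x, f(x), q), x))**2))
    return err(p), err(p + 1)

def hp_adapt(f, tol=1e-6, max_iter=50):
    elems = [(0.0, 1.0, 1)]                     # elements as (left, right, degree)
    for _ in range(max_iter):
        worst = max(elems, key=lambda e: indicator(f, *e)[0])
        err, err_next = indicator(f, *worst)
        if err < tol:
            break
        a, b, p = worst
        elems.remove(worst)
        if err_next < 0.1 * err:                # fast decay: smooth, enrich p
            elems.append((a, b, p + 1))
        else:                                   # slow decay: refine h
            mid = 0.5 * (a + b)
            elems += [(a, mid, p), (mid, b, p)]
    return elems

mesh = hp_adapt(lambda x: np.tanh(50 * (x - 0.3)))   # steep front at x = 0.3
```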
NASA Astrophysics Data System (ADS)
Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.
2013-08-01
Radio occultation (RO) sensing is used to probe the Earth's atmosphere in order to obtain information about its physical properties. With the main interest being the parameters of the neutral atmosphere, there is a need to correct for the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25-30 km) it is even an important error source. In climate applications this could lead to a time-dependent bias which induces spurious trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. To tackle this problem, we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001-2011. We observed that the nighttime bending angle bias stays constant over the whole period of 11 years, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from about -0.05 µrad to -0.4 µrad. This behavior paves the way to correcting the solar cycle dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. In the simulated data we also observed a similar increase in the bias from times of low to high solar activity. In this simulation we performed the climatological ionospheric correction of the bending angle data, using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction, the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.
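A minimal sketch of such a climatological correction step follows. The linear dependence on the F10.7 solar-activity proxy and its coefficients are illustrative assumptions chosen only to span the -0.05 to -0.4 µrad range quoted above, not the paper's fitted values.

```python
import numpy as np

def correct_bending_angle(alpha, daytime, f107):
    """Subtract a solar-activity-dependent climatological bias from daytime
    bending-angle profiles; nighttime profiles are left alone since their
    bias is observed to stay constant over the solar cycle."""
    if not daytime:
        return alpha
    bias = -0.05e-6 - 3.5e-9 * (f107 - 70.0)   # rad; spans ~ -0.05 to -0.4 urad
    return alpha - bias

alpha_corrected = correct_bending_angle(np.array([1.2e-2, 3.4e-3]), True, f107=180.0)
```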
An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.
ERIC Educational Resources Information Center
De Ayala, R. J.; And Others
Expected a posteriori (EAP) estimation has a number of advantages over maximum likelihood and maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than for MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors
NASA Astrophysics Data System (ADS)
Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice
2014-03-01
Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate the basic climatological features of these variables reasonably well, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the western part of Africa than for the eastern part, and for the tropics than for the northern Sahara. Interannual variation in wet-season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and, regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperature. For all variables, the multi-model ensemble (ENS) generally outperforms the individual models included in it. An overarching conclusion of this study is that some model biases vary systematically across regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by the specific analysis and/or assessment. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.
Kim, K.
1996-08-01
The accuracy of the diagnoses obtained from a nuclear power plant fault-diagnostic advisor using neural networks is addressed in this paper in order to ensure the credibility of the diagnoses. A new error estimation scheme called error estimation by series association provides a measure of the accuracy associated with the advisor's diagnoses. This error estimation is performed by a secondary neural network that is fed both the input features for and the outputs of the advisor. Error estimation by series association outperforms previous error estimation techniques in providing more accurate confidence information with considerably reduced computational requirements. The authors demonstrate the extensive usability of their method by applying it to a complicated transient recognition problem of 33 transient scenarios. The simulated transient data at different severities consist of 25 distinct transients for the Duane Arnold Energy Center nuclear power station, ranging from a main steam line break to anticipated transient without scram (ATWS) conditions. The fault-diagnostic advisor system with the secondary error prediction network is tested on the transients at various severity levels and degraded noise conditions. The results show that the error estimation scheme provides a useful measure of the validity of the advisor's output or diagnosis, with a considerable reduction in computational requirements over previous error estimation schemes.
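A toy version of the series-association idea, with made-up data and sklearn stand-ins for the plant networks (the architectures, sizes and features are placeholders, not the study's):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))                   # toy plant features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)     # toy transient class

# Primary fault-diagnostic advisor network.
advisor = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X[:1000], y[:1000])
proba = advisor.predict_proba(X)                  # advisor outputs

# Secondary network "in series": it sees the advisor's inputs AND outputs
# and learns to predict the advisor's diagnosis error.
err = (advisor.predict(X) != y).astype(float)
Z = np.hstack([X, proba])
error_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=500).fit(Z[:1000], err[:1000])
confidence = 1.0 - error_net.predict(Z[1000:])    # validity measure on held-out data
```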
ERIC Educational Resources Information Center
Shoemaker, David M.
Described and listed herein, with concomitant sample input and output, is a Fortran IV program which estimates parameters and their standard errors for parameters estimated through multiple matrix sampling. The program is an improved and expanded version of an earlier one. (Author/BJG)
Comer, K.; Gaddy, C.D.; Seaver, D.A.; Stillwell, W.G.
1985-01-01
The US Nuclear Regulatory Commission and Sandia National Laboratories sponsored a project to evaluate psychological scaling techniques for use in generating estimates of human error probabilities. The project evaluated two techniques: direct numerical estimation and paired comparisons. Expert estimates were found to be consistent across and within judges. Convergent validity was good, in comparison to estimates in a handbook of human reliability. Predictive validity could not be established because of the lack of actual relative frequencies of error (which will be a difficulty inherent in validation of any procedure used to estimate HEPs). Application of expert estimates in probabilistic risk assessment and in human factors is discussed.
Estimation of finite population parameters with auxiliary information and response error.
González, L M; Singer, J M; Stanek, E J
2014-10-01
We use a finite population mixed model that accommodates response error in the survey variable of interest and auxiliary information to obtain optimal estimators of population parameters from data collected via simple random sampling. We illustrate the method with the estimation of a regression coefficient and conduct a simulation study to compare the performance of the empirical version of the proposed estimator (obtained by replacing variance components with estimates) with that of the least squares estimator usually employed in such settings. The results suggest that when the auxiliary variable distribution is skewed, the proposed estimator has a smaller mean squared error. PMID:25089123
Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; Fuente, Luisa de la; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-01-01
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906
Brown, Richard J C
2009-08-26
The use of sequential standard addition calibration (S-SAC) can introduce systematic errors into measurement results. Whilst this error has previously been described for the determination of blank-corrected solutions, no similar treatment has been available for the quantification of analyte mass fraction in blank solutions, a crucial first step in any analytical procedure. This paper presents the theory describing the measurement of blank solutions using S-SAC, derives the correction that needs to be applied following analysis, and demonstrates the systematic error that occurs if this correction is not applied. The relative magnitudes of this bias and the precision of extrapolated measurement values are also considered. PMID:19646577
Basis set limit and systematic errors in local-orbital based all-electron DFT
NASA Astrophysics Data System (ADS)
Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias
2006-03-01
With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAOs) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, practical implementations promise efficiency on par with the best plane-wave pseudopotential codes, with noticeably higher accuracy if required: minimal-sized, effective tight-binding-like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on an equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using <= 50 basis functions per atom in all cases. We also find a clear correlation between the errors which arise from underconverged basis sets and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990); ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).
Reducing systematic errors in time-frequency resolved mode number analysis
Horváth, L; Papp, G; Maraschek, M; Schuhbeck, K H; Pokol, G I
2015-01-01
The present paper describes the effect of magnetic pick-up coil transfer functions on mode number analysis in magnetically confined fusion plasmas. Magnetic probes mounted inside the vacuum chamber are widely used to characterize the mode structure of magnetohydrodynamic modes, as, due to their relative simplicity and compact nature, several coils can be distributed over the vessel. Phase differences between the transfer functions of different magnetic pick-up coils lead to systematic errors in time- and frequency-resolved mode number analysis. This paper presents the first in-situ, end-to-end calibration of a magnetic pick-up coil system, which was carried out by using an in-vessel driving coil on ASDEX Upgrade. The effect of the phase differences in the pick-up coil transfer functions is most significant in the 50-250 kHz frequency range, where the relative phase shift between the different probes can be up to 1 radian (~60°). By applying a correction based on the transfer functions we found smaller res...
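The correction itself amounts to deconvolving each probe's calibrated complex transfer function before computing probe-to-probe phases; a minimal frequency-domain sketch (function and argument names are hypothetical) is:

```python
import numpy as np

def correct_probe_signal(sig, fs, tf_freq, tf_resp):
    """Compensate a pick-up coil's complex transfer function (measured,
    e.g., with an in-vessel driving coil) in the frequency domain, so that
    probe-to-probe phase differences reflect the mode structure only."""
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    H = np.interp(freqs, tf_freq, tf_resp)   # interpolate complex response
    return np.fft.irfft(spec / H, n=len(sig))
```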
Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy
NASA Astrophysics Data System (ADS)
Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid
2015-07-01
Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.
Goal-oriented explicit residual-type error estimates in XFEM
NASA Astrophysics Data System (ADS)
Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin
2013-08-01
A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.
NASA Technical Reports Server (NTRS)
Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.
2011-01-01
Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving space-borne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground-radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps used with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-Earth-orbiting satellites, such as microwave imagers and dual-wavelength radars such as those of the Global Precipitation Measurement (GPM) mission.
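At its simplest, the separation of systematic and random components reduces to moments of the matched satellite-reference differences; a schematic version (ignoring the conditional and multiplicative-bias analyses in the study) is:

```python
import numpy as np

def decompose_error(sat, ref):
    """Split satellite-minus-reference rainfall differences into a
    systematic (mean) bias and a random (scatter) component, after the
    reference has been quality-filtered as described above."""
    diff = sat - ref
    return diff.mean(), diff.std(ddof=1)   # (systematic bias, random error)
```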
Doolan, P; Dias, M; Collins Fekete, C; Seco, J
2014-06-01
Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5%/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose and cortical bone were -3.0%, -2.1% and -0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium).
Accounting for uncertainty in systematic bias in exposure estimates used in relative risk regression
Gilbert, E.S.
1995-12-01
In many epidemiologic studies addressing exposure-response relationships, sources of error that lead to systematic bias in exposure measurements are known to be present, but there is uncertainty in the magnitude and nature of the bias. Two approaches that allow this uncertainty to be reflected in confidence limits and other statistical inferences were developed, and are applicable to both cohort and case-control studies. The first approach is based on a numerical approximation to the likelihood ratio statistic, and the second uses computer simulations based on the score statistic. These approaches were applied to data from a cohort study of workers at the Hanford site (1944-86) exposed occupationally to external radiation; to combined data on workers exposed at Hanford, Oak Ridge National Laboratory, and Rocky Flats Weapons plant; and to artificial data sets created to examine the effects of varying sample size and the magnitude of the risk estimate. For the worker data, sampling uncertainty dominated and accounting for uncertainty in systematic bias did not greatly modify confidence limits. However, with increased sample size, accounting for these uncertainties became more important, and is recommended when there is interest in comparing or combining results from different studies.
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
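A sketch of the two-fit F-error estimate, using least-squares cubic splines on a fine and a halved knot mesh; the Richardson-style factor of 15 assumes the fourth-order error decay implied by the paper's smoothness requirement, and the knot counts are arbitrary.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def f_error_estimate(t, y, n_knots=16):
    """Estimate the F-error of a least-squares cubic-spline fit from the
    difference between fits on two meshes.  If the F-error scales like
    h^4, halving the knot count inflates it ~16x, so the difference of
    the two fits is ~15x the fine-mesh F-error."""
    fine_knots = np.linspace(t[0], t[-1], n_knots + 2)[1:-1]
    coarse_knots = np.linspace(t[0], t[-1], n_knots // 2 + 2)[1:-1]
    fine = LSQUnivariateSpline(t, y, fine_knots, k=3)
    coarse = LSQUnivariateSpline(t, y, coarse_knots, k=3)
    return (coarse(t) - fine(t)) / 15.0

t = np.linspace(0.0, 1.0, 400)
y = np.sin(8 * t) + np.random.default_rng(2).normal(0.0, 0.05, t.size)
f_err = f_error_estimate(t, y)      # pointwise F-error estimate
```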
NASA Technical Reports Server (NTRS)
Martin, D. L.; Perry, M. J.
1994-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors depends on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches
Mitchell A. Petersen
2009-01-01
In corporate finance and asset pricing empirical work, researchers are often confronted with panel data. In these data sets, the residuals may be correlated across firms or across time, and OLS standard errors can be biased. Historically, researchers in the two literatures have used different solutions to this problem. This paper examines the different methods used in the literature and
Estimating and Correcting Global Weather Model Error CHRISTOPHER M. DANFORTH
University of Maryland at College Park
The systematic model error is separated into a constant component, estimated by time-averaging the errors over several years, and a periodic (seasonal and diurnal) component. The diurnal correction (based on the leading EOFs of the analysis corrections) is also successful. Model deficiencies include inaccurate forcings and parameterizations used to represent the effect of subgrid-scale processes.
Estimating Precipitation Errors Using Spaceborne Surface Soil Moisture Retrievals
Technology Transfer Automated Retrieval System (TEKTRAN)
Limitations in the availability of ground-based rain gauge data currently hamper our ability to quantify errors in global precipitation products over data-poor areas of the world. Over land, these limitations may be eased by approaches based on interpreting the degree of dynamic consistency existin...
January, 2007 Estimating Standard Errors in Finance Panel Data Sets
Aazhang, Behnaam
In corporate finance and asset pricing empirical work, researchers are often confronted with panel data in which observations on the same firm in different years may be correlated (e.g., the bid-ask spread regressed on exchange dummies or the stock price). In these settings, corporate finance has relied on clustered standard errors, while asset pricing has used the Fama-MacBeth procedure.
On the averaging principle for one-frequency systems. Seminorm estimates for the error
Carlo Morosi; Livio Pizzocchero
2008-12-03
We extend some previous results of our work [1] on the error of the averaging method in the one-frequency case. The new error estimates apply to any separating family of seminorms on the space of the actions; they generalize our previous estimates in terms of the Euclidean norm. For example, one can use the new approach to get separate error estimates for each action coordinate. An application to a rigid body under damping is presented. In a companion paper [2], the same method will be applied to the motion of a satellite around an oblate planet.
Recovery-based error estimator for stabilized finite element methods ...
Lina Song
2014-01-28
Our numerical results show that the recovery-based estimator is more accurate than the residual estimator. The idea is to construct a recovery operator G such that G(r_h) approximates r better than r_h does in some norm.
Error estimation and adaptive mesh refinement for parallel analysis of shell structures
NASA Technical Reports Server (NTRS)
Keating, Scott C.; Felippa, Carlos A.; Park, K. C.
1994-01-01
The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and they are used to drive adaptive mesh refinement, which we demonstrate for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.
Numerical experiments on the efficiency of local grid refinement based on truncation error estimates
Syrakos, Alexandros; Bartzis, John G; Goulas, Apostolos
2015-01-01
Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reyno...
Sensitivity and error analysis algorithms for combined estimation and control systems
G. W. CARLOCK; A. P. SAGE
1975-01-01
New algorithms for sensitivity and error analysis of combined estimation and control systems are presented. The control structure is constrained a priori to be of a fixed-configuration type in which the closed-loop control is generated by the summation of a gain matrix multiplying the measured states and another gain matrix multiplying an estimate of the unmeasured states. The estimator is…
Genetic algorithms based robust frequency estimation of sinusoidal signals with stationary errors
Kundu, Debasis
In this paper, we consider the fundamental problem of frequency estimation of multiple sinusoidal signals observed with stationary errors, and propose genetic-algorithm-based robust estimators for the frequency estimation problem. In the simulation studies and real-life data analysis, it is observed…
Nonparametric variance estimation in the analysis of microarray data: a measurement error approach
Wang, Yuedong
One of the important problems in the analysis of microarray data is variance estimation (e.g., Leung & Cavalieri, 2003). Treated as a measurement error problem, naive approaches can lead to inconsistent estimators; nevertheless, the direct SIMEX method can reduce bias relative to a naive estimator.
How well can we estimate error variance of satellite precipitation data around the world?
NASA Astrophysics Data System (ADS)
Gebregiorgis, Abebe S.; Hossain, Faisal
2015-03-01
Providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating the squared-difference prediction of satellite precipitation (hereafter used synonymously with "error variance") using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. Building on a suite of recent studies that have developed error variance models, the goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and season are considered as the governing factors used to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested on the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. The regression approach yielded good performance skill, with high correlation between simulated and observed error variances. The correlation ranged from 0.46 to 0.98 during the independent validation period; in most cases (~85% of the scenarios), it was higher than 0.72. The error variance models also captured the spatial distribution of the observed error variance adequately for all study regions while producing unbiased residual errors. The approach is promising for regions where missed precipitation is not a common occurrence in satellite precipitation estimation. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, a quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
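As a schematic of the regression step, the sketch below fits a power-law error-variance model to matched satellite-gauge pairs for one topography/climate/season class; the functional form and the toy data are assumptions, as the study fits its own regression per class and product.

```python
import numpy as np
from scipy.optimize import curve_fit

def error_variance_model(rain_rate, a, b):
    """Power-law regression of the squared satellite-gauge difference
    ("error variance") on the satellite rain rate."""
    return a * rain_rate**b

rng = np.random.default_rng(3)
sat = rng.gamma(2.0, 2.0, 500)                      # toy satellite rain rates
gauge = sat + rng.normal(0.0, 0.3 * np.sqrt(sat))   # toy matched gauge values
sq_diff = (sat - gauge)**2                          # observed squared differences
params, _ = curve_fit(error_variance_model, sat, sq_diff, p0=[0.1, 1.0])
```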
Estimating True Score in the Compound Binomial Error Model
ERIC Educational Resources Information Center
Wilcox, Rand R.
1978-01-01
Several Bayesian approaches to the simultaneous estimation of the means of k binomial populations are discussed. This has particular applicability to criterion-referenced or mastery testing. (Author/JKS)
Mesoscale predictability and background error covariance estimation through ensemble forecasting
Ham, Joy L
2002-01-01
Over the past decade, ensemble forecasting has emerged as a powerful tool for numerical weather prediction. Not only does it produce the best estimate of the state of the atmosphere, it also could quantify the uncertainties ...
Effect of Berkson measurement error on parameter estimates in Cox regression models.
Küchenhoff, Helmut; Bender, Ralf; Langner, Ingo
2007-06-01
We study the effect of additive and multiplicative Berkson measurement error in the Cox proportional hazards model. By plotting the true and the observed survivor functions and the true and the observed hazard functions as a function of the exposure, one can get an idea of the effect of this type of error on the estimation of the slope parameter corresponding to the variable measured with error. As an example, we analyze the measurement error in the situation of the German Uranium Miners Cohort Study, both with graphical methods and with a simulation study. We do not see a substantial bias in the presence of small measurement error and in the rare-disease case. Even the effect of a Berkson measurement error with high variance, which is not unrealistic in our example, is a negligible attenuation of the observed effect. However, this effect is more pronounced for multiplicative measurement error. PMID:17401682
Dmitry M. Golubchikov; Konstantin E. Rumiantsev
2010-01-01
The stages of the key generation process in quantum key distribution systems are reviewed. The impact of error correction and privacy amplification on the rate of private key generation is estimated, and the dependence of the private key generation rate on the raw-key bit-error probability is found.
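The flavor of such an estimate can be seen from the standard asymptotic BB84 rate, in which error correction leaks roughly f_ec*h2(e) bits per sifted bit and privacy amplification removes another h2(e). This textbook formula is a generic illustration, not the specific system model of the paper.

```python
import numpy as np

def h2(p):
    """Binary entropy function."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def bb84_key_rate(qber, f_ec=1.1):
    """Asymptotic private-key fraction per sifted bit: one h2 term for
    error-correction leakage (f_ec = 1 recovers the Shor-Preskill rate)
    and one for privacy amplification."""
    return np.maximum(1.0 - f_ec * h2(qber) - h2(qber), 0.0)

rates = bb84_key_rate(np.array([0.01, 0.05, 0.11]))   # rate -> 0 near 11% QBER
```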
Robust Estimation When More Than One Variable Per Equation of Condition Has Error
Jefferys, William
An equation of condition expresses an observed quantity on the left-hand side as a function of "independent variables," that is, parameters, on the right. Thus, when more than one variable in an equation of condition has error, the results will be affected…
ERROR ESTIMATES FOR FINITE DIFFERENCE METHODS FOR A WIDE-ANGLE `PARABOLIC' EQUATION
Akrivis, Georgios
G. D. Akrivis. We consider the initial- and boundary-value problem for a third-order p.d.e., a wide-angle 'parabolic' equation frequently used in underwater acoustics. Key words: wide-angle 'parabolic' equation, underwater acoustics, finite difference error estimates, interface…
Quantifying the error in estimated transfer functions with application to model order selection
Graham C. Goodwin; Michel Gevers; Brett Ninness
1992-01-01
Previous results on estimating errors or error bounds on identified transfer functions have relied on prior assumptions about the noise and the unmodeled dynamics. This prior information took the form of parameterized bounding functions or parameterized probability density functions, in the time or frequency domain with known parameters. It is shown that the parameters that quantify this prior information can
Scott Creel; Goran Spong; Jennifer L. Sands; Jay Rotella; Janet Zeigle; Lawrence Joe; Kerry M. Murphy; Douglas Smith
Abstract: Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park,
Multiclass Bayes error estimation by a feature space sampling technique
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.
1979-01-01
A general Gaussian M-class, N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class, 4-feature discrimination problem with previously reported results, and to 4-class, 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
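For comparison, the same quantity can be estimated by brute-force Monte Carlo from the class statistics alone; the sketch below is a sampling alternative to the paper's semi-analytic integration (toy statistics, hypothetical function name).

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def bayes_error_mc(means, covs, priors, n=100_000, seed=0):
    """Monte Carlo estimate of the minimum (Bayes) probability of error for
    an M-class Gaussian problem: sample each class, classify by maximum
    posterior, and average per-class error rates weighted by the priors."""
    rng = np.random.default_rng(seed)
    err = 0.0
    for k, (m, c, p) in enumerate(zip(means, covs, priors)):
        x = rng.multivariate_normal(m, c, size=int(n * p))
        post = np.stack([q * mvn.pdf(x, mi, ci)
                         for q, mi, ci in zip(priors, means, covs)])
        err += p * np.mean(post.argmax(axis=0) != k)
    return err

# Toy 2-class, 4-feature problem
print(bayes_error_mc([np.zeros(4), np.ones(4)],
                     [np.eye(4), 2.0 * np.eye(4)], [0.5, 0.5]))
```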
A Systematic Review of Software Development Cost Estimation Studies
The studies are classified by research topic, estimation approach, research approach, study context and data set. Based on the review, we recommend, among other things, searching a broader set of journals when completeness is essential, conducting more research on basic software cost estimation topics, conducting more studies of software cost estimation in real-life settings, and conducting more studies…
Schuckers, Michael E.
Estimation and sample size calculations for correlated binary error rates of biometric identification devices. Michael E. Schuckers, 211 Valentine Hall, Department of Mathematics, Saint Lawrence University. Keywords: beta-binomial, intra-cluster correlation. Background: Biometric Identification Devices (BIDs) compare…
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
ERIC Educational Resources Information Center
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
Locatelli, R.
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...
Fading MIMO Relay Channels with Channel Estimation Error Bengi Aygun Alkan Soysal
Soysal, Alkan
Bengi Aygün and Alkan Soysal (aygun@bahcesehir.edu.tr, alkan.soysal@bahcesehir.edu.tr). Abstract: In this paper, we consider a full-duplex, decode-and-…
Zollanvari, Amin
2012-02-14
Error estimation must be used to find the accuracy of a designed classifier, an issue that is critical in biomarker discovery for disease diagnosis and prognosis in genomics and proteomics. This dissertation is concerned with the analytical...
Improved estimates of coordinate error for molecular replacement
Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.
2013-01-01
The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21,000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates. PMID:24189232
Spatio-temporal Error on the Discharge Estimates for the SWOT Mission
NASA Astrophysics Data System (ADS)
Biancamaria, S.; Alsdorf, D. E.; Andreadis, K. M.; Clark, E.; Durand, M.; Lettenmaier, D. P.; Mognard, N. M.; Oudin, Y.; Rodriguez, E.
2008-12-01
The Surface Water and Ocean Topography (SWOT) mission will measure two key quantities over rivers: water surface elevation and slope. Water surface elevation from SWOT will have a vertical accuracy, when averaged over approximately one square kilometer, on the order of centimeters. Over reaches 1-10 km long, SWOT slope measurements will be accurate to microradians. Elevation (depth) and slope offer the potential to produce discharge as a derived quantity. Estimates of instantaneous and temporally integrated discharge from SWOT data will also contain a certain degree of error. Two primary sources of measurement error exist. The first is the temporal sub-sampling of water elevations. For example, SWOT will sample some locations twice in the 21-day repeat cycle. If these two overpasses occurred during flood stage, an estimate of monthly discharge based on these observations would be much higher than the true value. Likewise, if estimating maximum or minimum monthly discharge, in some cases SWOT may miss those events completely. The second source of measurement error results from the limits on the instrument's ability to measure the magnitude of the water surface elevation accurately. How this error affects discharge estimates depends on errors in the model used to derive discharge from water surface elevation. We present a global distribution of estimated relative errors in mean annual discharge based on a power-law relationship between stage and discharge. Additionally, relative errors in integrated and average instantaneous monthly discharge associated with temporal sub-sampling over the proposed orbital tracks are presented for several river basins.
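The temporal sub-sampling component is easy to bound by experiment: sample a daily hydrograph at the satellite revisit interval, average over all overpass phases, and compare with the true monthly mean. The sketch below does exactly that with a toy flood pulse; the revisit interval is illustrative (some locations see about two passes per 21-day cycle).

```python
import numpy as np

def subsampling_error(q_daily, revisit_days=10.5, n_phases=21):
    """Mean relative error in the monthly mean discharge caused purely by
    sampling the daily series at a fixed revisit interval, averaged over
    all possible overpass phases."""
    errs = []
    for phase in np.linspace(0.0, revisit_days, n_phases, endpoint=False):
        idx = np.arange(phase, len(q_daily), revisit_days).astype(int)
        errs.append(abs(q_daily[idx].mean() - q_daily.mean()) / q_daily.mean())
    return np.mean(errs)

t = np.arange(30)
q = 100.0 + 400.0 * np.exp(-0.5 * ((t - 12) / 2.0)**2)   # baseflow + flood pulse
print(subsampling_error(q))
```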
Errors of level in spinal surgery: an evidence-based systematic review.
Longo, U G; Loppini, M; Romeo, G; Maffulli, N; Denaro, V
2012-11-01
Wrong-level surgery is a unique pitfall in spinal surgery and is part of the wider field of wrong-site surgery. Wrong-site surgery affects both patients and surgeons and has received much media attention. We performed this systematic review to determine the incidence and prevalence of wrong-level procedures in spinal surgery and to identify effective prevention strategies. We retrieved 12 studies reporting the incidence or prevalence of wrong-site surgery and that provided information about prevention strategies. Of these, ten studies were performed on patients undergoing lumbar spine surgery and two on patients undergoing lumbar, thoracic or cervical spine procedures. A higher frequency of wrong-level surgery in lumbar procedures than in cervical procedures was found. Only one study assessed preventative strategies for wrong-site surgery, demonstrating that current site-verification protocols did not prevent about one-third of the cases. The current literature does not provide a definitive estimate of the occurrence of wrong-site spinal surgery, and there is no published evidence to support the effectiveness of site-verification protocols. Further prevention strategies need to be developed to reduce the risk of wrong-site surgery. PMID:23109637
Systematic and large temperature errors in a dynamic downscaling of atmospheric flow
NASA Astrophysics Data System (ADS)
Massad, Andréa; Ólafsson, Haraldur; Petersen, Guðrún Nína; Ágústsson, Hálfdán; Rögnvaldsson, Ólafur
2015-04-01
Years of atmospheric flow over Iceland have been simulated with the WRF model, using boundary data from the ECMWF. In general, the flow is well reproduced, but errors remain. By comparison with a multitude of observations, the largest errors have been analysed in terms of the physical or numerical processes that appear to go wrong. Many of the largest temperature errors are associated with an incorrect representation of the surface of the earth during the snow-melting season. Another characteristic of large errors is the presence of misplaced large horizontal temperature gradients in coastal areas. Incorrect vertical mixing gave surprisingly few large errors. There are some errors due to incorrect timing of incoming weather systems at the boundaries, but no large errors can be traced to wrongly reproduced temperatures of air masses advected into the area.
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Energy Aware Sensor Group Scheduling to Minimise Estimated Error from Noisy Sensor Measurements
Siddeswara Mayura Guru; Suhinthan Maheswararajah
In wireless sensor network applications, sensor measurements are corrupted by noise resulting from harsh environmental conditions, hardware and transmission errors. Minimising the impact of noise in an energy-constrained sensor network is a challenging task. We study the problem of estimating environmental phenomena (e.g., temperature, humidity, pressure) based on noisy sensor measurements to minimise the estimation error. An environmental phenomenon
Estimated errors in retrievals of ocean parameters from SSMIS
NASA Astrophysics Data System (ADS)
Mears, Carl A.; Smith, Deborah K.; Wentz, Frank J.
2015-06-01
Measurements made by microwave imaging radiometers can be used to retrieve several environmental parameters over the world's oceans. In this work, we calculate the uncertainty in retrievals obtained from the Special Sensor Microwave Imager Sounder (SSMIS) instrument caused by uncertainty in the input parameters to the retrieval algorithm. This work applies to the version 7 retrievals of surface wind speed, total column water vapor, total column cloud liquid water, and rain rate produced by Remote Sensing Systems. Our numerical approach allows us to calculate an estimated input-induced uncertainty for every valid retrieval during the SSMIS mission. Our uncertainty estimates are consistent with the differences observed between SSMIS wind speed and vapor measurements made by SSMIS on the F16 and F17 satellites, supporting their accuracy. The estimates do not explain the larger differences between the SSMIS measurements of wind speed and vapor and other sources of these data, consistent with the influence of more sources of uncertainty.
Improved estimates of coordinate error for molecular replacement
Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.
2013-11-01
A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.
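A minimal sketch of the perturbation strategy described above. The `ermsd_estimate` surrogate is hypothetical: it only encodes the qualitative trends reported in the abstract (r.m.s.d. falls with sequence identity, grows with protein size) and is not the function fitted by Oeffner et al.; `run_mr` stands in for whatever wrapper the caller uses to launch an MR trial and return the top-peak TFZ.

```python
import numpy as np

def ermsd_estimate(seq_identity: float, n_residues: int) -> tuple:
    """Hypothetical eRMSD model returning (mean, sd).

    Not the published Phaser function; an illustrative surrogate that only
    reflects the reported trends (higher identity -> lower r.m.s.d.,
    larger proteins -> larger r.m.s.d.)."""
    mean = 0.4 + 1.6 * (1.0 - seq_identity) + 0.1 * np.log10(n_residues / 100.0)
    return max(mean, 0.3), 0.3  # assumed spread of the refined-variance distribution

def mr_search_with_perturbation(run_mr, seq_identity, n_residues, tfz_cutoff=8.0):
    """Retry MR with the r.m.s.d. perturbed in steps of the distribution's sd."""
    mean, sd = ermsd_estimate(seq_identity, n_residues)
    for k in (0, -1, +1, -2, +2):          # mean first, then +/- 1 sd, +/- 2 sd
        tfz = run_mr(rmsd=mean + k * sd)   # run_mr: caller-supplied MR wrapper
        if tfz >= tfz_cutoff:              # TFZ >= 8: reliable success indicator
            return mean + k * sd, tfz
    return None, None
```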
Error threshold estimates for surface code with loss of qubits
NASA Astrophysics Data System (ADS)
Ohzeki, Masayuki
2012-06-01
We estimate optimal thresholds for surface codes in the presence of loss via an analytical method developed in statistical physics. The optimal threshold for the surface code is closely related to a special critical point in a finite-dimensional spin glass, which is a disordered magnetic material. We compare our estimates to the heuristic numerical results reported in earlier studies. Further application of our method to the depolarizing channel, a natural generalization of the noise model, unveils its wider robustness even with loss of qubits.
Miller, Brandon; Littenberg, Tyson B; Farr, Ben
2015-01-01
Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic followup facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the tradeoff between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (STF2), in parameter estimation with lalinference_mcmc. Though efficient, the STF2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically-motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, ...
EIA Corrects Errors in Its Drilling Activity Estimates Series
1998-01-01
The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.
Implicit polynomial representation through a fast fitting error estimation.
Rouhani, Mohammad; Sappa, Angel Domingo
2012-04-01
This paper presents a simple distance estimation for implicit polynomial fitting. It is computed as the height of a simplex built between the point and the surface (i.e., a triangle in 2-D or a tetrahedron in 3-D), which is used as a coarse but reliable estimation of the orthogonal distance. The proposed distance can be described as a function of the coefficients of the implicit polynomial. Moreover, it is differentiable and has a smooth behavior. Hence, it can be used in any gradient-based optimization. In this paper, its use in a Levenberg-Marquardt framework is shown, which is particularly suited to nonlinear least-squares problems. The proposed estimation is a generalization of the gradient-based distance estimation, which is widely used in the literature. Experimental results, on both 2-D and 3-D data sets, are provided. Comparisons with state-of-the-art techniques are presented, showing the advantages of the proposed approach. PMID:21965211
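The abstract notes that the simplex-height distance generalizes the widely used gradient-based first-order distance |f(p)|/||grad f(p)||. The sketch below fits an implicit quadratic to noisy circle points with that gradient-based distance inside a Levenberg-Marquardt loop; it conveys the structure of the approach without reproducing the simplex construction itself, and the fixed coefficient c4 = -1 is an illustrative normalization to exclude the trivial all-zero polynomial.

```python
import numpy as np
from scipy.optimize import least_squares

# Noisy points near the circle x^2 + y^2 - 1 = 0.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 200)
pts = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(0, 0.02, (200, 2))

def residuals(c, pts):
    # Implicit quadratic f(x, y) = c0*x^2 + c1*y^2 + c2*x + c3*y + c4.
    x, y = pts[:, 0], pts[:, 1]
    f = c[0] * x**2 + c[1] * y**2 + c[2] * x + c[3] * y + c[4]
    gx = 2 * c[0] * x + c[2]
    gy = 2 * c[1] * y + c[3]
    # Gradient-based first-order distance |f| / ||grad f||, which the paper's
    # simplex-height estimate generalizes.
    return f / np.sqrt(gx**2 + gy**2 + 1e-12)

# Fix c4 = -1 to remove the trivial scale ambiguity, then fit with LM.
fit = least_squares(lambda c: residuals(np.append(c, -1.0), pts),
                    x0=[1.0, 1.0, 0.0, 0.0], method="lm")
print("fitted coefficients:", np.append(fit.x, -1.0))  # expect roughly (1, 1, 0, 0, -1)
```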
Short communication: DNA sequence error rates in GenBank records estimated … as a reference
Wesche, Philipp L.; Gaffney, Daniel J.; Keightley, Peter D.
2004-01-01
… such as in estimating the rate of nucleotide substitution or recombination (Clark and Whittam, 1992). DNA Sequence, October/December 2004, Vol. 15 (5/6), pp. 362-364.
Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [Univ de Lyon]; Vassilevski, Yuri [Los Alamos National Laboratory]
2009-01-01
We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of the error is proportional to N_h^(-1/2), which are the optimal asymptotics. The methodology is verified with numerical experiments.
Ana M. Bianco; Marta García Ben; Víctor J. Yohai
2003-01-01
Abstract In this paper we propose a class of robust estimates for regression with errors with log-gamma distribution. These estimates, which are a natural extension of the MM-estimates proposed by Yohai for ordinary regression, may combine simultaneously high asymptotically e?ciency and a high breakdown point. This paper focuses on the log-gamma regression model, but the estimates can be ex- tended
Takafumi Kanamori; Ichiro Takeuchi
2006-01-01
In this paper we propose a new estimator for regression problems in the form of a linear combination of quantile regressions. The proposed estimator is helpful for conditional mean estimation when the error distribution is asymmetric and heteroscedastic. It is shown that the proposed estimator is consistent under the heteroscedastic regression model Y = μ(X) + σ(X)·e, where X is a vector of covariates,
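A rough sketch of the idea under simple assumptions: fit several quantile regressions and combine them linearly to target the conditional mean. Equal weights over a symmetric quantile grid are used purely for illustration; the paper derives weights adapted to the asymmetric error distribution, which this sketch does not reproduce.

```python
import numpy as np
import statsmodels.api as sm

# Heteroscedastic, asymmetric errors: Y = mu(X) + sigma(X) * e, e skewed, zero mean.
rng = np.random.default_rng(2)
x = rng.uniform(0, 2, 500)
e = rng.exponential(1.0, 500) - 1.0
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * e
X = sm.add_constant(x)

# Fit several quantile regressions and average them (illustrative, not optimal).
quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]
preds = [sm.QuantReg(y, X).fit(q=q).predict(X) for q in quantiles]
combined = np.mean(preds, axis=0)
print("RMSE vs true conditional mean:",
      np.sqrt(np.mean((combined - (1.0 + 2.0 * x)) ** 2)))
```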
Explicit a posteriori error estimates for eigenvalue analysis of heterogeneous elastic structures.
Walsh, Timothy Francis; Reese, Garth M.; Hetmaniuk, Ulrich L.
2005-07-01
An a posteriori error estimator is developed for the eigenvalue analysis of three-dimensional heterogeneous elastic structures. It constitutes an extension of a well-known explicit estimator to heterogeneous structures. We prove that our estimates are independent of the variations in material properties and independent of the polynomial degree of finite elements. Finally, we study numerically the effectivity of this estimator on several model problems.
O'Brien, S.; Azmy, Y. Y.
2013-07-01
When calculating numerical solutions of the neutron transport equation it is important to have a measure of the accuracy of the solution. As the true solution is generally not known, a suitable estimate of the error must be made. The steady state transport equation possesses discretization errors in all its independent variables: angle, energy and space. In this work only spatial discretization errors are considered. An exact transport solution, in which the degree of regularity of the exact flux across the singular characteristic is controlled, is manufactured to determine the numerical solution's true discretization error. This solution is then projected onto a Legendre polynomial space in order to form an exact solution on the same basis space as the Discontinuous Galerkin Finite Element Method (DGFEM) numerical solution, enabling computation of the true error. Over a series of test problems the true error is compared to the error estimated by the Ragusa and Wang (RW), residual source (LER) and cell discontinuity (JD) estimators. The validity and accuracy of the considered estimators are primarily assessed by considering the effectivity index and the global L2 norm of the error. In general, RW excels at approximating the true error distribution but usually under-estimates its magnitude; the LER estimator emulates the true error distribution but frequently over-estimates the magnitude of the true error; the JD estimator poorly captures the true error distribution and generally under-estimates the error about singular characteristics but over-estimates it elsewhere. (authors)
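The effectivity index used to judge the estimators is simply the ratio of the estimated to the true error norm: values above one indicate over-estimation, below one under-estimation. A minimal sketch with hypothetical cell-wise error values:

```python
import numpy as np

def effectivity_index(estimated_error, true_error):
    """Ratio of estimated to true error norms; 1.0 means a perfect estimator."""
    return np.linalg.norm(estimated_error) / np.linalg.norm(true_error)

# Illustrative cell-wise error fields (hypothetical values, not from the study):
true_err = np.array([1e-3, 5e-4, 2e-3, 8e-4])
est_err = np.array([9e-4, 6e-4, 1.5e-3, 1e-3])
print(f"effectivity index: {effectivity_index(est_err, true_err):.2f}")
```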
Soil Moisture Background Error Covariance Estimation in a Land-Atmosphere Coupled Model
NASA Astrophysics Data System (ADS)
Lin, L. F.; Ebtehaj, M.; Flores, A. N.; Wang, J.; Bras, R. L.
2014-12-01
The objective of this study is to estimate the space-time dynamics of the soil moisture background error in a coupled land-atmosphere model, for a better understanding of land-atmosphere interactions and soil moisture dynamics through data assimilation. To this end, we conducted forecast experiments for eight calendar years, 2006 to 2013, using the Weather Research and Forecasting (WRF) model coupled with the Noah land surface model, and estimated the background error statistics based on the National Meteorological Center (NMC) methodology. All the WRF-Noah simulations were initialized with the National Centers for Environmental Prediction (NCEP) FNL operational global analysis dataset. In our study domain, covering the contiguous United States, the results show that the soil moisture background error exhibits strong seasonal and regional patterns, with the highest magnitude occurring during the summer at the top soil layer over most regions of the Great Plains. It is also revealed that the soil moisture background errors are strongly biased in some regions, especially the Southeastern United States, and that the impact of bias on the magnitude of the error increases from the top to the bottom soil layer. Moreover, we found that the estimated background error is not sensitive to the selection of the WRF physics schemes for microphysics, cumulus parameterization, and the land surface model. Overall, this study enhances our understanding of the space-time variability of the soil moisture background error and promises more accurate land-surface state estimates via variational data assimilation.
How Well Can We Estimate Error Variance of Satellite Precipitation Data Around the World?
NASA Astrophysics Data System (ADS)
Gebregiorgis, A. S.; Hossain, F.
2014-12-01
The traditional approach to measuring precipitation by placing a probe on the ground will likely never be adequate or affordable in most parts of the world. Fortunately, satellites today provide a continuous global bird's-eye view (above ground) at any given location. However, the usefulness of such precipitation products for hydrological applications depends on their error characteristics. Thus, providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating satellite precipitation error variance using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. The goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and season are considered as the governing factors used to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested in the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, the quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
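A hedged sketch of the central fitting step: regress observed squared errors on satellite rain rate with a nonlinear model. The power-law form sigma^2(r) = a + b*r^c and all synthetic numbers are assumptions standing in for the study's stratified (topography/climate/season) regressions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical pairs: satellite rain rate (mm/h) and squared error vs gauges.
rng = np.random.default_rng(3)
rate = rng.gamma(2.0, 2.0, 400)
err_var_true = 0.5 + 0.8 * rate**1.3            # assumed power-law growth
sq_err = err_var_true * rng.chisquare(1, 400)   # noisy squared-error samples

# Fit the error-variance model sigma^2(r) = a + b * r^c.
model = lambda r, a, b, c: a + b * r**c
params, _ = curve_fit(model, rate, sq_err, p0=[1.0, 1.0, 1.0], maxfev=10000)
print("fitted (a, b, c):", np.round(params, 2))
```

Fitting separate (a, b, c) triples per terrain/climate/season stratum, then transferring them to a geophysically similar ungauged region, mirrors the transferability test described in the abstract.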
Heath, G.
2012-06-01
This PowerPoint presentation, to be given at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses the systematic review and harmonization of life cycle GHG emission estimates for electricity generation technologies.
A mission to test the Pioneer anomaly: estimating the main systematic effects
O. Bertolami; J. Paramos
2007-06-20
We estimate the main systematic effects relevant in a mission to test and characterize the Pioneer anomaly through the flight formation concept, by launching probing spheres from a mother spacecraft and tracking their motion via laser ranging.
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min
2013-08-01
Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, and over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network and four statistical techniques: the Nonlinear Autoregressive with eXogenous input (NARX) network, Gamma test, cross-validation, Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (root mean square error (RMSE): 95.11 μg l-1 for training; 106.13 μg l-1 for validation) outperforms the BPNN (RMSE: 121.54 μg l-1 for training; 143.37 μg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to regional estimation in study areas of interest and to the estimation of missing, hazardous or costly data to facilitate water resources management.
Measurement Error Webinar Series: Introduction to the problem of measurement error in dietary data
Identify random and systematic errors that may occur in dietary assessment and their impact on estimates of dietary intakes. Describe statistical concepts underpinning approaches to correcting for measurement error in self-report dietary intake data.
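A small simulation of the classical measurement-error effect the webinar targets: random error in self-reported intake attenuates an estimated diet-outcome association by the reliability ratio, and dividing by that ratio undoes the attenuation. In practice the reliability ratio must itself be estimated from a calibration substudy; here the true intakes are known only because the data are simulated, and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
true_intake = rng.normal(2000, 400, n)              # "true" daily intake (kcal)
reported = true_intake + rng.normal(-150, 300, n)   # self-report: systematic + random error
outcome = 0.01 * true_intake + rng.normal(0, 5, n)  # outcome driven by true intake

# Naive regression on error-prone reports attenuates the slope by the
# reliability ratio lambda = var(true) / var(reported); a constant bias
# shifts the intercept but not the slope.
naive_slope = np.cov(reported, outcome)[0, 1] / np.var(reported, ddof=1)
lam = np.var(true_intake, ddof=1) / np.var(reported, ddof=1)
print(f"naive slope: {naive_slope:.4f}, corrected (naive/lambda): {naive_slope / lam:.4f}")
```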
NASA Astrophysics Data System (ADS)
Jiang, Li-ping
2008-04-01
Error analyses are made of ACR (Astrometric Calibration Regions along the celestial equator) and CMC13 (Carlsberg Meridian Catalogue 13), two astrometric catalogues compiled on the basis of CCD drift-scanning observations and published, respectively, before and after 2000. Through a comparison with UCAC2 (the second U.S. Naval Observatory CCD Astrograph Catalogue), the form and size of the errors are analyzed numerically. The main and possible sources of the errors are analyzed from the standpoint of observing mode and data reduction. It is found that there is an evident magnitude difference between the ACR and CMC13 in the equatorial direction, and that there exists a periodic variation close to the CCD field of view in right ascension and a systematic variation close to the size of the reduction zone in declination.
NASA Astrophysics Data System (ADS)
Santamaría, L.; Ohme, F.; Ajith, P.; Brügmann, B.; Dorband, N.; Hannam, M.; Husa, S.; Mösta, P.; Pollney, D.; Reisswig, C.; Robinson, E. L.; Seiler, J.; Krishnan, B.
2010-09-01
We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical relativity based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criterion based on minimizing these errors; we find that the dominant source of error lies in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational-wave searches.
Estimating misclassification error: a closer look at cross-validation based methods
2012-01-01
Background To estimate a classifier's error in predicting future observations, bootstrap methods have been proposed as reduced-variation alternatives to traditional cross-validation (CV) methods based on sampling without replacement. Monte Carlo (MC) simulation studies aimed at estimating the true misclassification error conditional on the training set are commonly used to compare CV methods. We conducted an MC simulation study to compare a new method of bootstrap CV (BCV) to k-fold CV for estimating classification error. Findings For the low-dimensional conditions simulated, the modest positive bias of k-fold CV contrasted sharply with the substantial negative bias of the new BCV method. This behavior was corroborated using a real-world dataset of prognostic gene-expression profiles in breast cancer patients. Our simulation results demonstrate some extreme characteristics of variance and bias that can occur due to a fault in the design of CV exercises aimed at estimating the true conditional error of a classifier, and that appear not to have been fully appreciated in previous studies. Although CV is a sound practice for estimating a classifier's generalization error, using CV to estimate the fixed misclassification error of a trained classifier conditional on the training set is problematic. While MC simulation of this estimation exercise can correctly represent the average bias of a classifier, it will overstate the between-run variance of the bias. Conclusions We recommend k-fold CV over the new BCV method for estimating a classifier's generalization error. The extreme negative bias of BCV is too high a price to pay for its reduced variance. PMID:23190936
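For readers who want the recommended baseline in code, this is a minimal k-fold CV estimate of misclassification error using scikit-learn on synthetic data; per the paper's conclusion, it should be read as an estimate of generalization error, not of the conditional error of one trained classifier.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000)

# 10-fold CV accuracy, converted to a misclassification-error estimate.
acc = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV error: {1 - acc.mean():.3f} (fold std: {acc.std():.3f})")
```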
Corrected-loss estimation for quantile regression with covariate measurement errors
Wang, Huixia Judy; Stefanski, Leonard A.; Zhu, Zhongyi
2012-01-01
We study estimation in quantile regression when covariates are measured with errors. Existing methods require stringent assumptions, such as spherically symmetric joint distribution of the regression and measurement error variables, or linearity of all quantile functions, which restrict model flexibility and complicate computation. In this paper, we develop a new estimation approach based on corrected scores to account for a class of covariate measurement errors in quantile regression. The proposed method is simple to implement. Its validity requires only linearity of the particular quantile function of interest, and it requires no parametric assumptions on the regression error distributions. Finite-sample results demonstrate that the proposed estimators are more efficient than the existing methods in various models considered. PMID:23843665
Systematic errors in respiratory gating due to intrafraction deformations of the liver
Siebenthal, Martin von; Szekely, Gabor [Computer Vision Laboratory, ETH Zurich, 8092 Zurich (Switzerland)]; Lomax, Antony J. [Division of Radiation Medicine, Paul Scherrer Institut, 5232 Villigen PSI (Switzerland)]; Cattin, Philippe C. [Computer Vision Laboratory, ETH Zurich, 8092 Zurich (Switzerland)]
2007-09-15
This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift.
Improved Margin of Error Estimates for Proportions in Business: An Educational Example
ERIC Educational Resources Information Center
Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael
2015-01-01
This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions, particularly in extreme samples and small…
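The adjusted Wald interval itself is short enough to state in code: add z^2/2 pseudo-successes and z^2 pseudo-trials, then apply the usual Wald formula. A sketch:

```python
import math

def adjusted_wald_ci(successes: int, n: int, z: float = 1.96):
    """Agresti & Coull 'adjusted Wald' interval for a binomial proportion."""
    n_adj = n + z**2                       # add z^2 pseudo-trials
    p_adj = (successes + z**2 / 2) / n_adj # and z^2 / 2 pseudo-successes
    margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - margin, p_adj + margin

# Extreme small sample where the ordinary Wald interval collapses to width zero:
print(adjusted_wald_ci(0, 10))  # ordinary Wald would give (0.0, 0.0)
```

The zero-successes case shows the benefit: the ordinary Wald interval degenerates, while the adjusted interval retains a sensible width.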
Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric, or kernel, estimation of the item response curve (IRC) is a concern both theoretically and operationally. This estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor, because the observed scores are contaminated by measurement error. In this study, we investigate…
Error backpropagation neural calibration and Kalman filter position estimation for touch panels
Chih-Chang Lai; Ching-Chih Tsai; Han-Chang Lin
2004-01-01
This work develops a methodology and technique for the calibration and dynamic touching-position estimation of touch panels using error backpropagation neural networks (EBPNN) and a Kalman filter. A neural-based calibration method is presented to determine the nonlinear mapping relationships between the measured and known touch points, and then calibrate their positions in real time. In order to obtain position estimation
St Andrews, University of
Eccentricity Error Correction for Automated Estimation of Polyethylene Wear after Total Hip Replacement
Acetabular wear of total hip replacements can be estimated from radiographs. Wear is a major cause of aseptic loosening of total hip replacements, leading to costly revision surgery and patient
NASA Astrophysics Data System (ADS)
Jeon, H.; Shin, J. U.; Myung, H.
2013-04-01
Visually servoed paired structured light system (ViSP) has been found to be useful in estimating 6-DOF relative displacement. The system is composed of two screens facing each other, each with one or two lasers, a 2-DOF manipulator and a camera. The displacement between the two sides is estimated by observing the positions of the projected laser beams and the rotation angles of the manipulators. To apply the system to massive structures, the whole area is partitioned and a ViSP module is placed in each partition in a cascaded manner. The estimated displacement between adjoining ViSPs is combined with the next partition so that the entire movement of the structure can be estimated. The multiple ViSPs, however, have a major problem: the estimation error propagates through the partitions. Therefore, a displacement estimation error back-propagation (DEEP) method is proposed which uses a Newton-Raphson or gradient-descent formulation inspired by the error back-propagation algorithm. In this method, the estimated displacement from the ViSP is updated using the error back-propagated from a fixed position. To validate the performance of the proposed method, various simulations and experiments have been performed. The results show that the proposed method significantly reduces the propagation error throughout the multiple modules.
Facial motion parameter estimation and error criteria in model-based image coding
NASA Astrophysics Data System (ADS)
Liu, Yunhai; Yu, Lu; Yao, Qingdong
2000-04-01
Model-based image coding has been given extensive attention due to its high subjective image quality and low bit rates. But the estimation of object motion parameters is still a difficult problem, and there is no proper error criterion for quality assessment that is consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives motion parameter error criteria. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eyes and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes of the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach for estimating motion parameter error based on the quality of the reconstructed image is suggested, with an area error function and an error function of the contour transition-turn rate used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.
J. N. Sheahan
1988-01-01
In this paper, we study the linear model where the distribution of the i.i.d. errors has a normal centre with unknown scale and completely arbitrary tails. Consistent and asymptotically normal robust estimators of the regression parameter vector are obtained by: (i) robust M-estimation of the regression vector only, and by (ii) simultaneous robust M-estimation of the regression vector and the
Estimation of a Gamma Scale Parameter Under Asymmetric Squared Log Error Loss
N. Sanjari Farsipour; H. Zakerzadeh
2005-01-01
Estimation of a scale parameter under the squared log error loss function is considered, with restriction to the principles of invariance and risk unbiasedness. An explicit form of the minimum-risk scale-equivariant estimator under this loss is obtained. The admissibility and inadmissibility of a class of linear estimators of the form (cT + d) are considered, where T follows a gamma distribution with an
Asymmetric Errors in Linear Models: Estimation—Theory and Monte Carlo
J. C. Lind; K. L. Mehra; J. N. Sheahan
1992-01-01
In the linear model where the distribution of the i.i.d. errors is completely unknown outside a specified interval, an asymptotically optimal robust M-estimator of the regression parameter vector is constructed. We study a variety of initial values for the iterative computation of this estimator, its finite sample properties are investigated by simulation, and it is compared with estimators that appear
Doherty, Carole; Stavropoulou, Charitini
2012-07-01
This systematic review identifies the factors that both support and deter patients from being willing and able to participate actively in reducing clinical errors. Specifically, we add to our understanding of the safety culture in healthcare by engaging with the call for more focus on the relational and subjective factors which enable patients' participation (Iedema, Jorm, & Lum, 2009; Ovretveit, 2009). A systematic search of six databases, ten journals and seven healthcare organisations' web sites resulted in the identification of 2714 studies of which 68 were included in the review. These studies investigated initiatives involving patients in safety or studies of patients' perspectives of being actively involved in the safety of their care. The factors explored varied considerably depending on the scope, setting and context of the study. Using thematic analysis we synthesized the data to build an explanation of why, when and how patients are likely to engage actively in helping to reduce clinical errors. The findings show that the main factors for engaging patients in their own safety can be summarised in four categories: illness; individual cognitive characteristics; the clinician-patient relationship; and organisational factors. We conclude that illness and patients' perceptions of their role and status as subordinate to that of clinicians are the most important barriers to their involvement in error reduction. In sum, patients' fear of being labelled "difficult" and a consequent desire for clinicians' approbation may cause them to assume a passive role as a means of actively protecting their personal safety. PMID:22541799
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
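A toy sketch of the reduction pipeline under stated simplifications: rows are clustered (KMeans on the dense rows stands in for the geographic ray-path clustering), each cluster's submatrix is decomposed by SVD, the data are projected onto the subset's left singular vectors, and components with SNR below a threshold are dropped. The noise level, cluster count and threshold are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)

# Toy sparse tomographic system A x = d (rows ~ ray-path sensitivity kernels).
A = sparse_random(600, 200, density=0.05, format="csr", random_state=5)
d = A @ rng.normal(size=200) + rng.normal(0, 0.1, 600)  # data, noise sigma = 0.1

# 1. Cluster rows into subsets (proxy for geographically close ray paths).
A_dense = A.toarray()
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(A_dense)

sigma, snr_threshold = 0.1, 1.0
reduced_rows, reduced_data = [], []
for c in range(6):
    sub = A_dense[labels == c]
    U, s, Vt = np.linalg.svd(sub, full_matrices=False)  # 2. SVD of the subset
    proj = U.T @ d[labels == c]                 # 3. project data onto the subspace
    keep = np.abs(proj) / sigma >= snr_threshold  # 4. reject low-SNR components
    reduced_rows.append(np.diag(s[keep]) @ Vt[keep])
    reduced_data.append(proj[keep])

A_red, d_red = np.vstack(reduced_rows), np.concatenate(reduced_data)
print(f"rows: {A.shape[0]} -> {A_red.shape[0]} after SNR thresholding")
```

Because U is orthonormal, the projected noise keeps the same standard deviation, so the per-component SNR test is well defined; clustering first keeps the projected system close to the original sparse structure, which is the point made in the abstract.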
Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas
2003-07-01
Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias. PMID:12803649
Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates.
Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma
2015-09-01
Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended. PMID:25670023
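Checking geocodes against GPS fixes reduces to computing great-circle distances between coordinate pairs; a minimal haversine sketch (the coordinates below are made up, not study data):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude pairs."""
    r = 6371000.0  # mean Earth radius in meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical GPS fix vs automated geocode for one address:
print(f"positional error: {haversine_m(42.3314, -83.0458, 42.3316, -83.0462):.1f} m")
```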
Errors in LiDAR ground elevation and wetland vegetation height estimates
C. Hopkinson; L. E. Chasmer; Gabor Zsigovics; I. F. Creed; Michael Sitar; P. Treitz; V. Maher
An airborne scanning LiDAR (light detection and ranging) survey using a small-footprint discrete pulse return Airborne Laser Terrain Mapper (ALTM) was conducted over the Utikuma Boreal wetland area of northern Alberta in August 2002. These data were analysed to quantify systematic LiDAR errors in: a) ground surface elevation; and b) vegetation canopy surface height. Of the vegetation classes, aquatic vegetation
Faber
2000-10-01
A multivariate calibration model consists of regression coefficient estimates whose significance depends on the associated standard errors. A recently introduced leave-one-out (LOO) method for computing these standard errors is modified to achieve consistency with the jack-knife method. The proposed modification amounts to multiplying the LOO standard errors by the factor (n - 1)/n^(1/2), where n denotes the number of calibration samples. The potential improvement for realistic values of n is illustrated using a practical example. PMID:11028629
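A minimal sketch of the proposed correction, assuming (as an illustration) that the naive LOO standard error is the sample standard deviation of the coefficient estimates across the n leave-one-out calibration models:

```python
import numpy as np

def jackknife_consistent_loo_se(loo_coefs: np.ndarray) -> np.ndarray:
    """Scale naive LOO standard errors by (n - 1) / sqrt(n), per the proposed fix.

    loo_coefs: (n, p) array of regression coefficients, one row per
    leave-one-out calibration model (assumed layout, for illustration)."""
    n = loo_coefs.shape[0]
    naive_se = loo_coefs.std(axis=0, ddof=1)
    return naive_se * (n - 1) / np.sqrt(n)

rng = np.random.default_rng(6)
print(jackknife_consistent_loo_se(rng.normal(size=(20, 3))))  # toy usage
```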
Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)
2002-01-01
This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three-dimensional examples using meshes with from 10^6 to 10^7 cells.
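A one-dimensional analogue of the tau-extrapolation idea, as a hedged sketch: solve a Poisson problem on a fine grid, inject the solution onto a grid of twice the spacing, and evaluate the coarse-grid residual, which is a computable estimate of the local truncation error. The Cartesian solver in the paper does this on embedded-boundary meshes; the sketch keeps only the uniform-grid core of the idea.

```python
import numpy as np

def poisson_matrix(n, h):
    """Standard 3-point Laplacian for -u'' on n interior points, spacing h."""
    return (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

# Fine grid: 2m + 1 interior points; coarse grid: m interior points (h -> 2h).
m, L = 31, 1.0
nf = 2 * m + 1
hf, hc = L / (nf + 1), 2 * L / (nf + 1)
xf = np.linspace(hf, L - hf, nf)
f = np.pi**2 * np.sin(np.pi * xf)            # so that u(x) = sin(pi x)

u_fine = np.linalg.solve(poisson_matrix(nf, hf), f)

# Inject the fine solution onto the coarse grid and evaluate the coarse residual:
# tau = A_2h(I_h^2h u_h) - f_2h, the computable truncation-error estimate.
u_inj = u_fine[1::2]
tau = poisson_matrix(m, hc) @ u_inj - f[1::2]
print(f"max |tau| estimate: {np.abs(tau).max():.2e}")
```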
Error estimation procedure for large dimensionality data with small sample sizes
NASA Astrophysics Data System (ADS)
Williams, Arnold; Wagner, Gregory
2009-05-01
Using multivariate data analysis to estimate the classification error rates and the separability between sets of data samples is a useful tool for understanding the characteristics of data sets. By understanding the classifiability and separability of the data, one can better direct the appropriate resources and effort to achieve the desired performance. The following report describes our procedure for estimating the separability of given data sets. The multivariate tools described in this paper include intrinsic dimensionality estimates, Bayes error estimates, and the Friedman-Rafsky test. These analysis techniques are based on previous work used to evaluate data for synthetic aperture radar (SAR) automatic target recognition (ATR), but the current work is unique in the methods used to analyze large-dimensionality sets with a small number of samples. The results of this report show that our procedure can quantitatively measure the separability of two data sets in both the measure and feature spaces, with the Bayes error estimation procedure and the Friedman-Rafsky test, respectively. Our procedure, which includes the error estimation and the Friedman-Rafsky test, is used to evaluate SAR data but can serve as an effective way to measure the classifiability of many other multidimensional data sets.
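A compact sketch of the Friedman-Rafsky two-sample statistic referenced above: build a minimum spanning tree over the pooled samples and count the edges that join points from different samples; a small count indicates separable sets. A formal test would compare the count against its permutation null distribution, which this sketch omits.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def friedman_rafsky_stat(X, Y):
    """Count MST edges joining the two samples; few cross-edges => separable."""
    Z = np.vstack([X, Y])
    labels = np.r_[np.zeros(len(X)), np.ones(len(Y))]
    mst = minimum_spanning_tree(squareform(pdist(Z))).tocoo()
    return int(sum(labels[i] != labels[j] for i, j in zip(mst.row, mst.col)))

rng = np.random.default_rng(7)
X = rng.normal(0.0, 1, (50, 5))
Y = rng.normal(1.5, 1, (50, 5))  # shifted sample: expect few cross-edges
print("cross-sample MST edges:", friedman_rafsky_stat(X, Y))
```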
Systematic Uncertainties in Stellar Mass Estimation for Distinct Galaxy Populations
Kannappan, Sheila J.; Gawiser, Eric
2007-01-01
We show that different stellar-mass estimation methods yield overall mass scales that disagree by factors up to ~2 for the z=0 galaxy population, and more importantly, relative mass scales that sometimes disagree by factors >~3 between distinct classes of galaxies (spiral/irregular types, classical E/S0s, and E/S0s whose colors reflect recent star formation). This comparison considers stellar mass estimates based on (a) two different calibrations of the correlation between K-band mass-to-light ratio and B-R color (Bell et al., Portinari et al.) and (b) detailed fitting of UBRJHK photometry and optical spectrophotometry using two different population synthesis models (Bruzual-Charlot, Maraston), with the same initial mass function in all cases. We also compare stellar+gas masses with dynamical masses. This analysis offers only weak arguments for preferring a particular stellar-mass estimation method, given the plausibility of real variations in dynamical properties and dark matter content. These results help t...
NASA Astrophysics Data System (ADS)
Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.
2013-10-01
A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations strongly call into question the consistency of transport model errors in current inverse systems. Future inversions should include more accurately prescribed observation covariance matrices in order to limit the impact of transport model errors on estimated methane fluxes.
Alpha's standard error (ASE): an accurate and precise confidence interval estimate.
Duhachek, Adam; Iacobucci, Dawn
2004-10-01
This research presents the inferential statistics for Cronbach's coefficient alpha on the basis of the standard statistical assumption of multivariate normality. The estimation of alpha's standard error (ASE) and confidence intervals are described, and the authors analytically and empirically investigate the effects of the components of these equations. The authors then demonstrate the superiority of this estimate compared with previous derivations of ASE in a separate Monte Carlo simulation. The authors also present a sampling error and test statistic for a test of independent sample alphas. They conclude with a recommendation that all alpha coefficients be reported in conjunction with standard error or confidence interval estimates and offer SAS and SPSS programming codes for easy implementation. PMID:15506861
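A sketch of the computation, assuming the asymptotic variance of alpha under multivariate normality in the van Zyl-Neudecker-Nel form that this line of work builds on; the formula should be verified against the paper before being relied upon.

```python
import numpy as np

def cronbach_alpha_ase(items: np.ndarray):
    """Coefficient alpha and an asymptotic standard error under multivariate
    normality (assumed van Zyl-Neudecker-Nel form; verify against the paper)."""
    n, p = items.shape
    V = np.cov(items, rowvar=False)
    j = np.ones(p)
    jVj = j @ V @ j
    alpha = (p / (p - 1)) * (1 - np.trace(V) / jVj)
    q = (2 * p**2 / (p - 1) ** 2) * (
        jVj * (np.trace(V @ V) + np.trace(V) ** 2) - 2 * np.trace(V) * (j @ V @ V @ j)
    ) / jVj**3
    return alpha, np.sqrt(q / n)

rng = np.random.default_rng(8)
scores = rng.normal(size=(200, 1)) + rng.normal(0, 0.8, (200, 6))  # 6 correlated items
alpha, ase = cronbach_alpha_ase(scores)
print(f"alpha = {alpha:.3f}, 95% CI = ({alpha - 1.96 * ase:.3f}, {alpha + 1.96 * ase:.3f})")
```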
Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'
NASA Technical Reports Server (NTRS)
Errico, Ronald M.; Prive, Nikki C.; Gu, Wei
2014-01-01
The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC-method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting the availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC-method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
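A minimal sketch of the NMC method itself: approximate the background-error covariance from differences between pairs of forecasts (for example, 48 h and 24 h) verifying at the same times, with an empirical rescaling factor of the kind the OSSE comparison shows must vary with pressure and latitude. The arrays below are random placeholders, not model output.

```python
import numpy as np

def nmc_background_covariance(f_long, f_short, scale=1.0):
    """Approximate B from pairs of forecasts valid at the same time.

    f_long, f_short: (n_times, n_state) arrays of, e.g., 48 h and 24 h
    forecasts verifying at identical times; 'scale' is the empirical tuning
    factor that, per the study, must vary with latitude and pressure."""
    diffs = f_long - f_short
    diffs = diffs - diffs.mean(axis=0)
    return scale * (diffs.T @ diffs) / (diffs.shape[0] - 1)

rng = np.random.default_rng(9)  # toy usage with random "forecast" fields
B = nmc_background_covariance(rng.normal(size=(100, 8)), rng.normal(size=(100, 8)))
print("estimated B diagonal:", np.round(np.diag(B), 2))
```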
NASA Astrophysics Data System (ADS)
Zhan, Yuxuan; Eggebrecht, Adam; Dehghani, Hamid; Culver, Joseph
2011-02-01
In MRI-guided diffuse optical tomography of human brain function, a three-dimensional anatomical head model consisting of up to five segmented tissue types can be specified. Setting aside misclassification between different tissues, uncertainty in the optical properties of each tissue type becomes the dominant cause of systematic error in image reconstruction. In this study we present a quantitative evaluation of the dependence of image resolution on such uncertainty. Our results show that, given a head model which provides a realistic description of its tissue optical property distribution, high-density diffuse optical tomography with cortically constrained image reconstruction is capable of detecting focal activation up to 21.81 mm below the human scalp at an imaging quality better than or equal to 1.0 cm in localization error and 1.0 cm3 in FVHM, with a tolerance of uncertainty in tissue optical properties between +15% and -20%.
Estimated global incidence of Japanese encephalitis: a systematic review
Campbell, Grant L; Hills, Susan L; Fischer, Marc; Jacobson, Julie A; Hoke, Charles H; Hombach, Joachim M; Marfin, Anthony A; Solomon, Tom; Tsai, Theodore F; Tsu, Vivien D
2011-01-01
Abstract Objective To update the estimated global incidence of Japanese encephalitis (JE) using recent data for the purpose of guiding prevention and control efforts. Methods Thirty-two areas endemic for JE in 24 Asian and Western Pacific countries were sorted into 10 incidence groups on the basis of published data and expert opinion. Population-based surveillance studies using laboratory-confirmed cases were sought for each incidence group by a computerized search of the scientific literature. When no eligible studies existed for a particular incidence group, incidence data were extrapolated from related groups. Findings A total of 12 eligible studies representing 7 of 10 incidence groups in 24 JE-endemic countries were identified. Approximately 67,900 JE cases typically occur annually (overall incidence: 1.8 per 100,000), of which only about 10% are reported to the World Health Organization. Approximately 33,900 (50%) of these cases occur in China (excluding Taiwan) and approximately 51,000 (75%) occur in children aged 0–14 years (incidence: 5.4 per 100,000). Approximately 55,000 (81%) cases occur in areas with well established or developing JE vaccination programmes, while approximately 12,900 (19%) occur in areas with minimal or no JE vaccination programmes. Conclusion Recent data allowed us to refine the estimate of the global incidence of JE, which remains substantial despite improvements in vaccination coverage. More and better incidence studies in selected countries, particularly China and India, are needed to further refine these estimates. PMID:22084515
Analysis of systematic errors in lateral shearing interferometry for EUV optical testing
Miyakawa, Ryan; Naulleau, Patrick; Goldberg, Kenneth A.
2009-02-24
Lateral shearing interferometry (LSI) provides a simple means for characterizing the aberrations in optical systems at EUV wavelengths. In LSI, the test wavefront is incident on a low-frequency grating, which causes the resulting diffracted orders to interfere on the CCD. Due to its simple experimental setup and high photon efficiency, LSI is an attractive alternative to point diffraction interferometry and other methods that require spatially filtering the wavefront through small pinholes, a practice that notoriously suffers from low-contrast fringes and improper alignment. In order to demonstrate that LSI can be accurate and robust enough to meet industry standards, analytic models are presented to study the effects of unwanted grating and detector tilt on the system aberrations, and a method for identifying and correcting these alignment errors is proposed. The models are subsequently verified by numerical simulation. Finally, an analysis is performed of how errors in the identification and correction of grating and detector misalignment propagate to errors in fringe analysis.
Estimation of bias errors in measured airplane responses using maximum likelihood method
NASA Technical Reports Server (NTRS)
Klein, Vladislav; Morgan, Dan R.
1987-01-01
A maximum likelihood method is used for estimation of unknown bias errors in measured airplane responses. The mathematical model of the airplane is represented by six-degree-of-freedom kinematic equations, in which the input variables are replaced by their measured values, assumed to be free of random errors. The resulting algorithm is verified with a simulation and with flight test data. The maximum likelihood estimates from in-flight measured data are compared with those obtained using a nonlinear fixed-interval smoother and an extended Kalman filter.
Christodoulou, Christos George (University of New Mexico, Albuquerque, NM); Abdallah, Chaouki T. (University of New Mexico, Albuquerque, NM); Rohwer, Judd Andrew
2003-02-01
The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.
ZZ-type a posteriori error estimators for adaptive boundary element methods on a curve
Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk
2014-01-01
In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
Li, Ying; Wang, JiWen; Zhong, XiaoJing; Tian, Zhen; Wu, Peipei; Zhao, Wenbo; Jin, Chenjin
2014-01-01
Objective To summarize relevant evidence investigating the associations between refractive error and age-related macular degeneration (AMD). Design Systematic review and meta-analysis. Methods We searched Medline, Web of Science, and Cochrane databases as well as the reference lists of retrieved articles to identify studies that met the inclusion criteria. Extracted data were combined using a random-effects meta-analysis. Studies that were pertinent to our topic but did not meet the criteria for quantitative analysis were reported in a systematic review instead. Main outcome measures Pooled odds ratios (ORs) and 95% confidence intervals (CIs) for the associations between refractive error (hyperopia, myopia, per-diopter increase in spherical equivalent [SE] toward hyperopia, per-millimeter increase in axial length [AL]) and AMD (early and late, prevalent and incident). Results Fourteen studies comprising over 5800 patients were eligible. Significant associations were found between hyperopia, myopia, per-diopter increase in SE, per-millimeter increase in AL, and prevalent early AMD. The pooled ORs and 95% CIs were 1.13 (1.06–1.20), 0.75 (0.56–0.94), 1.10 (1.07–1.14), and 0.79 (0.73–0.85), respectively. The per-diopter increase in SE was also significantly associated with early AMD incidence (OR, 1.06; 95% CI, 1.02–1.10). However, no significant association was found between hyperopia or myopia and early AMD incidence. Furthermore, neither prevalent nor incident late AMD was associated with refractive error. Considerable heterogeneity was found among studies investigating the association between myopia and prevalent early AMD (P = 0.001, I2 = 72.2%). Geographic location might play a role; the heterogeneity became non-significant after stratifying these studies into Asian and non-Asian subgroups. Conclusion Refractive error is associated with early AMD but not with late AMD. More large-scale longitudinal studies are needed to further investigate such associations. PMID:24603619
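The random-effects pooling step can be illustrated with the standard DerSimonian-Laird estimator (a common choice, though the paper does not specify its exact implementation; the per-study odds ratios below are hypothetical, not values from the review):

```python
import numpy as np

def dersimonian_laird(or_vals, ci_lo, ci_hi):
    """Pool odds ratios with a DerSimonian-Laird random-effects model;
    ci_lo/ci_hi are the per-study 95% confidence limits."""
    y = np.log(or_vals)
    se = (np.log(ci_hi) - np.log(ci_lo)) / (2 * 1.96)  # back out SEs from the CIs
    w = 1.0 / se**2                                     # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed)**2)                    # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    wr = 1.0 / (se**2 + tau2)                           # random-effects weights
    y_re = np.sum(wr * y) / np.sum(wr)
    se_re = 1.0 / np.sqrt(np.sum(wr))
    return np.exp([y_re, y_re - 1.96 * se_re, y_re + 1.96 * se_re])

# Hypothetical per-study ORs with 95% CIs:
print(dersimonian_laird([1.10, 1.25, 0.98, 1.18],
                        [0.95, 1.05, 0.80, 1.02],
                        [1.27, 1.49, 1.20, 1.36]))  # pooled OR, lower, upper
```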
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette
2009-01-01
Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities in an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kt at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.
Assumption-free estimation of the genetic contribution to refractive error across childhood
St Pourcain, Beate; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Williams, Cathy
2015-01-01
Purpose Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75–90%, families 15–70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Methods Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) applied to high-density genome-wide SNP genotype information (minimum N at each age=3,404). Results The variance in refractive error explained by the SNPs (“SNP heritability”) was stable over childhood: Across age 7–15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001) demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8–9 years old. Conclusions Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects. This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population. PMID:26019481
Calibration and systematic error analysis for the COBE DMR 4-year sky maps
Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.
1996-01-04
The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to a mean sensitivity of 26 mu K per 7 degree field of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument's susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 mu K in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.
Rozo, Eduardo; Wu, Hao-Yi; Schmidt, Fabian
2011-11-04
When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.
Upper bounds on position error of a single location estimate in wireless sensor networks
NASA Astrophysics Data System (ADS)
Gholami, Mohammad Reza; Ström, Erik G.; Wymeersch, Henk; Gezici, Sinan
2014-12-01
This paper studies upper bounds on the position error for a single estimate of an unknown target node position based on distance estimates in wireless sensor networks. We investigate a number of approaches to confine the target node position to bounded sets in different scenarios. First, if at least one distance estimate error is positive, we derive a simple, but potentially loose, upper bound that is always valid. In addition, assuming that the probability density of the measurement noise is nonzero for positive values and that a sufficiently large number of distance estimates are available, we propose an upper bound that is valid with high probability. Second, if a reasonable lower bound on negative measurement errors is known a priori, we manipulate the distance estimates to obtain a new set with positive measurement errors. In general, we formulate the bounds as nonconvex optimization problems. To solve them, we employ a relaxation technique and obtain semidefinite programs. We also propose a simple approach to find the bounds in closed form. Simulation results show reasonable tightness for the different bounds in various situations.
Systematic Errors in Stereo PIV When Imaging through a Glass Window
NASA Technical Reports Server (NTRS)
Green, Richard; McAlister, Kenneth W.
2004-01-01
This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window, where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window; otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent on a lengthier volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.
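The refractive displacement at issue is ordinary slab refraction. The following sketch (not from the report) computes the lateral ray shift through a plane-parallel glass window from Snell's law; the window thickness, incidence angle, and refractive index are hypothetical inputs:

```python
import numpy as np

def lateral_shift(t_mm: float, theta_deg: float, n: float = 1.5) -> float:
    """Lateral displacement (mm) of a ray crossing a plane-parallel window of
    thickness t_mm at incidence angle theta_deg (air-glass-air, via Snell's law)."""
    th = np.radians(theta_deg)
    return t_mm * np.sin(th) * (1.0 - np.cos(th) / np.sqrt(n**2 - np.sin(th)**2))

print(lateral_shift(25.0, 30.0))  # a 25 mm window at 30 deg incidence shifts the ray ~4.8 mm
```

The shift grows with both thickness and incidence angle, which is why the error is worst for strongly tilted stereo cameras behind thick test-section windows.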
NASA Astrophysics Data System (ADS)
Evgenev, V. S.
A formula for calculating the constant component of the dynamic error of a pendulum accelerometer mounted on a translationally vibrating base is obtained which allows for the finite stiffness of the pendulum suspension supports and internal friction in the material. It is shown that even with a uniformly stiff suspension design, the elastic compliance of the suspension increases the systematic error of the accelerometer.
Cheng, Ying; Liu, Cheng; Behrens, John
2015-09-01
While estimation bias is a primary concern in psychological and educational measurement, the standard error is of equal importance in linking key aspects of the assessment structure, especially when the assessment goal concerns the classification of individuals into categories (e.g., mastery/non-mastery). In this paper, we show analytically how the standard error of ability estimates affects expected classification accuracy and consistency when the decision is binary. When the standard error decreases, the conditional classification accuracy and consistency increase. Given an examinee population and a cut score, a smaller standard error over the entire latent trait continuum guarantees higher overall expected classification accuracy and consistency. We were also able to show the interrelationship between the standard error, the expected classification consistency, and reliability. Utilizing the relationship between standard error and expected classification accuracy and consistency, we derive the upper bounds of the overall expected classification accuracy and consistency of a fixed-length computerized adaptive test. The lower bound of the expected classification accuracy and consistency is also derived under a number of stopping rules for variable-length computerized adaptive testing. Implications of these analytical results for operational tests are discussed. PMID:25228494
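The core relationship is easy to reproduce: if the ability estimate is approximately normal around the true ability with standard deviation equal to its standard error, conditional accuracy and consistency for a binary cut follow directly. A minimal sketch (my own rendering of that standard result, not the paper's code):

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def conditional_accuracy(theta: float, cut: float, se: float) -> float:
    """P(the observed classification matches the true side of the cut score),
    assuming the estimate is normal around theta with standard deviation se."""
    p_above = 1.0 - phi((cut - theta) / se)
    return p_above if theta >= cut else 1.0 - p_above

def conditional_consistency(theta: float, cut: float, se: float) -> float:
    """P(two independent administrations give the same classification)."""
    p = 1.0 - phi((cut - theta) / se)
    return p * p + (1.0 - p) * (1.0 - p)

# Smaller SE -> higher conditional accuracy and consistency, as shown analytically:
for se in (0.5, 0.3, 0.1):
    print(se, round(conditional_accuracy(0.4, 0.0, se), 3),
          round(conditional_consistency(0.4, 0.0, se), 3))
```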
NASA Astrophysics Data System (ADS)
Dressel, Justin; Nori, Franco
2014-02-01
We revisit the definitions of error and disturbance recently used in error-disturbance inequalities derived by Ozawa and others by expressing them in the reduced system space. The interpretation of the definitions as mean-squared deviations relies on an implicit assumption that is generally incompatible with the Bell-Kochen-Specker-Spekkens contextuality theorems, and which results in averaging the deviations over a non-positive-semidefinite joint quasiprobability distribution. For unbiased measurements, the error admits a concrete interpretation as the dispersion in the estimation of the mean induced by the measurement ambiguity. We demonstrate how to directly measure not only this dispersion but also every observable moment with the same experimental data, and thus demonstrate that perfect distributional estimations can have nonzero error according to this measure. We conclude that the inequalities using these definitions do not capture the spirit of Heisenberg's eponymous inequality, but do indicate a qualitatively different relationship between dispersion and disturbance that is appropriate for ensembles being probed by all outcomes of an apparatus. To reconnect with the discussion of Heisenberg, we suggest alternative definitions of error and disturbance that are intrinsic to a single apparatus outcome. These definitions naturally involve the retrodictive and interdictive states for that outcome, and produce complementarity and error-disturbance inequalities that have the same form as the traditional Heisenberg relation.
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Feiveson, A. H. (principal investigator)
1979-01-01
Aggregation formulas are given for production estimation of a crop type for a zone, a region, and a country, and methods for estimating yield prediction errors for the three areas are described. A procedure is included for obtaining a combined yield prediction and its mean-squared error estimate for a mixed wheat pseudozone.
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an ongoing problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
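One way to see why "temporally correlated random error" matters for budget uncertainty: the standard error of a multi-year mean shrinks like 1/sqrt(n) only when errors are independent. Below is a sketch of the exact finite-sample variance of a mean under AR(1) correlation, an illustrative stand-in for the paper's error framework; the numbers are hypothetical:

```python
import numpy as np

def sigma_of_mean_ar1(sigma: float, n: int, rho: float) -> float:
    """Standard deviation of an n-year mean of errors with AR(1) autocorrelation
    rho (exact finite-n formula); rho = 0 recovers sigma / sqrt(n)."""
    k = np.arange(1, n)
    factor = 1.0 + 2.0 * np.sum((1.0 - k / n) * rho**k)
    return sigma * np.sqrt(factor / n)

print(sigma_of_mean_ar1(0.5, 10, 0.0))   # ~0.16 Pg C/yr: independent errors average out
print(sigma_of_mean_ar1(0.5, 10, 0.95))  # ~0.46 Pg C/yr: correlated errors barely shrink
```

Persistent national reporting practices behave like the high-rho case, which is why decadal averaging does little to reduce fossil fuel emission uncertainty.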
Empirical likelihood estimators for the error distribution in nonparametric regression models
S. Kiwitt; E.-R. Nagel; N. Neumeyer
2008-01-01
The aim of this paper is to show that existing estimators for the error distribution in nonparametric regression models can be improved when additional information about the distribution is included by the empirical likelihood method. The weak convergence of the resulting new estimator to a Gaussian process is shown, and the performance is investigated by comparison of asymptotic mean squared errors.
Effects of Destriping Errors on Estimates of the CMB Power Spectrum
G. Efstathiou
2004-07-28
Destriping methods for constructing maps of the Cosmic Microwave Background (CMB) anisotropies have been investigated extensively in the literature. However, their error properties have been studied in less detail. Here we present an analysis of the effects of destriping errors on CMB power spectrum estimates for Planck-like scanning strategies. Analytic formulae are derived for certain simple scanning geometries that can be rescaled to account for different detector noise. Assuming Planck-like low-frequency noise, the noise power spectrum is accurately white at high multipoles (l > 50). Destriping errors, though dominant at lower multipoles, are small in comparison to the cosmic variance. These results show that simple destriping map-making methods should be perfectly adequate for the analysis of Planck data and support the arguments given in an earlier paper in favour of applying a fast hybrid power spectrum estimator to CMB data with realistic '1/f' noise.
A Refined Algorithm On The Estimation Of Residual Motion Errors In Airborne SAR Images
NASA Astrophysics Data System (ADS)
Zhong, Xuelian; Xiang, Maosheng; Yue, Huanyin; Guo, Huadong
2010-10-01
Due to the lack of accuracy in the navigation system, residual motion errors (RMEs) frequently appear in airborne SAR images. For very high resolution SAR imaging and repeat-pass SAR interferometry, the residual motion errors must be estimated and compensated. We previously proposed an algorithm to estimate the residual motion errors of an individual SAR image. It exploits point-like targets distributed along the azimuth direction, and not only corrects the phase but also improves the azimuth focusing. However, the required point targets are selected by hand, which is time- and labor-consuming, and the algorithm is sensitive to noise. In this paper, a refined algorithm is proposed that addresses these two shortcomings. With real X-band airborne SAR data, the feasibility and accuracy of the refined algorithm are demonstrated.
NASA Astrophysics Data System (ADS)
Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander
2015-06-01
Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors complicate the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels is discussed through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise). Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors.
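The baseline that FIMA is compared against is straightforward to implement. The sketch below is my own minimal version of the TAMSD estimator (not the authors' code); for brevity it uses ordinary Brownian motion (true alpha = 1) rather than general fractional Brownian motion, and it reproduces the noise-induced bias the paper targets:

```python
import numpy as np

def tamsd(x: np.ndarray, lag: int) -> float:
    """Time-averaged mean square displacement of a 1-D trajectory at one lag."""
    return float(np.mean((x[lag:] - x[:-lag])**2))

def anomalous_exponent(x: np.ndarray, max_lag: int = 20) -> float:
    """Fit TAMSD ~ lag**alpha on a log-log scale; biased downward when additive
    measurement noise dominates the short lags (the regime FIMA is designed for)."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([tamsd(x, int(L)) for L in lags])
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    return float(alpha)

# Toy model in the spirit of the abstract: diffusion plus Gaussian white noise.
rng = np.random.default_rng(1)
traj = np.cumsum(rng.normal(size=5000)) + rng.normal(scale=2.0, size=5000)
print(anomalous_exponent(traj))  # well below the true alpha = 1 because of the noise
```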
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Estimating Error in an Analysis of Forest Fragmentation Change Using North American Landscape Characterization (NALC) Data
Brown, Daniel G.
Change detection analysis can be used to identify locations that exhibit change in forest fragmentation metrics (e.g., percent forest cover and forest edge).
State-space framework for estimating measurement error from double-tagging telemetry experiments
Costa, Daniel P.
Double-tagging experiments on 111 animals representing seven marine species, including 4 sharks, 2 birds, and 1 pinniped, were used within a state-space framework to estimate measurement error in telemetry data.
Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method
ERIC Educational Resources Information Center
Liu, Yuming; Schulz, E. Matthew; Yu, Lei
2008-01-01
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
Mapping the Origins of Time: Scalar Errors in Infant Time Estimation
ERIC Educational Resources Information Center
Addyman, Caspar; Rocha, Sinead; Mareschal, Denis
2014-01-01
Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…
On the accuracy of estimating pest insect abundance from data with random error
Embleton, Nina; Petrovskaya, Natalia B.
Accurate evaluation of pest insect abundance is a key component of any integrated pest management programme. Keywords: pest insect monitoring; noise; numerical integration; trapezoidal rule; Simpson's rule.
Error estimations for source inversions in seismology and geodesy
Rivera, Luis; Duputel, Zacharie
Source inversion is a widely used tool. It comes in very diverse flavors depending on the nature of the data (e.g., seismological or geodetic).
Error estimate of the series solution to a class of nonlinear fractional differential equations
NASA Astrophysics Data System (ADS)
El-Kalla, I. L.
2011-03-01
In this paper, two different formulas for the accelerated Adomian polynomials are considered. The discussion demonstrates that both formulas are identical and provide the fastest rate of convergence for the nonlinear term. One of the two formulas, suggested by the author, is used directly to estimate the maximum absolute truncation error of the Adomian series solution.
Error estimation for reconstruction of neuronal spike firing from fast calcium imaging
Liu, Xiuli; Lv, Xiaohua; Quan, Tingwei; Zeng, Shaoqun
2015-01-01
Calcium imaging is becoming an increasingly popular technology for indirectly measuring activity patterns in local neuronal networks. Calcium transients reflect neuronal spike patterns, allowing spike trains to be reconstructed from calcium traces. The key to judging the authenticity of a reconstructed spike train is error estimation. However, due to the lack of an appropriate mathematical model to adequately describe the spike-calcium relationship, little attention has been paid to quantifying the error ranges of reconstructed spike results. By turning attention to the data characteristics close to the reconstruction rather than to a complex mathematical model, we provide an error estimation method for neuronal spiking reconstructed from calcium imaging. Real false-negative and false-positive rates of 10 experimental Ca2+ traces were within the estimated error ranges, confirming that this evaluation method is effective. Estimation performance for the reconstruction of spikes from calcium transients within a neuronal population demonstrated a reasonable evaluation of the reconstructed spikes without access to the real electrical signals. These results suggest that our method may be valuable for quantifying research based on reconstructed neuronal activity, such as affirming communication between different neurons. PMID:25780733
ERROR ESTIMATES FOR FINITE ELEMENT METHODS FOR A WIDE-ANGLE PARABOLIC EQUATION
Akrivis, Georgios
We consider an initial- and boundary-value problem for the third-order wide-angle parabolic approximation of underwater acoustics. This third-order p.d.e. occurs in problems of wave propagation as a wide-angle parabolic approximation.
Least Squares Support Vector Machines for Direction of Arrival Estimation with Error Control
This paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival estimation with error control. Support vector machines (SVM) have successfully been applied to wireless communication problems.
NASA Technical Reports Server (NTRS)
Gunderson, R. W.; George, J. H.
1974-01-01
Two approaches are investigated for obtaining estimates on the error between approximate and exact solutions of dynamic systems. The first method is primarily useful if the system is nonlinear and of low dimension. The second requires construction of a system of v-functions but is useful for higher dimensional systems, either linear or nonlinear.
Some Improved Error Estimates for the Modified Method of Characteristics
Dawson, C. N.; Russell, Thomas F.
The modified method of characteristics (MMOC) was first formulated for approximations to advection-dominated problems. Basically, in the modified method of characteristics, one…
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2008-01-01
This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
ERROR ESTIMATES FOR DIFFERENCE APPROXIMATIONS OF DEGENERATE CONVECTION-DIFFUSION EQUATIONS
We derive L1 error estimates for a class of simple difference schemes applied to nonlinear and strongly degenerate convection-diffusion equations in one spatial dimension…
Paris-Sud XI, Université de
An Arbitrary Lagrangian-Eulerian (ALE) formulation is developed to simulate the different stages of the Friction Stir Welding (FSW) process. In the FSW process, the plunging phase is followed… Keywords: ALE, simulation, Friction Stir Welding, error estimation.
Error estimation of bathymetric grid models derived from historic and contemporary datasets
New Hampshire, University of
Modern systems rapidly collect dense bathymetric datasets, while historic sources range from sextant surveys, later replaced by radio navigation and then transit, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign…
The Impact of Channel Estimation Errors on Space Time Block Codes
Baker, Dirk; Valenti, Matthew C.
In this paper, we demonstrate the performance of space-time block codes in the presence of channel estimation errors. We characterize the performance of a space-time block code using two transmit antennas and one receive antenna…
Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions
NASA Astrophysics Data System (ADS)
Toth, E.
2015-06-01
In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to flow peaks with a given return period (often the 2-year flood, which may be associated with bankfull discharge). At locations where historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the functional form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction and underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missed alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function that penalises overpredictions more heavily. The estimates from models (feedforward neural networks) with increasing degrees of asymmetry are compared with those of a traditional, symmetrically trained network in a rigorous cross-validation experiment on a database of catchments covering Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors compared to the use of traditional square errors. Such a reduction comes, of course, at the expense of increased underestimation errors, but the overall accuracy is still acceptable, and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
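To make the idea concrete, here is a minimal sketch of fitting under an asymmetric squared error. The paper trains feedforward neural networks, so the linear model, the specific overprediction weight, and the synthetic data below are simplifications of mine:

```python
import numpy as np

def asym_grad(y_true, y_pred, w_over=4.0):
    """Gradient of an asymmetric squared error: overpredictions (y_pred > y_true)
    are penalized w_over times more than underpredictions."""
    r = y_pred - y_true
    return np.where(r > 0, 2 * w_over * r, 2 * r)

def fit_linear_asymmetric(X, y, w_over=4.0, lr=0.05, epochs=5000):
    """Minimal gradient-descent fit of y ~ X @ beta under the asymmetric loss."""
    beta = np.zeros(X.shape[1])
    for _ in range(epochs):
        g = asym_grad(y, X @ beta, w_over)
        beta -= lr * (X.T @ g) / len(y)
    return beta

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.uniform(0, 1, 200)])
y = 2.0 + 3.0 * X[:, 1] + rng.normal(0, 0.5, 200)
print(fit_linear_asymmetric(X, y))  # intercept sits below 2.0: the fit prefers to underpredict
```

Raising the overprediction weight shifts the fitted threshold relation downward, trading a few more false alarms for fewer missed alarms.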
Mathur, Anuj
1994-01-01
In this work we study the pollution-error in the h-version of the finite element method and its effect on the local quality of a-posteriori error estimators. We show that the pollution-effect in an interior subdomain depends on the relationship...
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.; Karel, S.
1975-01-01
An algorithm for solving the nonlinear stationary Navier-Stokes problem is developed. Explicit error estimates are given. This mathematical technique is potentially adaptable to the separation problem.
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high, and thus their contribution to uncertainty in global C uptake is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net global C uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
Climate change projection with reduced model systematic error over tropical Pacific
NASA Astrophysics Data System (ADS)
Keenlyside, Noel; Shen, Mao-Lin; Selten, Frank; Wiegerinck, Wim; Duane, Gregory
2015-04-01
The tropical Pacific is a major driver of the global climate system. Climate models, however, have difficulties in realistically simulating the region: typically, they have a too pronounced equatorial cold tongue, an erroneous double inter-tropical convergence zone (ITCZ), and poorly represent ocean-atmosphere interaction. These errors introduce large uncertainties into climate change projections. Here we assess the impact of these errors by performing climate change projections with an interactive model ensemble (SUMO) that has a reduced tropical Pacific error. SUMO consists of one ocean model coupled to two atmospheric models, which differ only in their representation of atmospheric convection. Through optimal coupling weights, synchronization of the atmospheric models over the tropical Pacific is enhanced and the simulation of climate there is dramatically improved: the model realistically simulates the equatorial cold tongue, a single ITCZ, and the positive Bjerknes feedback. SUMO also simulates interannual variability better than the two individual coupled ocean-atmosphere models (i.e., those based on the two different atmospheric models). Global warming predicted by SUMO lies between that of the two individual coupled models. However, the projections for the tropical Pacific differ markedly. SUMO simulates a weakening of the zonal SST gradient, while the two individual coupled models simulate a strengthening. The weaker zonal SST gradient leads to an approximately 20% weakening of the Walker Circulation, and there is an increase of precipitation over the entire tropics. In contrast, the two individual coupled models simulate an eastward shift of the Walker Circulation and an enhancement of precipitation over the western Pacific. The differences are related to the representation of ocean-atmosphere interaction. This underscores the importance of improving the simulation of the tropical Pacific to reduce uncertainties in climate change projections.
ERIC Educational Resources Information Center
Penfield, Randall D.
2007-01-01
The standard error of the maximum likelihood ability estimator is commonly estimated by evaluating the test information function at an examinee's current maximum likelihood estimate (a point estimate) of ability. Because the test information function evaluated at the point estimate may differ from the test information function evaluated at an…
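For context, the practice discussed here is easy to state in code: under a 2PL IRT model the test information is I(theta) = sum_i a_i^2 P_i(theta)(1 - P_i(theta)), and the reported standard error is 1/sqrt(I) evaluated at the point estimate. A sketch with hypothetical item parameters (not from the article):

```python
import numpy as np

def se_theta(theta: float, a: np.ndarray, b: np.ndarray) -> float:
    """SE of the ML ability estimate under a 2PL model: 1/sqrt(I(theta)),
    where I(theta) = sum_i a_i^2 * P_i * (1 - P_i)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return 1.0 / np.sqrt(np.sum(a**2 * p * (1 - p)))

a = np.array([1.2, 0.8, 1.5, 1.0])    # hypothetical item discriminations
b = np.array([-0.5, 0.0, 0.4, 1.0])   # hypothetical item difficulties
print(se_theta(0.3, a, b))            # evaluated at the point estimate, as the article discusses
```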
The standard error of a weighted mean concentration—II. Estimating confidence intervals
NASA Astrophysics Data System (ADS)
Gatz, Donald F.; Smith, Luther
One motivation for estimating the standard error, SEMw, of a weighted mean concentration, Mw, of an ion in precipitation is to use it to compute a confidence interval for Mw. Typically this is done by multiplying the standard error by a factor that depends on the degree of confidence one wishes to express, on the assumption that the weighted mean has a normal distribution. This paper compares confidence intervals of Mw concentrations of ions in precipitation, as computed using the assumption of a normal distribution, with those estimated from distributions produced by bootstrapping. The hypothesis that Mw was normally distributed was rejected about half the time (at the 5% significance level) in tests involving nine major ions measured at ten diverse sites in the National Atmospheric Deposition Program/National Trends Network (NADP/NTN). Most of these rejections occurred at sites with fewer than 100 samples, in agreement with previous results. Nevertheless, the hypothesis was often rejected at sites with more than 100 samples as well. The maximum error (relative to Mw) in the 95% confidence limits made by assuming a normal distribution of Mw at the ten sites examined was about 27%. Most such errors were less than 10%, and errors were smaller at sampling sites with > 100 samples than at those with < 100 samples.
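A minimal sketch of the bootstrap alternative described above (my own rendering, with hypothetical event data, not the NADP/NTN records):

```python
import numpy as np

def weighted_mean(conc, precip):
    """Precipitation-weighted mean concentration Mw."""
    return np.sum(precip * conc) / np.sum(precip)

def bootstrap_ci_mw(conc, precip, n_boot=5000, level=0.95, seed=0):
    """Percentile-bootstrap CI for Mw, avoiding the normality assumption."""
    rng = np.random.default_rng(seed)
    n = len(conc)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)        # resample precipitation events
        stats[i] = weighted_mean(conc[idx], precip[idx])
    return tuple(np.percentile(stats, [100 * (1 - level) / 2, 100 * (1 + level) / 2]))

conc = np.array([1.8, 0.9, 2.4, 1.1, 3.0, 0.7])        # mg/L per event (hypothetical)
precip = np.array([12.0, 30.0, 5.0, 22.0, 3.0, 40.0])  # mm per event (hypothetical)
print(weighted_mean(conc, precip), bootstrap_ci_mw(conc, precip))
```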
Efficient error estimation criteria to capture vortical structures in octree meshes
NASA Astrophysics Data System (ADS)
Ozhan, Cansu; Fuster, Daniel; da Costa, Patrick
2013-11-01
This paper aims at finding optimal adaptive mesh refinement strategies to capture vortical structures. Due to their efficiency, we focus on a-posteriori mesh refinement methods. In particular, we derive a Hessian error estimator for the h-refinement scheme and a residual-based error estimator for finite volume methods and octree grids. The methods are validated on a classical test for the solution of the advection-diffusion-reaction equation and then applied to three different test cases where vortical structures are present. In particular, we test the temporal evolution of the Lamb-Oseen vortex, the linear growth rate of small perturbations in a viscous shear layer, and the energy evolution in the isotropic turbulence case. The performance of the proposed estimators and the choice of the optimal quantity of interest are discussed for the different test cases.
NASA Astrophysics Data System (ADS)
Condie, Robert
1986-06-01
The series of annual peak flows obtained from a recent continuous flow record, together with any historic floods or information, are treated as a censored sample from a three-parameter lognormal population. The logarithmic likelihood function is presented in terms of the fully specified floods, the historic information with the censoring threshold, and the parameters to be determined. Maximum likelihood estimators are given as a set of three transcendental equations which, when solved, give maximum likelihood estimates of the parameters. The T-year flood is expressible as a function of these parameters and the standard normal variate t. The parameters are subject to sampling variances and covariances, whereas t is not. From the logarithmic likelihood function the inverse variance-covariance matrix is derived, which upon inversion gives the sampling variances and covariances of the parameters. Entering these in the general equation for the variance of estimate of a function of three variables leads to the asymptotic standard error of estimate of the T-year flood. The method is illustrated by its application to a river with historic data, where 10 years of only overbank flows were available in a historic period of 35 years prior to the collection of a systematic record. The value of the historic information is assessed in terms of the reduction of the standard error of estimate; the 10 years of overbank flows together with the historic information are roughly equivalent to a 26-year extension of the systematic record.
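The final propagation step described above (carrying parameter variances and covariances through to the T-year flood) is the delta method. A sketch under assumed parameter values and a hypothetical variance-covariance matrix, writing the quantile as Q_T = a + exp(mu + t*sigma) for a three-parameter lognormal with lower bound a:

```python
import numpy as np

def t_year_flood_and_se(a, mu, sigma, t, cov):
    """Q_T = a + exp(mu + t*sigma) for a three-parameter lognormal, plus its
    asymptotic standard error by the delta method. cov is the 3x3 sampling
    variance-covariance matrix of (a, mu, sigma); t is the standard normal variate."""
    e = np.exp(mu + t * sigma)
    grad = np.array([1.0, e, t * e])   # dQ/da, dQ/dmu, dQ/dsigma
    return a + e, float(np.sqrt(grad @ cov @ grad))

cov = np.diag([25.0, 0.004, 0.002])    # hypothetical (co)variances from the ML fit
print(t_year_flood_and_se(a=50.0, mu=5.0, sigma=0.6, t=2.054, cov=cov))  # T = 50 years
```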
Accurate and fast methods to estimate the population mutation rate from error prone sequences
2009-01-01
Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design. PMID:19671163
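The singleton-excluding Watterson estimator is a one-liner once the expected frequency spectrum is written down: under the infinite sites model, the expected number of sites whose minor allele appears in exactly one of n sequences is θ·n/(n-1), so dropping singletons changes the usual denominator a_n = Σ 1/i. A sketch of the folded-spectrum version (my reconstruction of the idea described above, not the paper's code):

```python
import numpy as np

def theta_no_singletons(n_seq: int, s_total: int, s_singletons: int) -> float:
    """Watterson-type estimator of theta that ignores singletons. With a folded
    spectrum, E[singletons] = theta * n/(n-1), so
    E[S - singletons] = theta * (a_n - n/(n-1)), where a_n = sum_{i=1}^{n-1} 1/i."""
    a_n = np.sum(1.0 / np.arange(1, n_seq))
    return (s_total - s_singletons) / (a_n - n_seq / (n_seq - 1.0))

# 40 sequences with 95 segregating sites, 30 of them singletons (hypothetical):
print(theta_no_singletons(40, 95, 30))
```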
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
Estimating Pole/Zero Errors in GSN-IU Network Calibration Metadata
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hutt, C. R.; Bolton, H. F.; Storm, T.; Gee, L. S.
2010-12-01
Converting the voltage output of a seismometer into ground motion requires correction of the data using a description of the instrument’s response. For the Global Seismographic Network (GSN), as well as many other networks, this instrument response is represented as a Laplace pole/zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. (Many GSN stations are operated by IRIS and USGS with network code “IU”.) This Laplace representation assumes that the seismometer behaves as a perfectly linear system, with temporal changes described adequately through multiple epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We developed an iterative three-step method to estimate instrument response model parameters (poles, zeros, and sensitivity and normalization parameters) and their associated errors using random calibration signals. First, we solve a coarse non-linear inverse problem using a least squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records. Second, we solve a non-linear parameter estimation problem by an iterative method to obtain the least squares best-fit Laplace pole/zero model. Third, by applying the central limit theorem we estimate the errors in this pole/zero model by solving the inverse problem at each frequency in a 2/3rds-octave band centered at each best-fit pole/zero frequency. This procedure yields error estimates of the >99% confidence interval. We demonstrate this method by applying it to a number of recent IU network calibration records.
Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip
2009-08-01
Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as low as or lower than that of Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.
Estimated Cost Savings from Reducing Errors in the Preparation of Sterile Doses of Medications
Schneider, Philip J.
2014-01-01
Abstract Background: Preventing intravenous (IV) preparation errors will improve patient safety and reduce costs by an unknown amount. Objective: To estimate the financial benefit of robotic preparation of sterile medication doses compared to traditional manual preparation techniques. Methods: A probability pathway model based on published rates of errors in the preparation of sterile doses of medications was developed. Literature reports of adverse events were used to project the array of medical outcomes that might result from these errors. These parameters were used as inputs to a customized simulation model that generated a distribution of possible outcomes, their probability, and associated costs. Results: By varying the important parameters across ranges found in published studies, the simulation model produced a range of outcomes for all likely possibilities. Thus it provided a reliable projection of the errors avoided and the cost savings of an automated sterile preparation technology. The average of 1,000 simulations resulted in the prevention of 5,420 medication errors and associated savings of $288,350 per year. The simulation results can be narrowed to specific scenarios by fixing model parameters that are known and allowing the unknown parameters to range across values found in previously published studies. Conclusions: The use of a robotic device can reduce health care costs by preventing errors that can cause adverse drug events. PMID:25477598
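A toy version of such a probability-pathway simulation is sketched below. Every rate and cost is a hypothetical placeholder standing in for the literature-derived ranges, so the printed numbers will not reproduce the published 5,420 errors or $288,350 figure.

```python
"""Toy probability-pathway simulation of IV-preparation error costs.
All rates and costs below are hypothetical placeholders, not the
published model parameters."""
import numpy as np

rng = np.random.default_rng(42)
n_sims = 1000
doses_per_year = 300_000

results = []
for _ in range(n_sims):
    # Draw uncertain inputs from ranges (stand-ins for literature ranges).
    error_rate = rng.uniform(0.01, 0.03)        # manual prep errors per dose
    p_harm = rng.uniform(0.01, 0.05)            # P(error leads to an ADE)
    cost_per_ade = rng.uniform(3000, 9000)      # $ per adverse drug event
    robot_error_reduction = rng.uniform(0.7, 0.95)

    errors_prevented = doses_per_year * error_rate * robot_error_reduction
    savings = errors_prevented * p_harm * cost_per_ade
    results.append((errors_prevented, savings))

prevented, savings = np.mean(results, axis=0)
print(f"mean errors prevented/yr: {prevented:,.0f}")
print(f"mean savings/yr: ${savings:,.0f}")
```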
Sensitivity of Satellite Rainfall Estimates Using a Multidimensional Error Stochastic Model
NASA Astrophysics Data System (ADS)
Falck, A. S.; Vila, D. A.; Tomasella, J.
2011-12-01
Error propagation models of satellite precipitation fields are a key element in the response and performance of hydrological models, which depend on the reliability and availability of rainfall data. However, most of these models treat the error as a unidimensional measurement, with no consideration of the type of process involved. The limitations of unidimensional error propagation models were overcome by multidimensional stochastic error propagation models. In this study, SREM2D (A Two-Dimensional Satellite Rainfall Error Model) was used to simulate satellite precipitation fields by inverse calibration of its parameters against a "reference" dataset, in this case gauge rainfall data. The sensitivity of satellite rainfall estimates from different satellite-based algorithms, intended for hydrologic simulation over the Tocantins basin, a transition area between the Amazon basin and the relatively drier northeast region, was investigated using the SREM2D error propagation model. Preliminary results show that SREM2D has the potential to generate realistic ensembles of satellite rain fields to feed hydrologic models. Ongoing research focuses on the impact of rainfall ensembles simulated by SREM2D on hydrologic modeling with the Large Basins Model of the National Institute for Space Research (MGB-INPE), developed for Brazilian basins.
Doubly quasi-consistent parallel explicit peer methods with built-in global error estimation
NASA Astrophysics Data System (ADS)
Kulikov, G. Yu; Weiner, R.
2010-03-01
Recently, Kulikov presented the idea of double quasi-consistency, which considerably facilitates global error estimation and control. More precisely, the local error control implemented in such methods simultaneously serves as a global error control. However, Kulikov studied only Nordsieck formulas and proved that there exists no doubly quasi-consistent scheme among those methods. Here, we prove that the class of doubly quasi-consistent formulas is not empty and present the first example of this sort. This scheme belongs to the family of superconvergent explicit two-step peer methods constructed by Weiner, Schmitt, Podhaisky and Jebens. We present a sample of s-stage doubly quasi-consistent parallel explicit peer methods of order s-1 for s=3. The notion of embedded formulas is utilized to evaluate efficiently the local error of the constructed doubly quasi-consistent peer method and, hence, its global error at the same time. The numerical examples of this paper confirm clearly that the usual local error control implemented in doubly quasi-consistent numerical integration techniques is capable of producing numerical solutions to user-supplied accuracy requirements in automatic mode.
The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds
Nash, Ulrik W.
2014-01-01
Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078
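A toy rendering of the mechanism may help: if judges start from a shared anchor and each correctly categorized cue multiplicatively shrinks (and each miscategorized cue widens) the remaining gap to the true value, the product of random factors yields a skewed judgment distribution whose tilt points back toward the anchor. This is only an illustration of the intuition, not the paper's exact AQ specification.

```python
"""Toy illustration of skew from sequential probabilistic cue
categorization; parameters and update rule are invented for the sketch."""
import numpy as np

rng = np.random.default_rng(13)
anchor, true_value = 100.0, 160.0            # extreme true value vs. anchor
p, delta, n_cues, n_judges = 0.7, 0.15, 30, 20_000

gap = (true_value - anchor) * np.ones(n_judges)
for _ in range(n_cues):
    correct = rng.random(n_judges) < p       # cue categorized correctly?
    # Correct cues shrink the remaining gap, incorrect ones widen it.
    gap *= np.where(correct, 1 - delta, 1 + delta)
judgments = true_value - gap                 # final positions of the judges

mean, median = judgments.mean(), np.median(judgments)
skew = ((judgments - mean) ** 3).mean() / judgments.std() ** 3
print(f"mean {mean:.1f}, median {median:.1f}, skewness {skew:.2f}")
print(f"collective error (mean - true): {mean - true_value:.1f}")
```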
Reducing Bias and Mean Squared Error Associated With Regression-Based Odds Ratio Estimators
Lyles, Robert H.; Guo, Ying; Greenland, Sander
2012-01-01
Ratio estimators of effect are ordinarily obtained by exponentiating maximum-likelihood estimators (MLEs) of log-linear or logistic regression coefficients. These estimators can display marked positive finite-sample bias, however. We propose a simple correction that removes a substantial portion of the bias due to exponentiation. By combining this correction with bias correction on the log scale, we demonstrate that one achieves complete removal of second-order bias in odds ratio estimators in important special cases. We show how this approach extends to address bias in odds or risk ratio estimators in many common regression settings. We also propose a class of estimators that provide reduced mean bias and squared error, while allowing the investigator to control the risk of underestimating the true ratio parameter. We present simulation studies in which the proposed estimators are shown to exhibit considerable reduction in bias, variance, and mean squared error compared to MLEs. Bootstrapping provides further improvement, including narrower confidence intervals without sacrificing coverage. PMID:22962519
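The flavor of the exponentiation correction can be sketched as follows. The adjustment exp(b - v/2) used here is a generic second-order correction for the convexity of the exponential (exact under normality with known variance); it conveys the idea but is not necessarily the exact estimator class proposed by the authors.

```python
"""Illustration of removing exponentiation bias in odds ratio estimates.
The exp(beta - var/2) adjustment is a generic convexity correction shown
for illustration, not the paper's exact proposal."""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, beta0, beta1 = 150, -1.0, 0.7
naive, corrected = [], []
for _ in range(500):
    x = rng.standard_normal(n)
    p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
    y = rng.binomial(1, p)
    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    b, v = fit.params[1], fit.cov_params()[1, 1]
    naive.append(np.exp(b))                 # MLE odds ratio, biased upward
    corrected.append(np.exp(b - v / 2))     # shrink for convexity of exp()
print("true OR:", np.exp(beta1))
print("mean naive OR:", np.mean(naive))
print("mean corrected OR:", np.mean(corrected))
```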
Local Estimation of Modeling Error in Multi-Scale Modeling of Heterogeneous Elastic Solids
Moody, Tristan
2008-03-19
One of the most important elements in the goal-oriented adaptive modeling (GOAM) algorithm is the quantity of interest $Q(u)$, for example the total reaction force in the $y$-direction on a Dirichlet portion $\Gamma_u \subset \partial\Omega$ of the boundary: $Q(u) = \int_{\Gamma_u} t(u)\cdot e_y\,ds = \int_{\Gamma_u} (E\,\nabla u\cdot n)\cdot e_y\,ds$. The quantity of interest can also be a nonlinear functional on $V$.
Qi, P; Xia, P
2014-06-01
Purpose: To evaluate the dosimetric impact of systematic MLC positional errors (PEs) on the quality of volumetric-modulated arc therapy (VMAT) plans. Methods: Five patients with head-and-neck cancer (HN) and five patients with prostate cancer were randomly chosen for this study. The clinically approved VMAT plans were designed with 2–4 coplanar arc beams with non-zero collimator angles in the Pinnacle planning system. The systematic MLC PEs of 0.5, 1.0, and 2.0 mm on both MLC banks were introduced into the original VMAT plans using an in-house program, and the plans were recalculated with the same planned Monitor Units in the Pinnacle system. For each patient, the original VMAT plans and the plans with MLC PEs were evaluated according to the dose-volume histogram information and Gamma index analysis. Results: For one primary target, the ratio of V100 in the plans with 0.5, 1.0, and 2.0 mm MLC PEs to those in the clinical plans was 98.8 ± 2.2%, 97.9 ± 2.1%, 90.1 ± 9.0% for HN cases and 99.5 ± 3.2%, 98.9 ± 1.0%, 97.0 ± 2.5% for prostate cases. For all OARs, the relative difference of Dmean in all plans was less than 1.5%. With 2 mm/2% criteria for the Gamma analysis, the passing rates were 99.0 ± 1.5% for HN cases and 99.7 ± 0.3% for prostate cases between the planar doses from the original plans and the plans with 1.0 mm MLC errors. The corresponding Gamma passing rates dropped to 88.9 ± 5.3% for HN cases and 83.4 ± 3.2% for prostate cases when comparing planar doses from the original plans and the plans with 2.0 mm MLC errors. Conclusion: For VMAT plans, systematic MLC PEs up to 1.0 mm did not affect the plan quality in terms of target coverage, OAR sparing, and Gamma analysis with 2 mm/2% criteria.
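A brute-force implementation of the 2D Gamma comparison with 2 mm/2% global criteria is sketched below on synthetic dose planes; clinical tools add dose interpolation and search speedups omitted here.

```python
"""Brute-force 2D Gamma analysis sketch (2 mm / 2% global criteria).
Synthetic dose planes on a coarse grid, purely illustrative."""
import numpy as np

dx = 1.0                                    # grid spacing, mm
y, x = np.mgrid[0:60, 0:60] * dx
ref = 100 * np.exp(-((x - 30) ** 2 + (y - 30) ** 2) / 400)  # reference dose
eva = np.roll(ref, 1, axis=1)               # "evaluated" plane: 1 mm shift

def gamma_pass_rate(ref, eva, dta=2.0, dd=0.02, threshold=0.10):
    dmax = ref.max()
    pts = np.argwhere(ref > threshold * dmax)   # ignore very low doses
    passed = 0
    for i, j in pts:
        # Search a neighborhood of ~3*DTA around each reference point.
        r = int(3 * dta / dx)
        i0, i1 = max(i - r, 0), min(i + r + 1, ref.shape[0])
        j0, j1 = max(j - r, 0), min(j + r + 1, ref.shape[1])
        ii, jj = np.mgrid[i0:i1, j0:j1]
        dist2 = ((ii - i) ** 2 + (jj - j) ** 2) * dx ** 2
        ddose2 = (eva[i0:i1, j0:j1] - ref[i, j]) ** 2
        gamma2 = dist2 / dta ** 2 + ddose2 / (dd * dmax) ** 2
        passed += gamma2.min() <= 1.0       # point passes if min gamma <= 1
    return passed / len(pts)

print(f"gamma pass rate: {gamma_pass_rate(ref, eva):.1%}")
```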
Petros Gaganis; Leslie Smith
2008-01-01
Two recently developed approaches to the quantification of model (conceptual) error in a single groundwater model, a per-datum calibration methodology and a Bayesian model error analysis, were applied to a problem of 90Sr migration to water wells at Chernobyl, Ukraine. The intent of this comparison is to demonstrate their utility in accounting for the uncertainty due to model error in estimating
Two large-scale environmental surveys, the National Stream Survey (NSS) and the Environmental Protection Agency's proposed Environmental Monitoring and Assessment Program (EMAP), motivated investigation of estimators of the variance of the Horvitz-Thompson estimator under variabl...
Jakeman, J.D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Jushan Bai; Haiqiang Chen; Terence Tai-Leung Chong; Seraph Xin Wang
2008-01-01
Summary: This paper considers the estimation of multiple-structural-break models under specification errors. A common example in economics is that the true model is measured in levels, but a linear-log model is estimated. We show that, under specification errors, if there is more than one break point and a single-break model is estimated, the estimated break point is consistent for
Hayes, Andrew F; Cai, Li
2007-11-01
Homoskedasticity is an important assumption in ordinary least squares (OLS) regression. Although the estimator of the regression parameters in OLS regression is unbiased when the homoskedasticity assumption is violated, the estimator of the covariance matrix of the parameter estimates can be biased and inconsistent under heteroskedasticity, which can make significance tests and confidence intervals either liberal or conservative. After a brief description of heteroskedasticity and its effects on inference in OLS regression, we discuss a family of heteroskedasticity-consistent standard error estimators for OLS regression and argue that investigators should routinely use one of these estimators when conducting hypothesis tests using OLS regression. To facilitate the adoption of this recommendation, we provide easy-to-use SPSS and SAS macros to implement the procedures discussed here. PMID:18183883
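For concreteness, the sandwich arithmetic behind one popular member of that family (HC3) can be written out directly; variable names below are illustrative and this is not the SPSS/SAS macro code.

```python
"""HC3 heteroskedasticity-consistent standard errors for OLS via the
sandwich formula, next to the conventional homoskedastic ones."""
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.uniform(0, 4, n)
y = 1.0 + 0.5 * x + rng.standard_normal(n) * (0.5 + x)  # variance grows with x

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta                                  # OLS residuals
h = np.einsum("ij,jk,ik->i", X, XtX_inv, X)       # leverage values h_ii

# HC3: inflate each squared residual by (1 - h_ii)^(-2).
omega = (e / (1 - h)) ** 2
cov_hc3 = XtX_inv @ (X.T @ (X * omega[:, None])) @ XtX_inv
se_hc3 = np.sqrt(np.diag(cov_hc3))

# Conventional (homoskedastic) standard errors for comparison.
s2 = e @ e / (n - 2)
se_ols = np.sqrt(np.diag(s2 * XtX_inv))
print("OLS SE:", se_ols, " HC3 SE:", se_hc3)
```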
On the error in crop acreage estimation using satellite (LANDSAT) data
NASA Technical Reports Server (NTRS)
Chhikara, R. (principal investigator)
1983-01-01
The problem of crop acreage estimation using satellite data is discussed. Bias and variance of a crop proportion estimate in an area segment obtained from the classification of its multispectral sensor data are derived as functions of the means, variances, and covariance of error rates. The linear discriminant analysis and the class proportion estimation for the two class case are extended to include a third class of measurement units, where these units are mixed on ground. Special attention is given to the investigation of mislabeling in training samples and its effect on crop proportion estimation. It is shown that the bias and variance of the estimate of a specific crop acreage proportion increase as the disparity in mislabeling rates between two classes increases. Some interaction is shown to take place, causing the bias and the variance to decrease at first and then to increase, as the mixed unit class varies in size from 0 to 50 percent of the total area segment.
Protrusion Method for Automated Estimation of Polyp Size on CT
van Vliet, Lucas J.
2008-05-01
Only fragments of this AJR abstract survive extraction: systematic error is a significant issue in measuring polyp size on CT against the reference standard of colonoscopy [3, 4]; dependence of the measurements on the reader's experience and the viewing display used has been reported [6]; automated techniques have been introduced; and there is debate over the need for polypectomy for 6- to 9-mm polyps.
Mass Load Estimation Errors Utilizing Grab Sampling Strategies in a Karst Watershed
NASA Astrophysics Data System (ADS)
Fogle, Alex W.; Taraba, Joseph L.; Dinger, James S.
2003-12-01
Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
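The sensitivity to time of day can be reproduced on synthetic data: the sketch below combines monthly grab concentrations with a continuous flow record and shows how sampling at a fixed hour off the diurnal mean biases the annual load. The diurnal amplitude and flow series are invented.

```python
"""Toy demonstration of grab-sampling mass load error under a diurnal
concentration cycle; synthetic hourly data, not the field measurements."""
import numpy as np

hours = np.arange(24 * 365)
flow = 0.5 + 0.1 * np.sin(2 * np.pi * hours / (24 * 30))   # m^3/s
conc = 10 + 3 * np.sin(2 * np.pi * (hours % 24) / 24)      # mg/L, diurnal

# mg/L * m^3/s = g/s, so g per hour after * 3600; /1000 -> kg per year
true_load = np.sum(conc * flow * 3600) / 1000

def grab_load(sample_hour, interval_days=30):
    """Monthly grab concentration combined with the continuous flow record."""
    idx = np.arange(sample_hour, hours.size, interval_days * 24)
    return conc[idx].mean() * np.sum(flow * 3600) / 1000

for h in (0, 6, 12, 18):
    est = grab_load(h)
    print(f"sampled at {h:02d}:00 -> load error {100*(est-true_load)/true_load:+.1f}%")
```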
de Dieu Tapsoba, Jean; Lee, Shen-Ming; Wang, Ching-Yun
2013-01-01
Data collected in many epidemiological or clinical research studies are often contaminated with measurement errors that may be of classical or Berkson error type. The measurement error may also be a combination of both classical and Berkson errors and failure to account for both errors could lead to unreliable inference in many situations. We consider regression analysis in generalized linear models when some covariates are prone to a mixture of Berkson and classical errors and calibration data are available only for some subjects in a subsample. We propose an expected estimating equation approach to accommodate both errors in generalized linear regression analyses. The proposed method can consistently estimate the classical and Berkson error variances based on the available data, without knowing the mixture percentage. Its finite-sample performance is investigated numerically. Our method is illustrated by an application to real data from an HIV vaccine study. PMID:24009099
Marcel Arndt; Mitchell Luskin
2007-04-15
We propose and analyze a goal-oriented a posteriori error estimator for the atomistic-continuum modeling error in the quasicontinuum method. Based on this error estimator, we develop an algorithm which adaptively determines the atomistic and continuum regions to compute a quantity of interest to within a given tolerance. We apply the algorithm to the computation of the structure of a crystallographic defect described by a Frenkel-Kontorova model and present the results of numerical experiments. The numerical results show that our method gives an efficient estimate of the error and a nearly optimal atomistic-continuum modeling strategy.
NASA Astrophysics Data System (ADS)
Creuse, Emmanuel; Nicaise, Serge
2008-03-01
In this paper, we consider a discretization method proposed by Wieners and Wohlmuth [The coupling of mixed and conforming finite element discretizations, in: Domain Decomposition Methods, vol. 10, Contemporary Mathematics, vol. 218, American Mathematical Society, Providence RI, 1998, pp. 547-554] (see also [R.D. Lazarov, J. Pasciak, P.S. Vassilevski, Iterative solution of a coupled mixed and standard Galerkin discretization method for elliptic problems, Numer. Linear Algebra Appl. 8 (2001) 13-31]) for second order operators, which is a coupling between a mixed method in a subdomain and a standard Galerkin method in the remaining part of the domain. We perform an a posteriori error analysis of residual type of this method, by combining some arguments from a posteriori error analysis of Galerkin methods and mixed methods. The reliability and efficiency of the estimator are proved. Some numerical tests are presented and confirm the theoretical error bounds.
Estimating shipper/receiver measurement error variances by use of ANOVA
Lanning, B.M.
1993-02-01
Every measurement made on nuclear material items is subject to measurement errors, inherent variations in the measurement process that cause the measured value to differ from the true value. In practice, it is important to know the variance (or standard deviation) of these measurement errors, because this indicates the precision of reported results. If a nuclear material facility is generating paired data (e.g., shipper/receiver) where party 1 and party 2 each make independent measurements on the same items, the measurement error variance associated with both parties can be extracted. This paper presents a straightforward method that uses analysis of variance (ANOVA) in standard statistical computer packages to obtain valid estimates of the measurement variances. Also, with the help of the P-value, significant biases between the two parties can be detected directly without reference to an F-table.
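One way to see the paired-data arithmetic is the Grubbs moment decomposition, which recovers each party's error variance from the sample variances and covariance; the sketch below uses it together with a paired t-test for relative bias. This mirrors what the ANOVA extraction delivers but is not the paper's statistical-package code.

```python
"""Grubbs-type extraction of per-party error variances from paired
shipper/receiver data; synthetic items and error levels."""
import numpy as np

rng = np.random.default_rng(3)
n_items = 200
true = rng.normal(100.0, 3.0, n_items)           # true item values
shipper = true + rng.normal(0, 1.0, n_items)     # party-1 errors (SD 1.0)
receiver = true + rng.normal(0, 2.0, n_items)    # party-2 errors (SD 2.0)

c = np.cov(shipper, receiver)                    # 2x2 sample covariance
var_ship = c[0, 0] - c[0, 1]                     # Var(x) - Cov(x, y); moment
var_recv = c[1, 1] - c[0, 1]                     # estimates, can dip negative
                                                 # in small samples
print("shipper error SD estimate:  %.2f (true 1.00)" % var_ship ** 0.5)
print("receiver error SD estimate: %.2f (true 2.00)" % var_recv ** 0.5)

d = shipper - receiver                           # paired test for relative bias
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(n_items))
print("t statistic for shipper/receiver bias: %.2f" % t_stat)
```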
Estimation of total error in DWPF reported radionuclide inventories. Revision 1
Edwards, T.B.
1995-06-05
The Defense Waste Processing Facility (DWPF) at the Savannah River Site is required to determine and report the radionuclide inventory of its glass product. For each macro-batch, the DWPF will report both the total amount (in curies) of each reportable radionuclide and the average concentration (in curies/gram of glass) of each reportable radionuclide. The DWPF is to provide the estimated error of these reported values of its radionuclide inventory as well. The objective of this document is to provide a framework for determining the estimated error in DWPF's reporting of these radionuclide inventories. This report investigates the impact of random errors due to measurement and sampling on the total amount of each reportable radionuclide in a given macro-batch. In addition, the impact of these measurement and sampling errors and process variation are evaluated to determine the uncertainty in the reported average concentrations of radionuclides in DWPF's filled canister inventory resulting from each macro-batch.
Estimation of random errors for lidar based on noise scale factor
NASA Astrophysics Data System (ADS)
Wang, Huan-Xue; Liu, Jian-Guo; Zhang, Tian-Shu
2015-08-01
Estimation of random errors, which are due to shot noise of the photomultiplier tube (PMT) or avalanche photodiode (APD) detectors, is very necessary in lidar observation. Because the incident electrons follow a Poisson distribution, the standard deviation of the signal remains proportional to the square root of its mean value. Based on this relationship, the noise scale factor (NSF) is introduced into the estimation, which requires only a single data sample. This method overcomes the interference of atmospheric fluctuations in the calculation of random errors. The results show that this method is feasible and reliable. Project supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB05040300) and the National Natural Science Foundation of China (Grant No. 41205119).
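A minimal sketch of the idea, assuming shot-noise-limited counts so that std(S) = NSF * sqrt(mean(S)): estimate NSF from a quiet far-range segment of a single profile, then assign a random error to every bin. The lidar profile shape and detector gain are synthetic.

```python
"""Noise-scale-factor (NSF) sketch for lidar random-error estimation
from a single profile; synthetic signal, illustrative only."""
import numpy as np

rng = np.random.default_rng(5)
ranges = np.linspace(0.1, 10, 1000)                        # km
mean_sig = 5000 * np.exp(-0.3 * ranges) / ranges ** 2      # lidar-like decay
gain = 4.0                                                 # counts/photoelectron
signal = gain * rng.poisson(mean_sig / gain)               # scaled shot noise

quiet = signal[-200:]                     # slowly varying far-range bins
nsf = quiet.std(ddof=1) / np.sqrt(quiet.mean())

sigma = nsf * np.sqrt(np.clip(signal, 1, None))            # per-bin error
print("estimated NSF: %.2f (ideal sqrt(gain) = %.2f)" % (nsf, gain ** 0.5))
print("relative error near/far: %.4f / %.2f"
      % (sigma[0] / signal[0], sigma[-1] / max(signal[-1], 1)))
```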
A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation
NASA Technical Reports Server (NTRS)
Tessler, A.; Riggs, H. R.; Dambach, M.
1998-01-01
A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation is originated with a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from enforcing explicitly a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.
Houle, D; Meyer, K
2015-08-01
We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. PMID:26079756
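The resampling step itself is compact, as sketched below: draw vectorized G matrices from a multivariate normal centered on the REML estimate and propagate each draw through the statistic of interest. The 2x2 G and its sampling covariance are invented here; in practice they come from the inverse information matrix of the REML fit.

```python
"""REML-MVN-style resampling sketch: sample G matrices and propagate to
a derived statistic; G_hat and its covariance are made up for
illustration."""
import numpy as np

rng = np.random.default_rng(11)

# Parameterize by the unique elements vech(G) = (g11, g12, g22); their
# sampling covariance would come from the REML information matrix.
vech = np.array([2.0, 0.5, 1.0])                  # estimated G (2 traits)
V = np.diag([0.15, 0.08, 0.10]) ** 2              # hypothetical covariance

samples = rng.multivariate_normal(vech, V, size=10_000)
stat = []                                          # e.g. mean eigenvalue of G
for g11, g12, g22 in samples:
    G = np.array([[g11, g12], [g12, g22]])
    stat.append(np.linalg.eigvalsh(G).mean())

lo, hi = np.percentile(stat, [2.5, 97.5])
print(f"mean evolvability-like statistic: {np.mean(stat):.3f}, "
      f"95% CI ({lo:.3f}, {hi:.3f})")
```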
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
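A stripped-down sketch of the iterative two-stage loop, assuming an AR(1) form for the correlated total errors and a one-parameter linear model in place of the reactive-transport models: calibrate under the current covariance, re-infer the covariance from the residuals, and finally evaluate the correlated-error log likelihood that feeds the selection criteria.

```python
"""Iterative two-stage sketch: GLS calibration under an AR(1) total-error
covariance re-inferred from residuals each pass. All models and numbers
are invented stand-ins for the groundwater setting."""
import numpy as np

rng = np.random.default_rng(12)
t = np.arange(100, dtype=float)
noise = np.convolve(rng.standard_normal(130), 0.9 ** np.arange(30))[:100]
y_obs = 0.05 * t + 0.1 * noise                 # trend + correlated errors

def ar1_cov(n, sigma2, phi):
    """Covariance of an AR(1) process with innovation variance sigma2."""
    i = np.arange(n)
    return sigma2 / (1 - phi ** 2) * phi ** np.abs(i[:, None] - i[None, :])

a = 0.05                                       # initial slope estimate
for _ in range(5):
    r = y_obs - a * t                          # stage 1: current residuals
    phi = np.corrcoef(r[:-1], r[1:])[0, 1]     # lag-1 autocorrelation
    sigma2 = r.var() * (1 - phi ** 2)
    C = ar1_cov(t.size, sigma2, phi)           # inferred total-error covariance
    W = np.linalg.inv(C)
    a = (t @ W @ y_obs) / (t @ W @ t)          # GLS refit of the parameter

# Stage 2: correlated-error NLL, the ingredient of AIC/BIC/KIC weights.
r = y_obs - a * t
nll = 0.5 * (t.size * np.log(2 * np.pi) + np.linalg.slogdet(C)[1] + r @ W @ r)
print("fitted slope: %.4f, correlated-error NLL: %.1f" % (a, nll))
```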
Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N.; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni
2013-10-15
The development of a volumetric apparatus (also known as a Sieverts’ apparatus) for accurate and reliable hydrogen adsorption measurement is shown. The instrument minimizes the sources of systematic errors which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and thermodynamical properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are represented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of the molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a 4 order-of-magnitude pressure range (from 1 kPa to 8 MPa) and in temperatures ranging between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature.
Ricardo H. Nochetto
1985-01-01
The enthalpy formulation of two-phase Stefan problems, with linear boundary conditions, is approximated by $C^0$-piecewise linear finite elements in space and backward differences in time combined with a regularization procedure. Error estimates of $L^2$-type are obtained. For general regularized problems an order $\varepsilon^{1/2}$ is proved, while the order is shown to be $\varepsilon$ for non-degenerate cases. For discrete problems an order
Errors in Estimating River Discharge from Remote Sensing based on Manning's Equation
University of Washington at Seattle
The analysis builds on Manning's equation and its reductions for remotely sensed channel variables:
$Q = \frac{1}{n} A R^{2/3} S^{1/2}$ (1)
$Q = \frac{1}{n} w z^{5/3} S^{1/2}$ (2)
$Q = \frac{1}{n} w (z_0 + dz)^{5/3} S^{1/2}$ (3)
where $Q$ = discharge (m^3/s), $n$ = Manning's roughness coefficient, $A$ = cross-sectional area of the channel (m^2), $R$ = hydraulic radius $= A/P$ (m), $P$ = wetted perimeter (m), $w$ = channel width (m), and $z$ = flow depth (m).
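A quick numerical look at equations (2)-(3), assuming nominal values for the channel: because Q scales as z^(5/3), a relative depth error is amplified by roughly 5/3 in the discharge estimate.

```python
"""Manning's-equation discharge from remotely sensed width, depth, and
slope, with a look at how a depth error propagates. Channel values are
nominal illustrations."""
import numpy as np

def manning_q(n, w, z, s):
    """Wide-channel form: Q = (1/n) * w * z**(5/3) * sqrt(S)."""
    return (1.0 / n) * w * z ** (5.0 / 3.0) * np.sqrt(s)

n, w, z, s = 0.035, 150.0, 4.0, 1e-4   # roughness, width (m), depth (m), slope
q0 = manning_q(n, w, z, s)
for dz in (-0.5, 0.0, 0.5):            # +/- 0.5 m remote-sensing depth error
    q = manning_q(n, w, z + dz, s)
    print(f"depth {z+dz:.1f} m -> Q = {q:7.1f} m^3/s ({100*(q/q0-1):+.1f}%)")
```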
Estimating Antenna-Pointing Error Using A Focal-Plane Array
NASA Technical Reports Server (NTRS)
Zohar, Shalhav; Vilnrotter, Victor A.
1994-01-01
Common method of determining residual errors in pointing of paraboloidal-reflector microwave antennas involves constantly dithering antenna mechanically about estimated direction of source. For cases where expense of additional focal-plane collecting horns (and their amplifiers) justified, new method eliminates mechanical dithering. Outputs of multiple receiving feed horns processed to extract phase information indicative of direction of arrival of signal received from distant source.
Error covariance analysis of a Karhunen-Loeve random field estimator
S. N. Gupta
1985-01-01
An error covariance analysis of a two-dimensional Karhunen-Loeve random field estimator (KLE) for gridded gravity data has been carried out without actually using the data. The analysis is based on the KLE developed by Bose (1983) for gravity compensation in an inertial measurement navigation system. The continuous gain coefficients Beta(mnj) and the continuous KL gain vector K(mn) were calculated using
NASA Technical Reports Server (NTRS)
Lu, Hui-Ling; Cheng, Victor H. L.; Leitner, Jesse A.; Carpenter, Kenneth G.
2004-01-01
Long-baseline space interferometers involving formation flying of multiple spacecraft hold great promise as future space missions for high-resolution imagery. The major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to control these spacecraft and their optics payloads in the specified configuration accurately. In this paper, we describe our effort toward fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present an estimation procedure that effectively extracts relative x/y translational exit pupil aperture deviations from the raw interferometric image with small estimation errors.
Møller, H; Richards, S; Hanchett, N; Riaz, S P; Lüchtenborg, M; Holmberg, L; Robinson, D
2011-01-01
Background: It has been suggested that cancer registries in England are too dependent on processing of information from death certificates, and consequently that cancer survival statistics reported for England are systematically biased and too low. Methods: We have linked routine cancer registration records for colorectal, lung, and breast cancer patients with information from the Hospital Episode Statistics (HES) database for the period 2001–2007. Based on record linkage with the HES database, records missing in the cancer register were identified, and dates of diagnosis were revised. The effects of those revisions on the estimated survival time and proportion of patients surviving for 1 year or more were studied. Cases that were absent in the cancer register and present in the HES data with a relevant diagnosis code and a relevant surgery code were used to estimate (a) the completeness of the cancer register. Differences in survival times calculated from the two data sources were used to estimate (b) the possible extent of error in the recorded survival time in the cancer register. Finally, we combined (a) and (b) to estimate (c) the resulting differences in 1-year cumulative survival estimates. Results: Completeness of case ascertainment in English cancer registries is high, around 98–99%. Using HES data added 1.9%, 0.4% and 2.0% to the number of colorectal, lung, and breast cancer registrations, respectively. Around 5–6% of rapidly fatal cancer registrations had survival time extended by more than a month, and almost 3% of rapidly fatal breast cancer records were extended by more than a year. The resulting impact on estimates of 1-year survival was small, amounting to 1.0, 0.8, and 0.4 percentage points for colorectal, lung, and breast cancer, respectively. Interpretation: English cancer registration data cannot be dismissed as unfit for the purpose of cancer survival analysis. However, investigators should retain a critical attitude to data quality and sources of error in international cancer survival studies. PMID:21559016
A homogeneous linear estimation method for system error in data assimilation
NASA Astrophysics Data System (ADS)
Wu, Wei; Wu, Zengmao; Gao, Shanhong; Zheng, Yi
2013-09-01
In this paper, a new bias estimation method is proposed and applied in a regional ensemble Kalman filter (EnKF) based on the Weather Research and Forecasting (WRF) Model. The method is based on a homogeneous linear bias model, and the model bias is estimated using statistics at each assimilation cycle, which differs from the state augmentation methods proposed in the previous literature. The new method provides a good estimation of the model bias of some specific variables, such as sea level pressure (SLP). A series of numerical experiments with the EnKF are performed to examine the new method under severe weather conditions. Results show the positive effect of the method on the forecasting of circulation patterns and mesoscale systems, and the reduction of analysis errors. The background error covariance structures of surface variables and the effects of model system bias on the EnKF are also studied, and a new concept, the 'correlation scale', is introduced. However, the new method needs further evaluation with more assimilation cases.
NASA Astrophysics Data System (ADS)
Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.
2012-06-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
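The regularization/cross-validation/ensemble machinery can be miniaturized as below: a Tikhonov-regularized fit in a flexible polynomial space, with the regularization strength chosen by cross-validation and an ensemble of coefficient vectors whose prediction spread is scaled to the cross-validation error. This is a generic illustration, not the BEEF-vdW basis, data, or model space.

```python
"""Miniature of regularized semiempirical fitting with a Bayesian-style
error ensemble; generic polynomial example, illustrative only."""
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 40)
y = np.sin(2.5 * x) + 0.05 * rng.standard_normal(x.size)   # "training data"
X = np.vander(x, 12, increasing=True)                      # flexible basis

def ridge(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Pick lambda by 5-fold cross-validation to avoid overfitting.
folds = np.arange(x.size) % 5
def cv_error(lam):
    err = 0.0
    for k in range(5):
        tr, te = folds != k, folds == k
        c = ridge(X[tr], y[tr], lam)
        err += np.sum((X[te] @ c - y[te]) ** 2)
    return err

lam = min(np.logspace(-8, 2, 30), key=cv_error)
c = ridge(X, y, lam)

# Ensemble: sample coefficients around the optimum so the spread of the
# ensemble predictions reflects the cross-validation error scale.
A = np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1]))
sigma2 = cv_error(lam) / x.size
ens = rng.multivariate_normal(c, sigma2 * A @ (X.T @ X) @ A, size=200)
spread = (X @ ens.T).std(axis=1)
print("best lambda: %.1e, mean predicted error bar: %.3f" % (lam, spread.mean()))
```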
Application of least squares variance component estimation to errors-in-variables models
NASA Astrophysics Data System (ADS)
Amiri-Simkooei, A. R.
2013-11-01
In an earlier work, a simple and flexible formulation for the weighted total least squares (WTLS) problem was presented. The formulation allows one to directly apply the existing body of knowledge of least squares theory to errors-in-variables (EIV) models in which the complete description of the covariance matrices of the observation vector and of the design matrix can be employed. This contribution applies one of the well-known theories, least squares variance component estimation (LS-VCE), to the total least squares problem. LS-VCE is adopted to cope with the estimation of different variance components in an EIV model having a general covariance matrix obtained from the (fully populated) covariance matrices of the functionally independent variables and a proper application of the error propagation law. Two empirical examples using real and simulated data are presented to illustrate the theory. The first example is a linear regression model and the second example is a 2-D affine transformation. For each application, two variance components, one for the observation vector and one for the coefficient matrix, are simultaneously estimated. Because the formulation is based on the standard least squares theory, the covariance matrix of the estimates in general and the precision of the estimates in particular can also be presented.
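A sketch of the LS-VCE iteration for the ordinary linear model y = Ax + e with Q_y = s1*Q1 + s2*Q2 is given below, using the standard LS-VCE normal equations; the EIV-specific construction of the covariance matrix from the design-matrix uncertainty is beyond this fragment.

```python
"""LS-VCE sketch for two variance components in y = A x + e,
Q_y = s1*Q1 + s2*Q2; plain regression stands in for the EIV models."""
import numpy as np

rng = np.random.default_rng(4)
m = 400
A = np.column_stack([np.ones(m), np.linspace(0, 1, m)])
Q1, Q2 = np.eye(m), np.diag(np.linspace(0.2, 2.0, m))   # cofactor matrices
s_true = np.array([0.5, 1.5])                           # true components
L = np.linalg.cholesky(s_true[0] * Q1 + s_true[1] * Q2)
y = A @ np.array([1.0, 2.0]) + L @ rng.standard_normal(m)

s = np.array([1.0, 1.0])                                # initial guess
for _ in range(20):
    Q = s[0] * Q1 + s[1] * Q2
    W = np.linalg.inv(Q)
    Pperp = np.eye(m) - A @ np.linalg.solve(A.T @ W @ A, A.T @ W)
    e = Pperp @ y                                       # LS residuals
    WP = W @ Pperp
    # LS-VCE normal equations: N s = l
    l = 0.5 * np.array([e @ W @ Qk @ W @ e for Qk in (Q1, Q2)])
    N = 0.5 * np.array([[np.trace(WP @ Qi @ WP @ Qj) for Qj in (Q1, Q2)]
                        for Qi in (Q1, Q2)])
    s_new = np.linalg.solve(N, l)
    if np.allclose(s_new, s, rtol=1e-8):
        break
    s = s_new
print("estimated components:", s, " (true:", s_true, ")")
```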
Error analysis of leaf area estimates made from allometric regression models
NASA Technical Reports Server (NTRS)
Feiveson, A. H.; Chhikara, R. S.
1986-01-01
Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).
NASA Astrophysics Data System (ADS)
Li, Y.; Ryu, D.; Western, A. W.; Wang, Q.; Robertson, D.; Crow, W. T.
2013-12-01
Timely and reliable streamflow forecasting with acceptable accuracy is fundamental for flood response and risk management. However, streamflow forecasting models are subject to uncertainties from inputs, state variables, model parameters and structures. This has led to an ongoing development of methods for uncertainty quantification (e.g. generalized likelihood and Bayesian approaches) and methods for uncertainty reduction (e.g. sequential and variational data assimilation approaches). These two classes of methods are distinct yet related; e.g., the validity of data assimilation is essentially determined by the reliability of error specification. Error specification has been one of the most challenging areas in hydrologic data assimilation, and there is a major opportunity for implementing uncertainty quantification approaches to inform both model and observation uncertainties. In this study, ensemble data assimilation methods are combined with the maximum a posteriori (MAP) error estimation approach to construct an integrated error estimation and data assimilation scheme for operational streamflow forecasting. We contrast the performance of two different data assimilation schemes: a lag-aware ensemble Kalman smoother (EnKS) and the conventional ensemble Kalman filter (EnKF). The schemes are implemented for a catchment upstream of Myrtleford in the Ovens river basin, Australia, to assimilate real-time discharge observations into a conceptual catchment model, modèle du Génie Rural à 4 paramètres Horaire (GR4H). The performance of the integrated system is evaluated in both a synthetic forecasting scenario with observed precipitation and an operational forecasting scenario with Numerical Weather Prediction (NWP) forecast rainfall. The results show that the error parameters estimated by the MAP approach generate a reliable spread of streamflow prediction. Continuous state updating reduces uncertainty in initial states and thereby improves the forecasting accuracy significantly. The EnKS streamflow forecasts are more accurate and reliable than the EnKF for the synthetic scenario. They also alleviate instability in the EnKF due to overcorrection of current state variables. For the operational forecasting case, the forecasts benefit less from state updating and the difference between the EnKS and EnKF becomes less significant. This is because the uncertainty in the NWP rainfall forecasts becomes dominant with increasing lead time. (Figure: forecast discharge in 2010; solid curves are observations and gray areas indicate 95% probabilistic forecast intervals. (a) Open-loop ensemble spread based on the error parameters estimated by the MAP; (b) 60-h lead-time forecasts based on the EnKS.)
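The state-updating step at the heart of both schemes is small; the sketch below runs a perturbed-observation EnKF update on a toy one-store storage-discharge model. The model, error magnitudes, and forcing are invented and far simpler than GR4H; the lag-aware smoother and MAP error estimation are not shown.

```python
"""Minimal perturbed-observation EnKF update for a toy storage-discharge
model; all settings are invented for the sketch."""
import numpy as np

rng = np.random.default_rng(8)
n_ens, k = 100, 0.1                          # ensemble size, recession const

def step(storage, rain):
    """Toy water balance: dS = P - k*S, discharge q = k*S."""
    s = storage + rain - k * storage
    return s, k * s

truth = 50.0
ens = rng.normal(50.0, 10.0, n_ens)          # uncertain initial storage
for t in range(30):
    rain = max(0.0, rng.normal(2.0, 2.0))
    truth, q_true = step(truth, rain)
    # Propagate ensemble with perturbed forcing (input uncertainty).
    ens = np.array([step(s, rain * rng.lognormal(0, 0.3))[0] for s in ens])
    q_ens = k * ens
    y = q_true + rng.normal(0, 0.2)          # noisy discharge observation
    # EnKF gain: K = cov(S, q) / (var(q) + R); perturbed observations.
    K = np.cov(ens, q_ens)[0, 1] / (np.var(q_ens, ddof=1) + 0.2 ** 2)
    ens = ens + K * (y + rng.normal(0, 0.2, n_ens) - q_ens)
print("truth: %.1f, analysis mean: %.1f +/- %.1f"
      % (truth, ens.mean(), ens.std()))
```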
Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.
2007-01-01
When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.
Xue, Hongqi; Miao, Hongyu; Wu, Hulin
2010-01-01
This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^{-1/(p∧4)} (where p∧4 = min(p, 4)), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance for the selection of the step size in numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics. PMID:21132064
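The estimator itself is easy to sketch: embed a Runge-Kutta ODE solve inside the nonlinear least-squares objective, keeping the solver tolerance tight so numerical error stays negligible next to measurement noise, per the rate condition above. A logistic growth model stands in for the viral dynamics model.

```python
"""Numerical-solution-based NLS sketch: ODE solved by a Runge-Kutta
integrator inside the least-squares objective. Logistic model used as an
illustrative stand-in."""
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
t_obs = np.linspace(0, 10, 25)
theta_true = (0.8, 10.0)                       # growth rate r, capacity K

def solve(theta):
    r, K = theta
    sol = solve_ivp(lambda t, y: r * y * (1 - y / K), (0, 10), [0.5],
                    t_eval=t_obs, rtol=1e-8, atol=1e-10)  # RK45 by default
    return sol.y[0]

y_obs = solve(theta_true) + 0.2 * rng.standard_normal(t_obs.size)

fit = least_squares(lambda th: solve(th) - y_obs, x0=(0.5, 8.0))
print("estimated (r, K):", fit.x, " true:", theta_true)
```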
NASA Astrophysics Data System (ADS)
Moya Quiroga, Vladimir; Mano, Akira; Asaoka, Yoshihiro; Udo, Keiko; Kure, Shuichi; Mendoza, Javier
2013-04-01
Precipitation is a major component of the water cycle that returns atmospheric water to the ground; without it, fresh water would simply drain down the rivers into the seas. Although precipitation measurement seems an easy and simple procedure, it is affected by several systematic errors that lead to underestimation of the actual precipitation. Hence, precipitation measurements should be corrected before their use. Different correction approaches have been suggested for correcting precipitation measurements. Nevertheless, focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this presentation we propose a Bayesian model averaging (BMA) approach for correcting rain gauge measurement errors. In the present study we used meteorological data recorded every 10 minutes at the Condoriri station in the Bolivian Andes. Comparing rain gauge measurements with totalizer measurements made it possible to estimate the rain undercatch. First, different deterministic models were optimized for the correction of precipitation, considering wind effects and precipitation intensities. Then, a probabilistic BMA correction was performed. The corrected precipitation was then separated into rainfall, snowfall, and mixed precipitation using typical Andean temperature thresholds of -1°C and 3°C. Relating the total snowfall to the glacier ice density, it was possible to estimate the glacier accumulation. Results show a yearly glacier accumulation of 1200 mm/year. Besides, the results confirm that in tropical glaciers winter is not an accumulation period but one of low ablation. Results show that neglecting such a correction may induce an underestimation of more than 35% of total precipitation, and the uncertainty range may induce differences of up to 200 mm/year. This research is developed within the GRANDE project (Glacier Retreat impact Assessment and National policy Development), financed by SATREPS from JST-JICA.
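A minimal sketch of the BMA combination step, assuming Gaussian likelihoods, equal model priors, and two invented catch-ratio correction models evaluated against a totalizer reference:

```python
"""BMA sketch for combining gauge-correction models; weights are
posterior model probabilities from Gaussian likelihoods. Both correction
models and all numbers are invented stand-ins."""
import numpy as np

rng = np.random.default_rng(9)
wind = rng.uniform(0, 10, 200)                      # event wind speeds (m/s)
true_ratio = 1.0 / (1.0 + 0.05 * wind)              # "actual" catch ratio
obs_ratio = true_ratio + rng.normal(0, 0.03, 200)   # gauge/totalizer data

models = {
    "linear":      lambda u: 1.0 - 0.04 * u,
    "exponential": lambda u: np.exp(-0.045 * u),
}

loglik = {}
for name, f in models.items():
    resid = obs_ratio - f(wind)
    s2 = resid.var()
    loglik[name] = (-0.5 * resid.size * np.log(2 * np.pi * s2)
                    - 0.5 * resid @ resid / s2)

peak = max(loglik.values())                         # stabilize exponentials
w = {k: np.exp(v - peak) for k, v in loglik.items()}
z = sum(w.values())
weights = {k: v / z for k, v in w.items()}
print("BMA weights:", {k: round(v, 3) for k, v in weights.items()})

gauge_mm, u = 5.0, 6.0                              # measured 5 mm at 6 m/s
ratio = sum(weights[k] * models[k](u) for k in models)
print("BMA-corrected precipitation: %.2f mm" % (gauge_mm / ratio))
```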
NASA Technical Reports Server (NTRS)
Sparks, Lawrence
2013-01-01
Current satellite-based augmentation systems estimate ionospheric delay using algorithms that assume the electron density of the ionosphere is non-negligible only in a thin shell located near the peak of the actual profile. In its initial operating capability, for example, the Wide Area Augmentation System incorporated the thin shell model into an estimation algorithm that calculates vertical delay using a planar fit. Under disturbed conditions or at low latitude where ionospheric structure is complex, however, the thin shell approximation can serve as a significant source of estimation error. A recent upgrade of the system replaced the planar fit algorithm with an algorithm based upon kriging. The upgrade owes its success, in part, to the ability of kriging to mitigate the error due to this approximation. Previously, alternative delay estimation algorithms have been proposed that eliminate the need for invoking the thin shell model altogether. Prior analyses have compared the accuracy achieved by these methods to the accuracy achieved by the planar fit algorithm. This paper extends these analyses to include a comparison with the accuracy achieved by kriging. It concludes by examining how a satellite-based augmentation system might be implemented without recourse to the thin shell approximation.
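For intuition, the sketch below contrasts an ordinary kriging estimate with a planar fit at a single estimation point, using an invented exponential covariance and synthetic delays at surrounding pierce points; the operational WAAS algorithms are substantially more involved.

```python
"""Ordinary kriging vs. planar fit for vertical delay at one point;
synthetic ionospheric pierce-point (IPP) data, illustrative only."""
import numpy as np

rng = np.random.default_rng(10)
pts = rng.uniform(-5, 5, (30, 2))                      # IPP coords (deg)
delay = 3 + 0.2 * pts[:, 0] + 0.5 * np.sin(pts[:, 1])  # synthetic delays (m)
x0 = np.zeros(2)                                       # estimation point

def cov(d, sill=1.0, corr_len=5.0):
    """Invented exponential covariance of delay vs. separation."""
    return sill * np.exp(-d / corr_len)

D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
c0 = cov(np.linalg.norm(pts - x0, axis=1))

# Ordinary kriging system: covariances plus a Lagrange multiplier row
# enforcing that the weights sum to one (unbiasedness).
n = len(pts)
A = np.block([[cov(D), np.ones((n, 1))],
              [np.ones((1, n)), np.zeros((1, 1))]])
w = np.linalg.solve(A, np.append(c0, 1.0))
kriged = w[:n] @ delay

# Planar fit through the same data, evaluated at the same point.
G = np.column_stack([np.ones(n), pts])
coef, *_ = np.linalg.lstsq(G, delay, rcond=None)
planar = coef @ np.array([1.0, *x0])
print("kriging: %.3f  planar fit: %.3f" % (kriged, planar))
```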
2012-01-01
Background: Presented is the method "Detection and Outline Error Estimates" (DOEE) for assessing rater agreement in the delineation of multiple sclerosis (MS) lesions. The DOEE method divides operator or rater assessment into two parts: 1) Detection Error (DE) -- rater agreement in detecting the same regions to mark, and 2) Outline Error (OE) -- agreement of the raters in outlining of the same lesion. Methods: DE, OE and Similarity Index (SI) values were calculated for two raters tested on a set of 17 fluid-attenuated inversion-recovery (FLAIR) images of patients with MS. DE, OE, and SI values were tested for dependence with the mean total area (MTA) of the raters' Regions of Interest (ROIs). Results: When correlated with MTA, neither DE (ρ = .056, p = .83) nor the ratio of OE to MTA (ρ = .23, p = .37), referred to as the Outline Error Rate (OER), exhibited significant correlation. In contrast, SI is found to be strongly correlated with MTA (ρ = .75, p
Practical Error Estimates for Reynolds' Lubrication Approximation and its Higher Order Corrections
Jon Wilkening
2010-06-09
Reynolds' lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds' equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio $\epsilon$ of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, $x$-periodic domain and derive rigorous, a priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function $h(x)$ describing the geometry, or depend on $h$ and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order $2k$, the error is $O(\epsilon^{2k+2})$ and $h$ enters into the error bound only through its first and third inverse moments $\int_0^1 h(x)^{-m}\,dx$, $m=1,3$ and via the max norms $\big\|\frac{1}{\ell!} h^{\ell-1} \partial_x^\ell h\big\|_\infty$, $1\le\ell\le 2k+2$. We validate our estimates by comparing with finite element solutions and present numerical evidence suggesting that even when $h$ is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
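For orientation, a commonly quoted steady, one-dimensional form of the zeroth-order problem is reproduced below (a generic textbook statement, not the paper's exact setup):

```latex
% Steady, incompressible Reynolds equation in one spatial dimension,
% the zeroth-order term of the Stokes expansion discussed above;
% U is the wall speed, \mu the viscosity, h(x) the gap profile.
\frac{d}{dx}\!\left( h(x)^3 \,\frac{dp}{dx} \right) = 6 \mu U \,\frac{dh}{dx},
\qquad p \ \text{periodic in } x .
```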
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for weighting data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed to apply this technique to gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting compared to nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.
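The core of the weight-adjustment algorithm can be sketched in a few lines: compare the chi-square of parameter differences between the complete and subset solutions with its expectation, and rescale that data set's weight accordingly. This is an interpretive sketch of the described procedure, not Lerch's implementation.

```python
import numpy as np

def calibration_factor(dx, sigma_dx):
    """Chi-square of parameter differences (complete minus subset solution)
    relative to its expectation. A value near 1 means the current weight and
    the predicted errors are consistent; k > 1 means the weight is too
    optimistic. dx: parameter differences; sigma_dx: their predicted sigmas."""
    chi2 = np.sum((dx / sigma_dx) ** 2)
    return chi2 / dx.size

# Iteration sketch: scale a data set's weight down when k > 1 and re-solve
# until k ~ 1 for every subset solution:
#   w_new = w_old / calibration_factor(dx, sigma_dx)
```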
Catastrophic Photo-z Errors and the Dark Energy Parameter Estimates with Cosmic Shear
L. Sun; Z. -H. Fan; C. Tao; J. -P. Kneib; S. Jouvel; A. Tilquin
2009-04-28
We study the impact of catastrophic errors occurring in the photometric redshifts of galaxies on cosmological parameter estimates with cosmic shear tomography. We consider a fiducial survey with a 9-filter set and perform photo-z measurement simulations. It is found that a fraction of about 1% of galaxies at z_spec ~ 0.4 are misidentified to be at z_phot ~ 3.5. We then employ both a chi^2 fitting method and an extension of the Fisher matrix formalism to evaluate the bias on the equation-of-state parameters of dark energy, w_0 and w_a, induced by those catastrophic outliers. By comparing the results from both methods, we verify that the estimation of w_0 and w_a from the fiducial 5-bin tomographic analyses can be significantly biased. To minimize the impact of this bias, two strategies can be followed: (A) the cosmic shear analysis is restricted to an intermediate photo-z range above 0.5 in which catastrophic redshift errors are expected to be insignificant; (B) a spectroscopic survey is conducted for galaxies with z_phot above about 3, with the required number of spectra N_spec depending on the fraction of catastrophic redshift errors (assuming a 9-filter photometric survey) and the survey area A. For A = 1000 deg^2, we find that N_spec > 320 and 860 are required in order to reduce the joint bias in (w_0, w_a) to be smaller than 2σ and 1σ, respectively. This spectroscopic survey (option B) would improve the Figure of Merit of option A by a factor of ~1.5, making such a survey strongly desirable.
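The "extension of the Fisher matrix formalism" for propagating a known spectrum shift into a parameter bias is commonly written in the following first-order form (a sketch; the paper's notation may differ):

```latex
% Generic first-order bias estimate: \Delta C_\ell is the change in the
% tomographic shear spectra induced by the catastrophic outliers,
% F the Fisher matrix, Cov the data covariance.
\delta\theta_i \;\simeq\; \sum_j \left(F^{-1}\right)_{ij} b_j,
\qquad
b_j \;=\; \sum_{\ell \ell'} \Delta C_\ell \,
\left(\mathrm{Cov}^{-1}\right)_{\ell \ell'}
\frac{\partial C_{\ell'}}{\partial \theta_j} .
```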
NASA Astrophysics Data System (ADS)
Shi, Lei; Wang, Z. J.
2015-08-01
Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Superconvergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
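The adjoint-based error estimate driving the adaptation is typically of the standard adjoint-weighted-residual form, sketched generically below (the paper's discrete operators are CPR-specific):

```latex
% Generic adjoint-weighted-residual estimate of the functional error:
% u_H is the coarse-space solution injected into the fine space h,
% R_h the fine-space residual operator, and \psi_h the discrete adjoint
% associated with the output J. Element-wise contributions to this sum
% serve as the adaptation indicator.
\delta J \;=\; J(u_h) - J(u_H) \;\approx\; -\,\psi_h^{\top}\, R_h\!\left(u_H^{h}\right) .
```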
Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains
NASA Technical Reports Server (NTRS)
Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang
2013-01-01
Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple collocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting an RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple collocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
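The triple collocation estimator itself is compact enough to show. The sketch below uses the classical covariance identity for three products with mutually uncorrelated errors and expresses the result as an fRMSE; it is a generic implementation, not the paper's processing chain.

```python
import numpy as np

def triple_collocation_rmse(x, y, z):
    """Classical triple collocation error estimate.

    x, y, z: collocated anomaly time series of the same variable from three
    products with mutually uncorrelated errors. Returns the estimated error
    standard deviation of x; permute the arguments for y and z."""
    c = np.cov(np.vstack([x, y, z]))                     # 3x3 sample covariance
    err_var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]    # TC identity
    return np.sqrt(max(err_var_x, 0.0))

def fractional_rmse(x, y, z):
    """fRMSE as used above: error relative to the signal's temporal
    standard deviation."""
    return triple_collocation_rmse(x, y, z) / np.std(x)
```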
Error Analysis for Estimation of Greenland Ice Sheet Accumulation Rates from InSAR Data
NASA Astrophysics Data System (ADS)
Chen, A. C.; Zebker, H. A.
2013-12-01
Forming a mass budget for the Greenland Ice Sheet requires accurate measurements of both accumulation and ablation. Currently, most mass budgets use accumulation rate data from sparse in-situ ice core data, sometimes in conjunction with results from relatively low-resolution climate models. Yet there have also been attempts to estimate accumulation rates from remote sensing data, including SAR, InSAR, and satellite radar scatterometry data. However, the sensitivities, error sources, and confidence intervals in these remote sensing methods have not been well-characterized. We develop an error analysis for estimates of Greenland Ice Sheet accumulation rates in the dry-snow zone using SAR brightness and InSAR coherence data. The estimates are generated by inverting a forward model based on firn structure and electromagnetic scattering. We can then examine the associated error bars and sensitivity. We also model how these change when spatial smoothness assumptions are introduced and a regularized inversion is used. In this study, we use SAR and InSAR data from the L-band ALOS-PALSAR instrument (23-centimeter carrier wavelength) as a test-bed and in-situ measurements published by Bales et al. [1] for comparison. Finally, we use simulations to examine the ways in which estimation accuracy varies between X-band, C-band and L-band experiments. [1] R. C. Bales et al., "Accumulation over the Greenland ice sheet from historical and recent records," Journal of Geophysical Research, vol. 106, pp. 33813-33825, 2001.
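The regularized inversion mentioned above can be illustrated with a generic Tikhonov sketch. The paper's forward model (firn structure to SAR brightness and InSAR coherence) is nonlinear; the linear operator G below is an assumed linearization, and the helper names are hypothetical.

```python
import numpy as np

def tikhonov_invert(G, d, L, alpha):
    """Smoothness-regularized linear(ized) inversion,
        min ||G m - d||^2 + alpha ||L m||^2,
    where L is a roughening operator (e.g. a first-difference matrix).
    Returns the estimate and a first-order map from data noise to parameter
    standard errors."""
    A = G.T @ G + alpha * (L.T @ L)
    G_dag = np.linalg.solve(A, G.T)       # regularized generalized inverse
    m = G_dag @ d

    def param_sigma(sigma_d):
        """Propagate per-datum noise sigmas into error bars on the
        accumulation-rate parameters."""
        cov = G_dag @ np.diag(np.asarray(sigma_d) ** 2) @ G_dag.T
        return np.sqrt(np.diag(cov))

    return m, param_sigma
```

Increasing alpha trades larger smoothing bias for smaller error bars, which is exactly the sensitivity trade-off the abstract describes.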
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys have been conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and of ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the design is also determined for the estimation of a regression coefficient.
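Classical cluster-sampling relations of the kind underlying such formulae are shown below as a sketch; Kalton's expressions for regression coefficients are more general, but the design effect and the cost-based optimum cluster take have this familiar form:

```latex
% Sketch of standard two-stage relations: b is the average number of
% elements per cluster, \rho the intraclass correlation, and c_1, c_2 the
% per-cluster and per-element costs in a simple linear cost function.
\mathrm{deff} = 1 + (b - 1)\rho,
\qquad
\mathrm{SE}_{\mathrm{cluster}} = \sqrt{\mathrm{deff}}\;\mathrm{SE}_{\mathrm{SRS}},
\qquad
b_{\mathrm{opt}} = \sqrt{\frac{c_1\,(1-\rho)}{c_2\,\rho}} .
```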
Point and standard error estimation for quantiles of mixed flood distributions
NASA Astrophysics Data System (ADS)
Grego, John M.; Yates, Philip A.
2010-09-01
This paper explores the use of finite mixture models in the study of flood frequency distributions with multiple components. It focuses on further methodological developments for finite mixture models, including an accelerated version of the EM algorithm to derive both parameter estimates and the observed information matrix, estimation of the .99 quantile of the mixture distribution, and standard error estimation of that quantile. In case studies, the lognormal finite mixture model is compared to other models, specifically the widely used log-Pearson Type III, Gumbel, and GEV distributions. A multidimensional gradient plot and information criteria are discussed as diagnostics for the number of mixing components.
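A minimal version of the estimation pipeline is sketched below: plain EM for a two-component lognormal mixture (fit in log space, where it reduces to a Gaussian mixture), followed by numerical inversion of the mixture CDF for the .99 quantile. The accelerated EM and standard-error machinery of the paper are not reproduced.

```python
import numpy as np
from scipy import stats, optimize

def fit_lognormal_mixture(x, k=2, iters=200):
    """Plain EM for a k-component lognormal mixture of the flood peaks x."""
    y = np.log(x)
    w = np.full(k, 1.0 / k)
    mu = np.quantile(y, np.linspace(0.25, 0.75, k))
    sd = np.full(k, y.std())
    for _ in range(iters):
        # E-step: posterior responsibilities of each component.
        dens = np.array([wj * stats.norm.pdf(y, m, s)
                         for wj, m, s in zip(w, mu, sd)])
        r = dens / (dens.sum(axis=0) + 1e-300)
        # M-step: responsibility-weighted moments.
        nk = r.sum(axis=1)
        w, mu = nk / len(y), (r @ y) / nk
        sd = np.sqrt(np.array([(ri * (y - mi) ** 2).sum()
                               for ri, mi in zip(r, mu)]) / nk)
    return w, mu, sd

def mixture_quantile(p, w, mu, sd):
    """Invert the mixture CDF numerically for the p-th quantile (e.g. p=0.99)."""
    cdf = lambda q: sum(wj * stats.norm.cdf(np.log(q), m, s)
                        for wj, m, s in zip(w, mu, sd)) - p
    return optimize.brentq(cdf, 1e-6, np.exp(mu.max() + 10 * sd.max()))
```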
Jenkins, Helen E.; Tolman, Arielle W.; Yuen, Courtney M.; Parr, Jonathan B.; Keshavjee, Salmaan; Pérez-Vélez, Carlos M.; Pagano, Marcello; Becerra, Mercedes C.; Cohen, Ted
2014-01-01
Background: Multidrug-resistant tuberculosis (MDR-TB) threatens to reverse recent reductions in global tuberculosis (TB) incidence. Although children under 15 years of age constitute >25% of the worldwide population, the global incidence of MDR-TB disease in children has never been quantified. Methods: Our approach for estimating the regional and global annual incidence of MDR-TB in children required development of two models: one to estimate the setting-specific risk of MDR-TB among child TB cases, and a second to estimate the setting-specific incidence of TB disease in children. The model for MDR-TB risk among children with TB required a systematic literature review. We multiplied the setting-specific estimates of MDR-TB risk and TB incidence to estimate the regional and global incidence of MDR-TB disease in children in 2010. Findings: We identified 3,403 papers, of which 97 studies met the inclusion criteria for the systematic review of MDR-TB risk. Thirty-one studies reported the risk of MDR-TB among both children and treatment-naïve adults with TB and were used to evaluate the linear association between MDR-TB risk in these two patient groups. We found that the setting-specific risk of MDR-TB was nearly identical in children and treatment-naïve adults with TB, consistent with the assertion that MDR-TB in both groups reflects the local risk of transmitted MDR-TB. Applying these calculated risks, we estimated that around 1,000,000 (95% Confidence Interval: 938,000 - 1,055,000) children developed TB disease in 2010, among whom 32,000 (95% Confidence Interval: 26,000 - 39,000) had MDR-TB. Interpretation: Our estimates highlight a massive detection gap for children with TB and MDR-TB disease. Future estimates can be refined as more and better TB data and new diagnostic tools become available. PMID: 24671080
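The risk-times-incidence combination lends itself to a simple Monte Carlo sketch for propagating uncertainty; the distributions below are illustrative stand-ins chosen to echo the point estimates in the text, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(0)

def mdr_tb_cases(n_draws=100_000):
    """Combine the two models by simulation: draw a childhood TB incidence
    and an MDR risk per iteration, multiply, and read off percentile
    intervals. Distributions are hypothetical placeholders."""
    tb_cases = rng.normal(1_000_000, 30_000, n_draws)         # children with TB, 2010
    mdr_risk = rng.normal(0.032, 0.003, n_draws).clip(0, 1)   # MDR risk among child TB
    mdr_cases = tb_cases * mdr_risk                           # risk x incidence
    return np.percentile(mdr_cases, [2.5, 50, 97.5])
```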
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (principal investigators)
1978-01-01
The author has identified the following significant results. The probability of correct classification of the various populations in the data was defined as the primary performance index. Because the multispectral data are also multiclass in nature, a Bayes error estimation procedure dependent on a set of class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N is the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant, multiple-port system in which the N spectral bands comprise the input processes. The scanner characteristic function, the relationship governing how the system transforms the input spatial (and hence spectral) correlation matrices, was developed.
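Since the Bayes error is an N-dimensional integral with no closed form in general, a common way to evaluate it from class statistics alone is Monte Carlo, as sketched below for Gaussian classes (an illustration, not the authors' estimator):

```python
import numpy as np
from scipy import stats

def bayes_error_mc(means, covs, priors, n=100_000, seed=0):
    """Monte Carlo estimate of the multiclass Bayes error from class
    statistics (means, covariances, priors), assuming Gaussian classes:
    draw from the mixture, classify by maximum posterior, count errors."""
    rng = np.random.default_rng(seed)
    k = len(priors)
    counts = rng.multinomial(n, priors)       # draws per class
    errors = 0
    for c in range(k):
        if counts[c] == 0:
            continue
        x = rng.multivariate_normal(means[c], covs[c], size=counts[c])
        # Log-posterior-proportional score of every class at the drawn points.
        scores = np.column_stack([
            np.log(priors[j]) + stats.multivariate_normal.logpdf(x, means[j], covs[j])
            for j in range(k)])
        errors += np.sum(scores.argmax(axis=1) != c)
    return errors / n
```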
Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.
2015-01-01
We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than that of the normal group: by 4.4 mm deg⁻¹ at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID: 25624464
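Two standard ingredients of such derivations are the inverse-linear magnification rule and the conformal log map between retinal and cortical coordinates, reproduced below as a sketch (generic forms; the parameters are not the paper's fits):

```latex
% Inverse-linear cortical magnification with eccentricity E, and the
% conformal log map from retinal position z (complex, deg) to cortical
% position w (mm); M_0, E_2, k, a are model parameters.
M(E) = \frac{M_0}{1 + E/E_2}\ \ \mathrm{mm\,deg^{-1}},
\qquad
w = k \,\log(z + a) .
```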
NASA Astrophysics Data System (ADS)
Mudge, Jason; Virgen, Miguel
2011-10-01
In 2009, we presented a compact division-of-amplitude imaging polarimeter design that captures the complete set of Stokes parameters simultaneously. The advantages of this design are 1) reduced sensitivity to noise based on the optimized selection of polarization elements, 2) minimization of differential aberrations in the four polarimetric channels, and 3) reduction in image registration errors. A prototype polarimeter was integrated for near-infrared wavelengths, field tested, and shown to calculate and display estimated scene polarization in real time. In this paper, several data sets are presented showing the instrument's unique remote detection capabilities from various platforms and scene types. Acquisitions include a ground view, an aerial view of an urban area, and K-model rocket plume imaging at 2.5 kilometers. The acquisitions were processed using several new polarimetric imaging techniques that detail this unique remote detection capability. The polarimeter design was driven by the requirement to increase the accuracy of Stokes estimation; therefore, this paper concludes with the precision, i.e., an estimate of the errors, associated with this particular instrument.
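The Stokes estimation step of a four-channel division-of-amplitude polarimeter reduces to inverting a 4x4 instrument matrix. The sketch below uses an idealized analyzer set and textbook error propagation; a real instrument matrix comes from calibration, and the values here are illustrative.

```python
import numpy as np

# Idealized 4x4 polarimetric measurement matrix: each row maps the Stokes
# vector to the intensity seen by one analyzer channel (illustrative, not
# the calibrated matrix of a real instrument).
W = 0.5 * np.array([[1,  1,  0,  0],      # 0 deg linear analyzer
                    [1, -1,  0,  0],      # 90 deg linear analyzer
                    [1,  0,  1,  0],      # 45 deg linear analyzer
                    [1,  0,  0,  1]])     # circular analyzer

def estimate_stokes(intensities, W=W):
    """Recover the Stokes vector from the four simultaneous channel
    intensities, I = W s. With a well-conditioned W, the pseudoinverse also
    limits noise amplification, which is the optimization alluded to above."""
    return np.linalg.pinv(W) @ intensities        # (S0, S1, S2, S3)

def stokes_covariance(sigma_I, W=W):
    """Propagate uncorrelated per-channel noise sigmas into the covariance
    of the estimated Stokes parameters."""
    P = np.linalg.pinv(W)
    return P @ np.diag(np.asarray(sigma_I) ** 2) @ P.T
```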
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Yoshida, Naoki
2014-05-01
The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ~1400 deg^2, will constrain the dark energy equation-of-state parameter with an error of Δw_0 ~ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if a well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density $\Omega_{m0} = 0.256^{+0.054}_{-0.046}$.
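For reference, the three 2D Minkowski functionals of a smoothed convergence map can be estimated on a grid with delta-function estimators of the standard (Schmalzing and Gorski-type) form, as sketched below; this is an illustration, not the paper's measurement pipeline.

```python
import numpy as np

def minkowski_functionals(field, nus, dnu=0.1):
    """Grid estimators of V0 (area), V1 (boundary length), V2 (Euler
    characteristic) of a smoothed 2D map at thresholds nus, given in units
    of the map standard deviation. The delta function at the level set is
    approximated by a bin of width dnu."""
    x = (field - field.mean()) / field.std()
    gy, gx = np.gradient(x)            # np.gradient: derivative along axis 0, then 1
    gyy, _ = np.gradient(gy)
    gxy, gxx = np.gradient(gx)
    grad2 = gx**2 + gy**2
    curv = (2*gx*gy*gxy - gx**2*gyy - gy**2*gxx) / (grad2 + 1e-12)
    v0, v1, v2 = [], [], []
    for nu in nus:
        delta = ((x >= nu) & (x < nu + dnu)) / dnu          # binned delta(x - nu)
        v0.append((x > nu).mean())                          # excursion-set area
        v1.append((delta * np.sqrt(grad2)).mean() / 4.0)    # boundary length
        v2.append((delta * curv).mean() / (2 * np.pi))      # Euler characteristic
    return np.array(v0), np.array(v1), np.array(v2)
```

Running these estimators on mock catalogs with and without the injected systematics is precisely the kind of comparison the Fisher analysis above formalizes.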