Note: This page contains sample records for the topic estimated systematic error from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: August 15, 2014.
1

Systematic errors in estimation of insolation by empirical formulas  

Microsoft Academic Search

Systematic errors in the estimation of surface insolation, Q, by two popular empirical formulas are investigated statistically by using coincident measurements of the global solar radiation and the total cloud cover at JMA observatories over Japan. The results show that Reed's (1977) widely accepted formula markedly overestimates Q under overcast conditions. The overestimation is particularly evident in the summer months. The formula also

Shoichi Kizu

1998-01-01

2

Correcting Instrumental Variables Estimators for Systematic Measurement Error  

PubMed Central

Instrumental variables (IV) estimators are well established to correct for measurement error on exposure in a broad range of fields. In a distinct prominent stream of research IV’s are becoming increasingly popular for estimating causal effects of exposure on outcome since they allow for unmeasured confounders which are hard to avoid. Because many causal questions emerge from data which suffer severe measurement error problems, we combine both IV approaches in this article to correct IV-based causal effect estimators in linear (structural mean) models for possibly systematic measurement error on the exposure. The estimators rely on the presence of a baseline measurement which is associated with the observed exposure and known not to modify the target effect. Simulation studies and the analysis of a small blood pressure reduction trial (n = 105) with treatment noncompliance confirm the adequate performance of our estimators in finite samples. Our results also demonstrate that incorporating limited prior knowledge about a weakly identified parameter (such as the error mean) in a frequentist analysis can yield substantial improvements.

Vansteelandt, Stijn; Babanezhad, Manoochehr; Goetghebeur, Els

2008-01-01
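
The IV idea summarized above can be sketched in a toy setting. The simulation below (plain Python, invented numbers) illustrates only the classical attenuation case, not the paper's structural-mean-model estimators: a naive regression slope is biased toward zero by measurement error on the exposure, while the instrumental-variables ratio recovers the true effect.

```python
import random
from statistics import mean

random.seed(0)
n = 100_000
beta = 2.0                                          # true causal effect (assumed)

Z = [random.gauss(0, 1) for _ in range(n)]          # instrument
X = [z + random.gauss(0, 1) for z in Z]             # true (unobserved) exposure
W = [0.5 + x + random.gauss(0, 1) for x in X]       # observed exposure: offset + noise
Y = [beta * x + random.gauss(0, 1) for x in X]      # outcome

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((u - ma) * (v - mb) for u, v in zip(a, b)) / len(a)

beta_naive = cov(W, Y) / cov(W, W)   # attenuated by the measurement-error noise
beta_iv    = cov(Z, Y) / cov(Z, W)   # instrument is uncorrelated with the error
```

With these variances the naive slope converges to 4/3 while the IV ratio converges to the true value of 2; the constant offset in W biases neither slope, which is why the paper's systematic-error corrections target more general error structures.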

3

Estimates of the systematic errors of the optoelectronic measuring equipment of navigation systems  

Microsoft Academic Search

A method is proposed for monitoring and estimating the systematic errors of measurements conducted by the optoelectronic measuring package of astronavigation systems. The principle of the method, which takes advantage of the high stability of the relative position of navigation stars, is discussed. By using a priori information about the position of navigation stars, methods for monitoring and estimating systematic

A. D. Goliakov; V. V. Serogodskii

1990-01-01

4

Estimates of the systematic errors of the optoelectronic measuring equipment of navigation systems  

NASA Astrophysics Data System (ADS)

A method is proposed for monitoring and estimating the systematic errors of measurements conducted by the optoelectronic measuring package of astronavigation systems. The principle of the method, which takes advantage of the high stability of the relative position of navigation stars, is discussed. By using a priori information about the position of navigation stars, methods for monitoring and estimating systematic errors can be developed also for other types of systems relying on the measurement of the angular coordinates of two or more stars.

Goliakov, A. D.; Serogodskii, V. V.

1990-08-01

5

A Novel Systematic Error Compensation Algorithm Based on Least Squares Support Vector Regression for Star Sensor Image Centroid Estimation  

PubMed Central

The star centroid estimation is the most important operation, which directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and truncation error which resulted from the discretization approximation and sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on the least squares support vector regression (LSSVR) with Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10⁻⁵ pixels.

Yang, Jun; Liang, Bin; Zhang, Tao; Song, Jingyan

2011-01-01
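
The window-truncation effect described in this abstract is easy to reproduce. The sketch below (plain Python; the 0.7-pixel Gaussian width and the 1-D simplification are illustrative assumptions, and no LSSVR compensation is attempted) computes the center-of-mass centroid of a pixel-integrated Gaussian over a 5-pixel window and evaluates the systematic error as a function of the true sub-pixel position.

```python
import math

def pixel_flux(center, k, sigma=0.7):
    # integrate a 1-D Gaussian PSF over pixel k (unit-width pixels) via erf
    a = (k - 0.5 - center) / (math.sqrt(2) * sigma)
    b = (k + 0.5 - center) / (math.sqrt(2) * sigma)
    return 0.5 * (math.erf(b) - math.erf(a))

def centroid_5px(center):
    # center-of-mass centroid restricted to a 5-pixel sampling window
    k0 = round(center)
    ks = range(k0 - 2, k0 + 3)
    flux = [pixel_flux(center, k) for k in ks]
    return sum(k * f for k, f in zip(ks, flux)) / sum(flux)

# systematic error versus true sub-pixel position (an antisymmetric S-curve)
errors = [centroid_5px(10 + s) - (10 + s) for s in [i / 10 for i in range(-5, 6)]]
```

The error vanishes when the star is centered on a pixel and grows as the true centroid moves off-center, which is the truncation behaviour the paper's compensation algorithm is built to remove.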

6

A novel systematic error compensation algorithm based on least squares support vector regression for star sensor image centroid estimation.  

PubMed

The star centroid estimation is the most important operation, which directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and truncation error which resulted from the discretization approximation and sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on the least squares support vector regression (LSSVR) with Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10⁻⁵ pixels. PMID:22164021

Yang, Jun; Liang, Bin; Zhang, Tao; Song, Jingyan

2011-01-01

7

Systematic Measurement Error in the Estimation of Discretionary Accruals: An Evaluation of Alternative Modelling Procedures  

Microsoft Academic Search

This paper evaluates the extent of predictable measurement error induced by five alternative approaches to the estimation of discretionary accruals. The source and magnitude of the error is assessed by reference to the strength of the association between the discretionary accrual estimate and proxies for the non-discretionary components of total accruals. Results indicate that discretionary accruals generated by the Healy

Steven Young

1999-01-01

8

Systematic estimation of forecast and observation error covariances in four-dimensional data assimilation  

NASA Technical Reports Server (NTRS)

A two-part algorithm is presented for reliably computing weather forecast model and observational error covariances during data assimilation. Data errors arise from instrumental inaccuracies and sub-grid scale variability, whereas forecast errors occur because of modeling errors and the propagation of previous analysis errors. A Kalman filter is defined as the primary algorithm for estimating the forecast and analysis error covariance matrices. A second algorithm is described for quantifying the noise covariance matrices of any degree to obtain accurate values for the observational error covariances. Numerical results are provided from a linearized one-dimensional shallow-water model. The results cover observational noise covariances, initial instrumental errors and erroneous model values.

Dee, D. P.; Cohn, S. E.; Ghil, M.

1985-01-01
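
The covariance propagation at the heart of the Kalman-filter algorithm can be written in one scalar line per step. The sketch below (plain Python; the dynamics, model-noise, and observation-noise values are invented, not taken from the paper) iterates the forecast/analysis error-variance recursion to its steady state.

```python
# scalar Kalman filter error-variance recursion for x_{t+1} = a * x_t + w_t
a, q, r = 0.9, 0.04, 0.25   # dynamics, model-noise and observation-noise variances (assumed)
p_analysis = 1.0            # initial analysis error variance

for _ in range(50):
    p_forecast = a * a * p_analysis + q   # forecast error grows with model noise
    k = p_forecast / (p_forecast + r)     # Kalman gain
    p_analysis = (1 - k) * p_forecast     # analysis error shrinks after assimilation
```

Each cycle inflates the error variance through the model-noise term q and deflates it through the observation update, so p_analysis settles at a fixed point below p_forecast; estimating q and r themselves is the harder problem the paper's second algorithm addresses.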

9

Effects of Systematic and Random Errors on the Spatial Scaling Properties in Radar-Estimated Rainfall  

Microsoft Academic Search

Spatial scaling properties of precipitation fields are often investigated based on radar data. However, little is known about the effects of the considerable uncertainties present in radar-rainfall products on the estimated multifractal parameters. The basic systematic factors that can affect the results of such analyses include the selection of a Z-R relationship, the rain/no-rain reflectivity threshold, and the distance from

Gabriele Villarini; Grzegorz J. Ciach; Witold F. Krajewski; Keith M. Nordstrom; Vijay K. Gupta

10

Estimation of Aircraft Dynamic States and Instrument Systematic Errors from Flight Test Measurements Using the Carlson Square Root Formulation of the Kalman Filter.  

National Technical Information Service (NTIS)

The development of a procedure for estimating aircraft dynamic states and instrument systematic errors from flight test measurements is described. The method has particular application in non-steady performance estimation for reconstructing aircraft fligh...

C. A. Martin

1980-01-01

11

A statistical model for systematic errors  

Microsoft Academic Search

On the basis of a statistical model for systematic errors a procedure has been developed for estimating the most probable value of a physical quantity when the available information comes from different sources, i.e., from measurements published by different authors. This model differs from those commonly used in allowing for the presence of systematic errors in individual measurements. However, it

Z. T. Bódy; K. M. Dede

1970-01-01

12

Swing-arm optical coordinate measuring machine: modal estimation of systematic errors from dual probe shear measurements  

NASA Astrophysics Data System (ADS)

The swing-arm optical coordinate measuring machine (SOC), a profilometer with a distance-measuring interferometric probe for in situ measurement of the topography of aspheric surfaces, has been used for measuring highly aspheric mirrors with a performance rivaling full aperture interferometric tests. Recently, we implemented a dual probe, self-calibration mode for the SOC. Data from the dual probes can be used to calibrate the swing-arm air bearing errors since both probes see the same bearing errors while measuring different portions of the test surface. Bearing errors are reconstructed from modal estimation of the sheared signal.

Su, Peng; Parks, Robert E.; Wang, Yuhao; Oh, Chang Jin; Burge, James H.

2012-04-01

13

Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model.  

PubMed

The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated error propagation of temperature uncertainties in parasite habitat suitability models by comparing outcomes of published models. Error estimates reached 36% of the respective annual measurements, depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles. PMID:24258878

Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin

2013-11-01

14

Estimating Bias Error Distributions  

NASA Technical Reports Server (NTRS)

This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution arc developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

Liu, Tian-Shu; Finley, Tom D.

2001-01-01
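
In the simplest case the abstract mentions (two standard values and a bias well described by scaling and translation), the correction reduces to an affine recalibration. A minimal sketch with hypothetical readings; the paper's full method, which recovers an entire polynomial or Fourier bias distribution, is not reproduced here:

```python
def affine_correction(standards, readings):
    # two known standard values and the device's raw readings at them
    (s1, s2), (r1, r2) = standards, readings
    scale = (s2 - s1) / (r2 - r1)          # scaling part of the bias correction
    offset = s1 - scale * r1               # translation part
    return lambda r: scale * r + offset

# hypothetical device: reads 2.0 at the 0.0 standard and 103.0 at the 100.0 standard
correct = affine_correction((0.0, 100.0), (2.0, 103.0))
```

A raw reading halfway between the two calibration readings then maps to the midpoint of the standards, i.e. correct(52.5) returns 50.0.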

15

Bolstered error estimation  

Microsoft Academic Search

We propose a general method for error estimation that displays low variance and generally low bias as well. This method is based on "bolstering" the original empirical distribution of the data. It has a direct geometric interpretation and can be easily applied to any classification rule and any number of classes. This method can be used to improve the performance of any error-counting estimation method, such as resubstitution and all cross-validation estimators, particularly in small-sample settings. ...

Ulisses Braga-neto; Edward R. Dougherty

2004-01-01
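
The bolstering idea can be illustrated with a Monte Carlo version of bolstered resubstitution on a toy 1-D classifier. The data, kernel width, and threshold rule below are invented for illustration; the paper computes the bolstered error analytically for common rules rather than by sampling.

```python
import random
random.seed(1)

# tiny 1-D two-class training sample: (feature, label)
data = [(-1.2, 0), (-0.4, 0), (-0.1, 0), (0.2, 1), (0.7, 1), (1.5, 1)]

def classify(x):                 # fixed threshold classifier at x = 0 (assumed rule)
    return int(x > 0.0)

# plain resubstitution: fraction of training points misclassified
resub = sum(classify(x) != y for x, y in data) / len(data)

# bolstered resubstitution: spread each point with a Gaussian "bolstering kernel"
sigma, mc = 0.5, 20_000
bolstered = sum(
    sum(classify(x + random.gauss(0, sigma)) != y for _ in range(mc)) / mc
    for x, y in data
) / len(data)
```

Plain resubstitution counts zero errors on this linearly separable sample, while the bolstered estimate stays positive because probability mass from points near the decision boundary spills across it; this is the variance/bias trade-off the abstract describes.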

16

Estimating GPS Positional Error  

NSDL National Science Digital Library

After instructing students on basic receiver operation, each student will make many (10-20) position estimates of 3 benchmarks over a week. The different benchmarks will have different views of the skies or vegetation cover. Each student will download their data into a spreadsheet and calculate horizontal and vertical errors, which are collated into a class spreadsheet. The positions are sorted by error and plotted in a cumulative frequency plot. The students are encouraged to discuss the distribution, sources of error, and estimate confidence intervals. This exercise gives the students a gut feeling for confidence intervals and the accuracy of data. Students are asked to compare results from different types of data and benchmarks with different views of the sky. Tags: Uses online and/or real-time data; Has minimal/no quantitative component; Addresses student fear of quantitative aspect and/or inadequate quantitative skills; Addresses student misconceptions.

Witte, Bill

17

Simulation of Systematic Errors in Phase-Referenced VLBI Astrometry  

NASA Astrophysics Data System (ADS)

The astrometric accuracy in the relative coordinates of two angularly-close radio sources observed with the phase-referencing VLBI technique is limited by systematic errors. These include geometric errors and atmospheric errors. Based on simulation with the SPRINT software, we evaluate the impact of these errors in the estimated relative source coordinates for standard VLBA observations. Such evaluations are useful to estimate the actual accuracy of phase-referenced VLBI astrometry.

Pradel, N.; Charlot, P.; Lestrade, J.-F.

2005-12-01

18

A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers  

SciTech Connect

The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R. [William B. Hanson Center for Space Sciences, University of Texas at Dallas, 800 W. Campbell Rd. WT15, Richardson, Texas 75080 (United States)

2009-05-15

19

Systematic Errors in Black Hole Mass Measurements  

NASA Astrophysics Data System (ADS)

Compilations of stellar- and gas-dynamical measurements of supermassive black holes are often assembled without quantifying systematic errors from various assumptions in the dynamical modeling processes. Using a simple Monte-Carlo approach, I will discuss the level to which different systematic effects could bias scaling relations between black holes and their host galaxies. Given that systematic errors will not be eradicated in the near future, how wrong can we afford to be?

McConnell, Nicholas J.

2014-01-01

20

Systematic and statistical errors in a Bayesian approach to the estimation of the neutron-star equation of state using advanced gravitational wave detectors  

NASA Astrophysics Data System (ADS)

Advanced ground-based gravitational-wave detectors are capable of measuring tidal influences in binary neutron-star systems. In this work, we report on the statistical uncertainties in measuring tidal deformability with a full Bayesian parameter estimation implementation. We show how simultaneous measurements of chirp mass and tidal deformability can be used to constrain the neutron-star equation of state. We also study the effects of waveform modeling bias and individual instances of detector noise on these measurements. We notably find that systematic error between post-Newtonian waveform families can significantly bias the estimation of tidal parameters, thus motivating the continued development of waveform models that are more reliable at high frequencies.

Wade, Leslie; Creighton, Jolien D. E.; Ochsner, Evan; Lackey, Benjamin D.; Farr, Benjamin F.; Littenberg, Tyson B.; Raymond, Vivien

2014-05-01

21

Error correction in adders using systematic subcodes.  

NASA Technical Reports Server (NTRS)

A generalized theory is presented for the construction of a systematic subcode for a given AN code in such a way that the error control properties of the AN code are preserved in this new code. The 'systematic weight' and 'systematic distance' functions in this new code depend not only on its number representation system but also on its addition structure. Finally, to illustrate this theory, a simple error-correcting adder organization using a systematic subcode of the 29N code is sketched in some detail.

Rao, T. R. N.

1972-01-01

22

Error estimates of theoretical models: a guide  

NASA Astrophysics Data System (ADS)

This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical-error estimates, strategies to assess systematic errors, and show how to uncover inter-dependences by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.

Dobaczewski, J.; Nazarewicz, W.; Reinhard, P.-G.

2014-07-01
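
The correlation analysis this guide recommends can be illustrated on the simplest possible fit. For a straight-line least-squares fit with uniform data variance σ², the parameter covariance matrix is σ²(XᵀX)⁻¹, and the slope and intercept come out strongly anti-correlated when the abscissae are all positive. The numbers below are illustrative, not from the guide.

```python
# correlation between the two fitted parameters (slope m, intercept b) of y = m*x + b
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
sigma2 = 0.1 ** 2                    # assumed (uniform) data variance

n, sx, sxx = len(xs), sum(xs), sum(x * x for x in xs)
det = n * sxx - sx * sx              # determinant of X^T X

# elements of sigma2 * (X^T X)^{-1} for the (m, b) parameter pair
var_m  = sigma2 * n / det
var_b  = sigma2 * sxx / det
cov_mb = -sigma2 * sx / det

corr = cov_mb / (var_m * var_b) ** 0.5   # inter-dependence of the two parameters
```

Here corr = -Σx / √(n·Σx²) ≈ -0.90, so quoting independent error bars on slope and intercept would badly misrepresent the joint uncertainty; centering the abscissae removes the correlation.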

23

Measuring Systematic Error with Curve Fits  

ERIC Educational Resources Information Center

Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

Rupright, Mark E.

2011-01-01

24

A study on systematic errors concerning rotor position estimation of PMSM based on back EMF voltage observation  

Microsoft Academic Search

Many published methods of rotor position estimation of permanent magnet synchronous machines (PMSM) for medium to high speed are based on the estimation of the back electromotive force (EMF) voltage. From a practical point of view and the demand for minimum complexity, a simple idealized motor model is often used. Due to this simplification and to a change of

P. Hutterer; H. Grabner; S. Silber; W. Amrhein; W. Schaefer

2009-01-01

25

Antenna pointing systematic error model derivations  

NASA Technical Reports Server (NTRS)

The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.

Guiar, C. N.; Lansing, F. L.; Riggs, R.

1987-01-01

26

The behavior of magnitude dependent systematic errors  

NASA Technical Reports Server (NTRS)

The objective grating technique is discussed. From the investigation of actual material, it is established that the effect of coma varies significantly from plate to plate, also that a model linear in diameter and/or coordinate may be inadequate for its removal, and even introduce systematic errors. From this standpoint, the requirements of reference star systems are discussed.

Eichhorn, H.

1971-01-01

27

Antenna Pointing Systematic Error Model Derivations.  

National Technical Information Service (NTIS)

The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimatio...

C. N. Guiar; F. L. Lansing; R. Riggs

1987-01-01

28

Mars gravitational field estimation error  

NASA Technical Reports Server (NTRS)

The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.

Compton, H. R.; Daniels, E. F.

1972-01-01

29

Exploring Systematic Error With Digital Video  

NASA Astrophysics Data System (ADS)

Digital video acquisition and analysis has become user-friendly and affordable with the advent of webcams and software such as Vernier’s LoggerPro. The analysis of the ballistic trajectory of a ball has been a fixture in the physics lab curriculum at Swarthmore College for many years. When we updated the experiment to use digital video rather than stroboscopic photography, the data acquisition and analysis became almost trivial. We realized this gave us the opportunity to use an experiment with very precise data and a well-defined expected result (the acceleration due to gravity, g) to teach about systematic error and ways to correct for it. In this paper we describe modifications to the lab procedure that demonstrate systematic errors due to calibration, perspective effects, and air drag.

Klassen, M. A. H.; Bloom, P. C.

2006-12-01

30

Numerical Error Estimation with UQ  

NASA Astrophysics Data System (ADS)

Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. 
We will show that we can choose a sensible parameter by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process. This is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted

Ackmann, Jan; Korn, Peter; Marotzke, Jochem

2014-05-01

31

Reducing Systematic Error in Weak Lensing Cluster Surveys  

NASA Astrophysics Data System (ADS)

Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ~3 deg². Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg², where we expect ~2000 peaks based on our Subaru fields. Based in part on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan.

Utsumi, Yousuke; Miyazaki, Satoshi; Geller, Margaret J.; Dell'Antonio, Ian P.; Oguri, Masamune; Kurtz, Michael J.; Hamana, Takashi; Fabricant, Daniel G.

2014-05-01

32

Estimating climate model systematic errors in a climate change impact study of the Okavango River basin, southwestern Africa using a mesoscale model  

NASA Astrophysics Data System (ADS)

Simulating the impact of future climate variability and change on hydrological systems requires estimates of climate at high spatial resolution compatible with hydrological models. Here we present initial results of a project to simulate future climate over the Okavango River basin and delta in Southwestern Africa. Given the significance of the delta to biodiversity and as a resource to the local population, there is considerable concern regarding the sensitivity of the system to future climate change. An important component of climate variability/change impact studies is an assessment of errors in the modeling suite. Here, we attempt to quantify errors and uncertainties involved in regional climate modelling that will impact on hydrological simulations. The study determines the ability of the MM5 Regional Climate Model to simulate the present day regional climate at the high resolution required by the hydrological models and the effectiveness of the RCM in downscaling GCM outputs to study regional climate change and impacts.

Raghavan, S. V.; Todd, M.

2007-12-01

33

More on Systematic Error in a Boyle's Law Experiment  

NASA Astrophysics Data System (ADS)

A recent article1 in The Physics Teacher describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.2-7

McCall, Richard P.

2012-01-01
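
A common systematic error in this experiment is an unaccounted "dead volume" ΔV (e.g. in the tubing), so the data follow P = C/(V + ΔV) rather than P = C/V. The sketch below (synthetic data; the referenced article's actual analysis method is not reproduced here) estimates ΔV by noting that if the offset guess is right, P·(V + ΔV) is the same constant for every data point, so we pick the offset that minimizes its spread.

```python
# synthetic Boyle's-law data with a hidden volume offset (hypothetical units)
C, true_dv = 2000.0, 2.0
volumes = [10.0, 15.0, 20.0, 30.0, 40.0]
pressures = [C / (v + true_dv) for v in volumes]

def spread(dv):
    # variance of P*(V + dv): zero only at the correct offset
    prods = [p * (v + dv) for p, v in zip(pressures, volumes)]
    m = sum(prods) / len(prods)
    return sum((x - m) ** 2 for x in prods)

# grid search over candidate offsets from 0.00 to 5.00
best_dv = min((i / 100 for i in range(0, 501)), key=spread)
```

In a real lab the same search over noisy data yields a least-squares estimate of the dead volume, and refitting with the corrected volumes removes the bias from the Boyle's-law plot.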

34

More on Systematic Error in a Boyle's Law Experiment  

ERIC Educational Resources Information Center

A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

McCall, Richard P.

2012-01-01

35

Analysis of Systematic Errors in the Computation of Exposure Distributions from Brachytherapy Sources  

Microsoft Academic Search

Systematic errors in the basic physical constants and algorithms used in calculating the exposure from brachytherapy sources have been investigated. The specific gamma ray constant for radium filtered by 0.5 mm of platinum, 10% iridium, is the fundamental constant in this field. It has been measured a number of times over the years, with estimated bounds for the systematic error

Martin Rozenfeld

1980-01-01

36

A posteriori error estimator and error control for contact problems  

NASA Astrophysics Data System (ADS)

In this paper, we consider two error estimators for one-body contact problems. The first error estimator is defined in terms of H(div)-conforming stress approximations and equilibrated fluxes while the second is a standard edge-based residual error estimator without any modification with respect to the contact. We show reliability and efficiency for both estimators. Moreover, the error is bounded by the first estimator with a constant one plus a higher order data oscillation term plus a term arising from the contact that is shown numerically to be of higher order. The second estimator is used in a control-based AFEM refinement strategy, and the decay of the error in the energy is shown. Several numerical tests demonstrate the performance of both estimators.

Weiss, Alexander; Wohlmuth, Barbara I.

2009-09-01

37

Control by model error estimation  

NASA Technical Reports Server (NTRS)

Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

Likins, P. W.; Skelton, R. E.

1976-01-01

38

Systematic pointing errors of a 22-m radio telescope  

NASA Astrophysics Data System (ADS)

A means of analyzing and calculating the systematic pointing errors of the Crimean Astrophysical Observatory RT-22 radio telescope using discrete radio source data is proposed. It is shown that the derived analytical expressions are adequate for error approximation. The standard deviation after allowance for systematic errors is about 10 arcsec with respect to both coordinates; the pointing deviations are distributed randomly. The telescope's vertical axis inclination is measured, and the coordinates of its astronomical position are determined with greater precision. Factors leading to systematic pointing error instabilities are established.

Nesterov, N. S.

39

Efficient error estimating coding: feasibility and applications  

Microsoft Academic Search

Motivated by recent emerging systems that can leverage partially correct packets in wireless networks, this paper investigates the novel concept of error estimating codes (EEC). Without correcting the errors in the packet, EEC enables the receiver of the packet to estimate the packet's bit error rate, which is perhaps the most important meta-information of a partially correct packet. Our EEC
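The core idea can be sketched as follows (a simplified illustration, not the coding construction of the paper; parity bits are assumed error-free, and group size and counts are illustrative): the sender attaches one parity bit per random group of packet bits, and the receiver estimates the bit error rate b by inverting the parity-failure probability p_fail = (1 - (1 - 2b)^k)/2 for group size k.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_groups(n_bits, n_groups, k, rng):
    """Random bit groups of size k; the sender appends one parity bit per group."""
    return [rng.choice(n_bits, size=k, replace=False) for _ in range(n_groups)]

n_bits, n_groups, k, ber_true = 4096, 2000, 32, 0.01
packet = rng.integers(0, 2, n_bits)
groups = make_groups(n_bits, n_groups, k, rng)
parities = [packet[g].sum() % 2 for g in groups]

# Channel flips each data bit independently with probability ber_true.
flips = rng.random(n_bits) < ber_true
received = packet ^ flips

# A group parity fails iff an odd number of its bits flipped, which happens
# with probability (1 - (1 - 2b)^k) / 2; invert that to estimate b.
fails = np.mean([received[g].sum() % 2 != p for g, p in zip(groups, parities)])
ber_est = (1.0 - (1.0 - 2.0 * fails) ** (1.0 / k)) / 2.0
```

Note that no error is corrected anywhere: the parity checks only *estimate* the packet's bit error rate, which is exactly the meta-information EEC is designed to provide.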

Binbin Chen; Ziling Zhou; Yuda Zhao; Haifeng Yu

2010-01-01

40

Evaluating Spontaneous Communication through Systematic Error Analysis.  

ERIC Educational Resources Information Center

Presents a method for classifying errors of second language learners for the purpose of: (1) evaluating information in samples of students' communication, (2) diagnosing individual learner needs, (3) developing individualized instructional materials, and (4) deciding which errors to correct first. Suggests applications of the method in…

Hendrickson, James M.

1979-01-01

41

Improved Systematic Pointing Error Model for the DSN Antennas  

NASA Technical Reports Server (NTRS)

New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for corrections of their systematic pointing errors, achieving significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translate to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, this innovation provides an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
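The model-extension idea can be sketched with synthetic data (the terms and coefficients below are illustrative assumptions, not the DSN model): fit a design matrix of traditional physical terms by least squares, then add higher-order harmonic terms and compare the pointing residuals.

```python
import numpy as np

rng = np.random.default_rng(2)
az = rng.uniform(0, 2 * np.pi, 500)   # azimuth samples (rad)
el = rng.uniform(0.2, 1.4, 500)       # elevation samples (rad)

# Synthetic elevation pointing error: traditional physical terms (axis tilt,
# gravitational flexure, encoder offset) plus one higher-order harmonic.
err = (20.0 * np.cos(az) * np.cos(el)   # axis tilt (arcsec, illustrative)
       + 8.0 * np.cos(el)               # gravitational flexure
       + 5.0                            # encoder fixed offset
       + 3.0 * np.sin(3 * az)           # higher-order systematic term
       + rng.normal(0, 0.5, az.size))   # random pointing noise

def fit_rms(columns):
    """Least-squares fit of the error model; return the residual RMS."""
    A = np.column_stack(columns)
    coef, *_ = np.linalg.lstsq(A, err, rcond=None)
    return np.sqrt(np.mean((err - A @ coef) ** 2))

ones = np.ones_like(az)
traditional = [ones, np.cos(el), np.cos(az) * np.cos(el), np.sin(az) * np.cos(el)]
extended = traditional + [np.sin(2 * az), np.cos(2 * az), np.sin(3 * az), np.cos(3 * az)]

rms_trad, rms_ext = fit_rms(traditional), fit_rms(extended)
```

The traditional basis cannot absorb the higher-order harmonic, so its residual RMS stays well above the noise floor; the extended basis recovers it.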

Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

2011-01-01

42

About systematic errors in charge-density studies.  

PubMed

The formerly introduced theoretical R values [Henn & Schönleber (2013). Acta Cryst. A69, 549-558] are used to develop a relative indicator of systematic errors in model refinements, R(meta), and applied to published charge-density data. The numerator of R(meta) gives an absolute measure of systematic errors in percentage points. The residuals (Io - Ic)/σ(Io) of published data are examined. It is found that most published models correspond to residual distributions that are not consistent with the assumption of a Gaussian distribution. The consistency with a Gaussian distribution, however, is important, as the model parameter estimates and their standard uncertainties from a least-squares procedure are valid only under this assumption. The effect of correlations introduced by the structure model is briefly discussed with the help of artificial data and discarded as a source of serious correlations in the examined example. Intensity and significance cutoffs applied in the refinement procedure are found to be mechanisms preventing residual distributions from becoming Gaussian. Model refinements against artificial data yield zero or close-to-zero values for R(meta) when the data are not truncated and small negative values in the case of application of a moderate cutoff Io > 0. It is well known from the literature that the application of cutoff values leads to model bias [Hirshfeld & Rabinovich (1973). Acta Cryst. A29, 510-513]. PMID:24815974
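The residual diagnostic described here can be sketched directly (R(meta) itself is not reproduced; a plain normality test on standardized residuals stands in for it, and the data are synthetic): Gaussian noise gives Gaussian residuals, while an intensity cutoff of the kind discussed in the abstract visibly distorts the distribution.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Well-behaved synthetic data: observed intensities scatter around the model
# values with purely Gaussian noise of known standard deviation sigma.
n = 2000
i_calc = rng.uniform(100.0, 1000.0, n)
sigma = np.sqrt(i_calc)                    # counting-statistics-like uncertainty
i_obs = i_calc + rng.normal(0, sigma)

res_good = (i_obs - i_calc) / sigma        # standardized residuals

# A crude intensity cutoff (keep only Io > Ic) mimics the truncation effects
# the abstract describes: the surviving residuals are no longer Gaussian.
res_cut = res_good[i_obs > i_calc]

p_good = stats.normaltest(res_good).pvalue
p_cut = stats.normaltest(res_cut).pvalue   # essentially zero: half-normal
```

The truncated residuals are half-normal, so any test calibrated against a Gaussian assumption rejects them overwhelmingly, while the untruncated residuals pass.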

Henn, Julian; Meindl, Kathrin

2014-05-01

43

Systematic error from nonlinear least squares computer fitting of Lorentzian lines  

Microsoft Academic Search

We show using Monte Carlo methods that serious systematic error may be introduced in the nonlinear least-squares (NLLSQ) fitting of Lorentzian line spectra of poor statistical quality or with too few data points in the lines. For such spectra we have found that the NLLSQ estimates of Lorentzian linewidth error are unreliable. This has the result of causing the weighted
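A minimal Monte Carlo in the spirit of this abstract (the line parameters, noise level, and number of points are illustrative assumptions, not the paper's setup): repeatedly fit noisy Lorentzian data with few points and compare the empirical scatter of the fitted linewidth with the error the fit itself reports.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)

def lorentzian(x, amp, x0, gamma):
    """Lorentzian line with amplitude amp, center x0, and half-width gamma."""
    return amp * gamma**2 / ((x - x0) ** 2 + gamma**2)

x = np.linspace(-5, 5, 15)                # deliberately few data points
amp, x0, gamma = 100.0, 0.0, 1.0

widths, reported = [], []
for _ in range(200):
    y = lorentzian(x, amp, x0, gamma) + rng.normal(0, 5.0, x.size)
    popt, pcov = curve_fit(lorentzian, x, y, p0=[80.0, 0.2, 1.5])
    widths.append(popt[2])                # fitted linewidth
    reported.append(np.sqrt(pcov[2, 2]))  # linewidth error reported by NLLSQ

empirical_scatter = np.std(widths)        # true spread of the width estimates
mean_reported = np.mean(reported)         # what NLLSQ claims on average
ratio = empirical_scatter / mean_reported
```

Comparing `ratio` against 1 for progressively worse statistics is the kind of check that exposes unreliable NLLSQ error estimates.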

Loren Pfeiffer; C. P. Lichtenwalner

1974-01-01

44

Systematic lossy forward error protection for video waveforms  

Microsoft Academic Search

A novel scheme for error-resilient digital video broadcasting, using Wyner-Ziv coding, is presented in this paper. We apply the general framework of systematic lossy source-channel coding to generate a supplementary bitstream that can correct transmission errors in the decoded video waveform up to a certain residual distortion. The systematic portion consists of a conventional MPEG-coded bitstream, which is transmitted over

Anne Aaron; Shantanu Rane; David Rebollo-monedero; Bernd Girod

2003-01-01

45

Systematic lossy forward error protection for error-resilient digital video broadcasting  

Microsoft Academic Search

We present a novel scheme for error-resilient digital video broadcasting, using the Wyner-Ziv coding paradigm. We apply the general framework of systematic lossy source-channel coding to generate a supplementary bitstream that can correct transmission errors in the decoded video waveform up to a certain residual distortion. The systematic portion consists of a conventional MPEG-coded bitstream, which is transmitted over the error-prone

Shantanu D. Rane; Anne Aaron; Bernd Girod

2004-01-01

46

A systematic approach toward error structure identification for impedance spectroscopy  

Microsoft Academic Search

The state-of-the-art is reviewed for the use of measurement models for assessing the stochastic and bias error structure of impedance measurements. The methods are illustrated for published impedance data that contain both capacitive and inductive components. This systematic error analysis demonstrates that, in spite of differences between sequential impedance scans and the appearance of inductive and incomplete capacitive loops, the

Mark E. Orazem

2004-01-01

47

Tackling systematic errors in quantum logic gates with composite rotations  

SciTech Connect

We describe the use of composite rotations to combat systematic errors in single-qubit quantum logic gates and discuss three families of composite rotations which can be used to correct off-resonance and pulse length errors. Although developed and described within the context of nuclear magnetic resonance quantum computing, these sequences should be applicable to any implementation of quantum computation.
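One standard composite-rotation family for pulse-length errors (BB1) can be checked numerically; this is a textbook construction used as an illustration, not necessarily the exact sequences of the paper. A naive pulse with a 10% length error is compared against the BB1-corrected sequence.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)

def rot(theta, phi):
    """Single-qubit rotation by theta about an axis at angle phi in the xy-plane."""
    axis = np.cos(phi) * SX + np.sin(phi) * SY
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def infidelity(u, v):
    """Gate infidelity, insensitive to global phase."""
    return 1.0 - abs(np.trace(u.conj().T @ v)) / 2.0

theta = np.pi / 2                      # target 90-degree rotation about x
eps = 0.1                              # 10% systematic pulse-length error
target = rot(theta, 0.0)

naive = rot(theta * (1 + eps), 0.0)

# BB1: follow the (erroneous) target pulse with three corrective pulses whose
# phases are chosen so that pulse-length errors cancel to high order.
phi = np.arccos(-theta / (4 * np.pi))
bb1 = (rot(np.pi * (1 + eps), phi)
       @ rot(2 * np.pi * (1 + eps), 3 * phi)
       @ rot(np.pi * (1 + eps), phi)
       @ rot(theta * (1 + eps), 0.0))

inf_naive = infidelity(target, naive)
inf_bb1 = infidelity(target, bb1)
```

The corrective three-pulse block is the identity when the pulses are perfect, so it costs nothing at zero error while cancelling the systematic error to high order when the pulses are miscalibrated.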

Cummins, Holly K.; Llewellyn, Gavin; Jones, Jonathan A. [Centre for Quantum Computation, Clarendon Laboratory, University of Oxford, Parks Road, OX1 3PU (United Kingdom)

2003-04-01

48

Systematic errors in medical decision making  

Microsoft Academic Search

Much of medical practice involves the exercise of such basic cognitive tasks as estimating probabilities and synthesizing information. Scientists studying cognitive processes have identified impediments to accurate performance on these tasks. Together the impediments foster "cognitive bias." Five factors that can detract from accurate probability estimation and three that impair accurate information synthesis are discussed. Examples of all eight factors

Neal V. Dawson; Hal R. Arkes

1987-01-01

49

Systematic Parameter Errors in Inspiraling Neutron Star Binaries  

NASA Astrophysics Data System (ADS)

The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled.

Favata, Marc

2014-03-01

50

On the detection of systematic errors in terrestrial laser scanning data  

NASA Astrophysics Data System (ADS)

Quality descriptions are part of the key tasks of geodetic data processing. Systematic errors should be detected and avoided in order to ensure the high quality standards required by structural monitoring. In this study, the iterative closest point (ICP) method was investigated as a means of detecting systematic errors in two overlapping data sets. There are three steps to process the systematic errors: firstly, one of the data sets was transformed to a reference system by the introduction of the Gauss-Helmert (GH) model. Secondly, quadratic form estimation and segmentation methods are proposed to guarantee the overlapping data sets. Thirdly, the ICP method was employed for a finer registration and for detecting the systematic errors. A case study was conducted in which a dam surface in Germany was scanned by terrestrial laser scanning (TLS) technology. The results indicated that with the application of the ICP algorithm the accuracy of the data sets was improved by approximately 1.6 mm.
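A minimal point-to-point ICP iteration (nearest-neighbour correspondences plus a closed-form rigid fit via SVD) can be sketched as follows; the GH-model registration and segmentation steps of the study are not reproduced, and the point clouds are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)

def best_rigid(src, dst):
    """Closed-form least-squares rotation/translation mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    u, _, vt = np.linalg.svd((src - cs).T @ (dst - cd))
    r = vt.T @ u.T
    if np.linalg.det(r) < 0:          # guard against reflections
        vt[-1] *= -1
        r = vt.T @ u.T
    return r, cd - r @ cs

def icp(src, dst, n_iter=20):
    """Iteratively match points to nearest neighbours and refit the transform."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(n_iter):
        _, idx = tree.query(cur)      # nearest-neighbour correspondences
        r, t = best_rigid(cur, dst[idx])
        cur = cur @ r.T + t
    return cur

# Reference scan and a slightly rotated/shifted copy (a systematic offset
# between two overlapping scans, in arbitrary units).
ref = rng.uniform(-1, 1, (500, 3))
ang = np.deg2rad(1.0)
rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0],
               [0, 0, 1]])
scan = ref @ rz.T + np.array([0.005, -0.003, 0.002])

aligned = icp(scan, ref)
rms = np.sqrt(np.mean(np.sum((aligned - ref) ** 2, axis=1)))
```

After ICP the residual RMS between the two scans is essentially zero, so any remaining discrepancy in real data can be attributed to systematic effects rather than the registration itself.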

Wang, Jin; Kutterer, Hansjoerg; Fang, Xing

2012-11-01

51

Ultrasonic error compensation method to correct instrumental and systematic errors in volumetric flow measurements: a theoretical study.  

PubMed

A computational model has been developed to observe the effects of several instrumental parameters and systematic error sources on ultrasonic volumetric flow measurements in vessels using an error compensation method proposed previously by the authors. The effects of two essential instrumental parameters, the gate length and pulse duration, on two intermediate flow results and an optimal correction factor as a function of the beam width-to-vessel diameter ratio (BWR) have been studied under steady flow conditions. The corrected flow results under typical combinations of those parameters show that by proper selection of a correction factor, the proposed method can compensate for those systematic errors. If the estimated BWR is within ±0.1 of the actual BWR value, the systematic error can be limited to within 6.5%. The study also shows that the compensation method can effectively reduce the errors caused by other systematic error sources such as the system sensitivity and high-pass filter function, as well as misalignment of the ultrasound beam from the center of the vessel. PMID:12102228

Fei, Ding-Yu; Fu, Cai-Ting; Chen, Xunchang

2002-01-01

52

Estimating Bias Errors in the GPCP Monthly Precipitation Product  

NASA Astrophysics Data System (ADS)

Climatological data records are important to understand global and regional variations and trends. The Global Precipitation Climatology Project (GPCP) record of monthly, globally complete precipitation analyses stretches back to 1979 and is based on a merger of both satellite and surface gauge records. It is a heavily used data record—cited in over 1500 journal papers. It is important that these types of data records also include information about the uncertainty of the presented estimates. Indeed the GPCP monthly analysis already includes estimates of the random error, due to algorithm and sampling random error, associated with each gridded, monthly value (Huffman, 1997). It is also important to include estimates of bias error, i.e., the uncertainty of the monthly value (or climatology) in terms of its absolute value. Results are presented based on a procedure (Adler et al., 2012) to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources and merged products. The GPCP monthly product is used as a base precipitation estimate, with other input products included when they are within ±50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation (σ) of the included products is then taken to be the estimated systematic, or bias, error. The results allow us to first examine monthly climatologies and the annual climatology producing maps of estimated bias errors, zonal mean errors and estimated errors over large areas, such as ocean and land for both the tropics and for the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where we should be more or less confident of our mean precipitation estimates. In the tropics, relative bias error estimates (σ/μ, where μ is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, compared to 10-15% in the western Pacific part of the ITCZ. Examining latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold season errors at high latitudes due to snow. Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, considered to be an upper bound due to lack of sign-of-the-error cancelling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of our current state of knowledge of the planet's mean precipitation. The bias uncertainty procedure is being extended so that it can be applied to individual months (e.g., January 1998) on the GPCP grid (2.5 degree latitude-longitude). Validation of the bias estimates and the monthly random error estimates will also be presented.
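The multi-product procedure can be sketched in a few lines (the numbers are illustrative, not GPCP data): keep only products within ±50% of the base estimate, then take their standard deviation as the bias error and divide by their mean for the relative error.

```python
import numpy as np

# Zonal-mean precipitation estimates (mm/day) for one latitude band from a
# base product and several other hypothetical algorithm/satellite products.
base = 3.0
products = np.array([2.7, 3.2, 3.4, 2.9, 6.1, 1.2])  # last two are outliers

# Include only products within +/-50% of the base estimate.
included = products[(products >= 0.5 * base) & (products <= 1.5 * base)]

bias_error = included.std(ddof=0)          # spread taken as the bias error
relative_bias = bias_error / included.mean()
```

The ±50% screen removes the two outlying products, so the quoted bias error reflects the spread of mutually consistent estimates rather than the full raw scatter.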

Sapiano, M. R.; Adler, R. F.; Gu, G.; Huffman, G. J.

2012-12-01

53

Estimation of coordinate measuring machine error parameters  

Microsoft Academic Search

Positioning accuracies of computer controlled machines such as robots, CNC machines and coordinate measuring machines (CMMs) are of great importance to their operations. An estimation concept for quantifying the error parameters based on error indications from ball-bar testing is described. A preliminary simulation study was performed, and it has been concluded that the concept is feasible and is worth further

J. Chen; Y. F. Chen

1987-01-01

54

Error Estimates for Numerical Integration Rules  

ERIC Educational Resources Information Center

The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.

Mercer, Peter R.

2005-01-01

55

Error analysis of global rainfall estimation algorithms  

NASA Astrophysics Data System (ADS)

Methodologies for quantifying the uncertainty of global, monthly rainfall estimates averaged over 2.5° x 2.5° latitude/longitude areas are developed. The rainfall estimation techniques studied here are used in the Global Precipitation Climatology Project (GPCP) to produce global products. The types of data used in the GPCP algorithms are raingage data, infrared satellite data, microwave satellite data over oceans, and microwave satellite data over land. Many uncertainty sources have been identified by algorithm developers and an attempt is made to quantify their contribution to the total estimation error at the scales used by the GPCP. The objective of this thesis is to develop expressions for the bias and random errors of four rainfall estimation algorithms as functions of globally observable variables. In addition, the methodologies developed in this thesis provide a starting point for future error analyses of these rainfall estimation algorithms. Two recently developed error estimation methods for rainfall estimation from raingage data are analyzed to demonstrate the accuracy of application of these methods to the resolution of the GPCP products (monthly estimates over 2.5° x 2.5° latitude/longitude areas). Multivariate regressions are developed to model the bias of the GOES Precipitation Index (GPI) infrared rainfall estimation algorithm. The predictor variables in these regressions are taken from the data produced by the International Satellite Cloud Climatology Project (ISCCP). A Monte Carlo simulation study is performed to quantify the bias and random errors of the microwave emission algorithm used by the GPCP for oceanic rainfall estimates as functions of the estimated rainfall. The errors of the microwave scattering algorithm used by the GPCP to estimate rainfall over land are expressed as functions of time and space.
The quantitative error expressions developed in this study provide new information that is consistent with prior knowledge of the error characteristics of the algorithms. These error estimates, when compared with the rainfall estimates from each algorithm, result in a product that will benefit both algorithm developers and users of the rainfall estimates.

McCollum, Jeffrey Richard

56

Systematic model forecast error in Rossby wave structure  

NASA Astrophysics Data System (ADS)

Diabatic processes can alter Rossby wave structure; consequently, errors arising from model processes propagate downstream. However, the chaotic spread of forecasts from initial condition uncertainty renders it difficult to trace back from root-mean-square forecast errors to model errors. Here diagnostics unaffected by phase errors are used, enabling investigation of systematic errors in Rossby waves in winter season forecasts from three operational centers. Tropopause sharpness adjacent to ridges decreases with forecast lead time. It depends strongly on model resolution, even though models are examined on a common grid. Rossby wave amplitude decreases over the first 5 days of lead time, consistent with underrepresentation of diabatic modification and transport of air from the lower troposphere into upper tropospheric ridges, and with too weak humidity gradients across the tropopause. However, amplitude also decreases when resolution is decreased. Further work is necessary to isolate the contribution from errors in the representation of diabatic processes.

Gray, S. L.; Dunning, C. M.; Methven, J.; Masato, G.; Chagnon, J. M.

2014-04-01

57

Jason-2 systematic error analysis in the GPS derived orbits  

NASA Astrophysics Data System (ADS)

Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinates adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding to the linear ITRF model the geophysical high-frequency variations. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose in a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS versus our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking, and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced-dynamic versus dynamic orbit differences are used to characterize the remaining force model error and TRF instability. At first, we quantify the effect of a North/South displacement of the tracking reference points for each of the three techniques. We then compare these results to the study of Morel and Willis (2005) and Cerri et al. (2010). We extend the analysis to the most recent Jason-2 cycles. We evaluate the GPS vs SLR & DORIS orbits produced using GEODYN.

Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

2011-12-01

58

Treatment of systematic errors in land data assimilation systems  

NASA Astrophysics Data System (ADS)

Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
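The "variance matching" rescaling mentioned here is simple to state (a sketch with synthetic series; the optimal triple-collocation-based scaling discussed in the abstract is not reproduced): shift and scale the observations so their mean and variance match the model's.

```python
import numpy as np

rng = np.random.default_rng(6)

# Model soil-moisture series and a satellite product with a systematic offset
# and a different dynamic range (illustrative synthetic data).
truth = rng.normal(0.25, 0.05, 1000)
model = truth + rng.normal(0, 0.01, truth.size)
obs = 0.1 + 2.5 * truth + rng.normal(0, 0.02, truth.size)

# Variance matching: rescale obs to the model's mean and standard deviation,
# removing the systematic difference in the first two moments before assimilation.
obs_scaled = model.mean() + (obs - obs.mean()) * (model.std() / obs.std())
```

This removes systematic differences in mean and variance by construction; as the abstract notes, it is only a sub-optimal approximation to the rescaling that would be optimal for assimilation.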

Crow, W. T.; Yilmaz, M.

2012-12-01

59

Systematic lossy error protection versus layered coding with unequal error protection  

NASA Astrophysics Data System (ADS)

In this paper we compare two schemes for error-resilient video transmission, viz., systematic lossy error protection, and scalable coding with unequal error protection. In the first scheme, the systematic portion consists of the compressed video signal transmitted without channel coding. For error-resilience, an additional bitstream generated by Wyner-Ziv encoding of the video signal is transmitted. In the event of channel errors, the Wyner-Ziv description is decoded using the error-prone systematic description as side-information. In the second scheme, the video bitstream is partitioned into two or more layers, and each layer is assigned different amounts of parity information depending upon its relative significance. Since the base layer has heavy protection, a certain minimum video quality is guaranteed at the receiver. We compare experimentally, the performance of the competing schemes, for a particular application, i.e., Error-resilient digital video broadcasting. It is shown that systematic lossy error protection ensures graceful degradation of video quality without incurring the loss in rate-distortion performance characteristic of practical layered video coding schemes.

Rane, Shantanu D.; Girod, Bernd

2005-03-01

60

Bayes Error Rate Estimation Using Classifier Ensembles  

NASA Technical Reports Server (NTRS)

The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes; unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
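The first (averaging) method can be sketched on synthetic data with a known Bayes rate; the ensemble members here are idealized noisy posterior estimators rather than trained classifiers, purely for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

# Two equal-prior 1-D Gaussian classes; the true Bayes error is Phi(-d/2).
mu0, mu1, n = 0.0, 2.0, 100000
labels = rng.integers(0, 2, n)
x = rng.normal(np.where(labels == 0, mu0, mu1), 1.0)

true_bayes = norm.cdf(-(mu1 - mu0) / 2)          # analytic Bayes error

# True posterior P(class 1 | x), and an ensemble of noisy estimators of it.
p1 = 1.0 / (1.0 + np.exp(-(mu1 - mu0) * (x - (mu0 + mu1) / 2)))
ensemble = np.clip(p1 + rng.normal(0, 0.05, (25, n)), 0.0, 1.0)
p1_avg = ensemble.mean(axis=0)                   # averaging reduces estimator noise

# Plug-in Bayes-error estimate: expected probability of the non-argmax class.
bayes_est = np.mean(np.minimum(p1_avg, 1.0 - p1_avg))
```

Averaging the 25 noisy posterior estimates shrinks the estimator noise by a factor of five, so the plug-in estimate lands close to the analytic Bayes rate.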

Tumer, Kagan; Ghosh, Joydeep

2003-01-01

61

MAXIMUM LIKELIHOOD ANALYSIS OF SYSTEMATIC ERRORS IN INTERFEROMETRIC OBSERVATIONS OF THE COSMIC MICROWAVE BACKGROUND  

SciTech Connect

We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that does not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.°7 root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

Zhang Le; Timbie, Peter [Department of Physics, University of Wisconsin, Madison, WI 53706 (United States); Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Sutter, Paul M.; Wandelt, Benjamin D. [Department of Physics, 1110 W Green Street, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Bunn, Emory F., E-mail: lzhang263@wisc.edu [Physics Department, University of Richmond, Richmond, VA 23173 (United States)

2013-06-01

62

Maximum Likelihood Analysis of Systematic Errors in Interferometric Observations of the Cosmic Microwave Background  

NASA Astrophysics Data System (ADS)

We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that do not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.°7 root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

Zhang, Le; Karakci, Ata; Sutter, Paul M.; Bunn, Emory F.; Korotkov, Andrei; Timbie, Peter; Tucker, Gregory S.; Wandelt, Benjamin D.

2013-06-01

63

Errors-in-variables estimation with wavelets  

Microsoft Academic Search

This paper proposes a wavelet (spectral) approach to estimate the parameters of a linear regression model where the regressand and the regressors are persistent processes and contain a measurement error. We propose a wavelet filtering approach which does not require instruments and yields unbiased estimates for the intercept and the slope parameters. Our Monte Carlo results also show that the

Ramazan Gençay; Nikola Gradojevic

2011-01-01

64

Estimating error rates in bioactivity databases.  

PubMed

Bioactivity databases are routinely used in drug discovery to look up and, using prediction tools, to predict potential targets for small molecules. These databases are typically manually curated from patents and scientific articles. Apart from errors in the source document, the human factor can cause errors during the extraction process. These errors can lead to wrong decisions in the early drug discovery process. In the current work, we have compared bioactivity data from three large databases (ChEMBL, Liceptor, and WOMBAT) which have curated data from the same source documents. As a result, we are able to report error rate estimates for individual activity parameters and individual bioactivity databases. Small molecule structures have the greatest estimated error rate, followed by target, activity value, and activity type. This order is also reflected in supplier-specific error rate estimates. The results are also useful in identifying data points for recuration. We hope the results will lead to a more widespread awareness among scientists of the frequencies and types of errors in bioactivity data. PMID:24160896
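The cross-database comparison can be sketched as a majority vote: where two of three databases agree on a curated value and the third differs, the odd one out is counted as a likely error. The records below are entirely hypothetical (not real ChEMBL/Liceptor/WOMBAT data); the study matched curated activity values from the same source documents.

```python
# Three hypothetical databases curating the same activity value from the
# same four source documents (keys are document IDs, values are pActivity).
dbs = {
    "A": {1: 7.2, 2: 6.5, 3: 8.0, 4: 5.1},
    "B": {1: 7.2, 2: 6.5, 3: 8.3, 4: 5.1},
    "C": {1: 7.2, 2: 6.9, 3: 8.0, 4: 5.1},
}

def error_rate(name):
    """Fraction of a database's entries that disagree with the majority value."""
    errors = total = 0
    for doc in dbs[name]:
        values = [dbs[d][doc] for d in dbs]
        majority = max(set(values), key=values.count)
        if values.count(majority) >= 2:      # only count docs where a majority exists
            total += 1
            errors += dbs[name][doc] != majority
    return errors / total

rates = {name: error_rate(name) for name in dbs}
print(rates)  # A agrees with every majority; B and C each differ on one of four entries
```

With more than three sources the same majority-vote idea generalizes, though (as the abstract notes) errors already present in the source document cannot be detected this way.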

Tiikkainen, Pekka; Bellis, Louisa; Light, Yvonne; Franke, Lutz

2013-10-28

65

Systematic capacitance matching errors and corrective layout procedures  

Microsoft Academic Search

Precise capacitor ratios are employed in a variety of analog and mixed signal integrated circuits. The use of identical unit capacitors to form larger capacitances can easily produce 1% accuracy, but, in many cases, 0.1% accuracy can provide important performance advantages. Unfortunately, the ultimate matching precision of the ratio is limited by a number of systematic and random error sources.

M. J. McNutt; S. LeMarquis; J. L. Dunkley

1994-01-01

66

A Systematic Error in Tests of Ideal Free Theory  

Microsoft Academic Search

Classical ideal free theory predicts that the distribution of consumers within a patchy environment should correspond to the distribution of resources. Tests of this prediction have inappropriately compared ratios of mean resource levels and mean consumer densities, rather than means of ratios. We show that this error, which has propagated through hundreds of studies, leads to a systematic bias: the
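The flawed comparison can be reproduced in a few lines of arithmetic. With hypothetical patch values, the ratio of mean resources to mean consumers differs from the mean of the per-patch ratios whenever those ratios vary across patches (Jensen's inequality):

```python
# Two patches with differing resource/consumer ratios (values are hypothetical).
resources = [10.0, 20.0]
consumers = [10.0, 5.0]

# Ratio of means: what many tests of ideal free theory actually compared.
ratio_of_means = (sum(resources) / 2) / (sum(consumers) / 2)           # 15 / 7.5 = 2.0

# Mean of per-patch ratios: the quantity the prediction concerns.
mean_of_ratios = sum(r / c for r, c in zip(resources, consumers)) / 2  # (1 + 4) / 2 = 2.5

print(ratio_of_means, mean_of_ratios)  # 2.0 vs 2.5: a systematic discrepancy
```

The two quantities coincide only when the per-patch ratios are constant, which is exactly the condition under test, so using the ratio of means builds the expected answer into the comparison.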

David J. D. Earn; Rufus A. Johnstone

1997-01-01

67

Seeing Your Error Alters My Pointing: Observing Systematic Pointing Errors Induces Sensori-Motor After-Effects  

PubMed Central

During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual targets' location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual, and visual-proprioceptive shift) were recorded before and after first-person observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion “to feel” the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farne, Alessandro

2011-01-01

68

The Effect of Systematic Error in Forced Oscillation Testing  

NASA Technical Reports Server (NTRS)

One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates, the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high-fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

2012-01-01

69

Weak gravitational lensing systematic errors in the dark energy survey  

NASA Astrophysics Data System (ADS)

Dark energy is one of the most important unsolved problems in modern physics, and weak gravitational lensing (WL) by mass structures along the line of sight ("cosmic shear") is a promising technique to learn more about its nature. However, WL is subject to numerous systematic errors which induce biases in measured cosmological parameters and prevent the development of its full potential. In this thesis, we advance the understanding of WL systematics in the context of the Dark Energy Survey (DES). We develop a testing suite to assess the performance of the shapelet-based DES WL measurement pipeline. We determine that the measurement bias of the parameters of our Point Spread Function (PSF) model scales as (S/N)^-2, implying that a PSF S/N > 75 is needed to satisfy DES requirements. PSF anisotropy suppression also satisfies the requirements for source galaxies with S/N ≳ 45. For low-noise, marginally resolved exponential galaxies, the shear calibration errors are up to about 0.06% (for shear values ≲ 0.075). Galaxies with S/N ≈ 75 present about 1% errors, sufficient for first-year DES data. However, more work is needed to satisfy full-area DES requirements, especially in the high-noise regime. We then implement tests to validate the high accuracy of the map between pixel coordinates and sky coordinates (astrometric solution), which is crucial to detect the required number of galaxies for WL in stacked images. We also study the effect of atmospheric dispersion on cosmic shear experiments such as DES and the Large Synoptic Survey Telescope (LSST) in the four griz bands. For DES (LSST), we find systematics in the g and r (g, r, and i) bands that are larger than required. We find that a simple linear correction in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r (i) band for DES (LSST). More complex corrections will likely reduce the systematic cosmic-shear errors below statistical errors for the LSST r band. However, g-band dispersion effects remain large enough for induced systematics to dominate the statistical error of both surveys, so cosmic-shear measurements should rely on the redder bands.

Plazas, Andres Alejandro

70

Systematic lossy forward error protection for error-resilient digital video broadcasting  

NASA Astrophysics Data System (ADS)

We present a novel scheme for error-resilient digital video broadcasting, using the Wyner-Ziv coding paradigm. We apply the general framework of systematic lossy source-channel coding to generate a supplementary bitstream that can correct transmission errors in the decoded video waveform up to a certain residual distortion. The systematic portion consists of a conventional MPEG-coded bitstream, which is transmitted over the error-prone channel without forward error correction. The supplementary bitstream is a low-rate representation of the transmitted video sequence generated using Wyner-Ziv encoding. We use the conventionally decoded error-concealed MPEG video sequence as side information to decode the Wyner-Ziv bits. The decoder combines the error-prone side information and the Wyner-Ziv description to yield an improved decoded video signal. Our results indicate that, over a large range of channel error probabilities, this scheme yields superior video quality when compared with traditional forward error correction techniques employed in digital video broadcasting.

Rane, Shantanu D.; Aaron, Anne; Girod, Bernd

2004-01-01

71

Investigating Systematic Errors in Iodine Cell Radial Velocity Measurements  

NASA Astrophysics Data System (ADS)

Astronomers have made precise stellar radial velocity measurements using an iodine cell as a calibrator since the 1980s. These measurements have led to the discovery of hundreds of extrasolar planets, and have contributed to the characterization of many more. The precision of these measurements is limited by systematic errors caused primarily by the instability of the spectrographs used to acquire data, and which are not properly modeled in the data analysis process. We present an investigation of ways to mitigate and better model these systematic effects in data analysis. Such an improvement in the radial velocity analysis process would be readily applicable to twenty years worth of radial velocity data.

Vanderburg, Andrew; Marcy, G. W.; Johnson, J. A.

2014-01-01

72

SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION  

SciTech Connect

Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ−1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of ⟨δ(γ)⟩ ≈ -0.1, while the error is increased to σ_γ ≈ 0.2, compared to σ_γ ≈ 0.1 in the absence of continuum errors.

Lee, Khee-Gan, E-mail: lee@astro.princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

2012-07-10

73

Reducing systematic errors in measurements made by a SQUID magnetometer  

NASA Astrophysics Data System (ADS)

A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors - radial displacement in particular - and not by instrumental or environmental noise.

Kiss, L. F.; Kaptás, D.; Balogh, J.

2014-11-01

74

Reducing Measurement Error in Student Achievement Estimation  

ERIC Educational Resources Information Center

The achievement level is a variable measured with error, that can be estimated by means of the Rasch model. Teacher grades also measure the achievement level but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of the achievement level based on the theory developed…

Battauz, Michela; Bellio, Ruggero; Gori, Enrico

2008-01-01

75

Estimation of Error Rates in Discriminant Analysis  

Microsoft Academic Search

Several methods of estimating error rates in Discriminant Analysis are evaluated by sampling methods. Multivariate normal samples are generated on a computer which have various true probabilities of misclassification for different combinations of sample sizes and different numbers of parameters. The two methods in most common use are found to be significantly poorer than some new methods that are proposed.
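The contrast the paper investigates can be sketched with a deliberately simple nearest-mean discriminant (a stand-in, not one of the methods evaluated there): resubstitution scores the rule on its own training data and is optimistically biased, while leave-one-out refits without each held-out observation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 1-D Gaussian classes; the rule assigns each point to the nearer class mean.
n = 100
X = np.concatenate([rng.normal(-1.0, 1.0, n), rng.normal(1.0, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

def predict(x, mu0, mu1):
    return (np.abs(x - mu1) < np.abs(x - mu0)).astype(float)

# Resubstitution: train and test on the same data (optimistically biased).
mu0, mu1 = X[y == 0].mean(), X[y == 1].mean()
resub = float(np.mean(predict(X, mu0, mu1) != y))

# Leave-one-out: refit the class means without the held-out observation.
errors = 0
for i in range(len(X)):
    mask = np.ones(len(X), dtype=bool)
    mask[i] = False
    Xt, yt = X[mask], y[mask]
    m0, m1 = Xt[yt == 0].mean(), Xt[yt == 1].mean()
    errors += int(predict(X[i:i + 1], m0, m1)[0] != y[i])
loo = errors / len(X)
print(resub, loo)  # both near the true error of roughly 0.16 for this simple rule
```

For this easy one-dimensional problem the two estimates are close; the differences the paper quantifies become important for small samples and many parameters.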

Peter A. Lachenbruch; M. Ray Mickey

1968-01-01

76

Error analysis for general multivariate kernel estimators  

Microsoft Academic Search

Kernel estimators for d-dimensional data are usually parametrized by either a single smoothing parameter, or d smoothing parameters corresponding to each of the coordinate directions. A generalization of each of these parameterizations is to use a d × d matrix which allows smoothing in arbitrary directions. We demonstrate that, at this level of generality, the usual error approximations and their

M. P. Wand

1992-01-01

77

Removing Systematic Error in Node Localisation Using Scalable Data Fusion  

Microsoft Academic Search

Methods for node localisation in sensor networks usually rely upon the measurement of received signal strength, time-of-arrival, and/or angle-of-arrival of an incoming signal. In this paper, we propose a method for achieving higher accuracy by combining redundant measurements taken by different nodes. This method is aimed at compensating for the systematic errors which are dependent on the specific nodes used, as

Albert Krohn; Mike Hazas; Michael Beigl

2007-01-01

78

Robustness of single-qubit geometric gate against systematic error  

SciTech Connect

Universal single-qubit gates are constructed from a basic Bloch rotation operator realized through nonadiabatic Abelian geometric phase. The driving Hamiltonian in a generic two-level model is parameterized using controllable physical variables. The fidelity of the basic geometric rotation operator is investigated in the presence of systematic error in control parameters, such as the driving pulse area and frequency detuning. Compared to a conventional dynamic rotation, the geometric rotation shows improved fidelity.

Thomas, J. T.; Lababidi, Mahmoud; Tian, Mingzhen [School of Physics, Astronomy and Computational Sciences, George Mason University, Fairfax, Virginia 22030 (United States)

2011-10-15

79

Nonnull testing of rotationally symmetric aspheres: a systematic error assessment.  

PubMed

An optical setup for the testing of rotationally symmetric aspheres without a null optic is proposed. The optical setup is able to transfer the strongly curved wave fronts that stem from the reflection of a spherical testing wave front at a rotationally symmetric asphere. By simulation it is proved that the algorithms of the Shack-Hartmann sensor that is used can cope with the steep wave-front slopes (approximately 110λ/mm) in the detection plane. The systematic errors of the testing configuration are analyzed and separated. For all types of error, functionals are derived whose significance is proved by simulation. The maximum residual errors in the simulations are less than λ/500 (peak to valley). PMID:18357016

Pfund, J; Lindlein, N; Schwider, J

2001-02-01

80

Geodesy by radio interferometry: Effects of atmospheric modeling errors on estimates of baseline length  

Microsoft Academic Search

Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere

J. L. Davis; T. A. Herring; I. I. Shapiro; A. E. E. Rogers; G. Elgered

1985-01-01

81

Systematic Approach for Decommissioning Planning and Estimating  

SciTech Connect

Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate, and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost, and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises.

Dam, A. S.

2002-02-26

82

A posteriori error estimation techniques in practical finite element analysis  

Microsoft Academic Search

In this paper we review the basic concepts to obtain a posteriori error estimates for the finite element solution of an elliptic linear model problem. We give the basic ideas to establish global error estimates for the energy norm as well as goal-oriented error estimates. While we show how these error estimation techniques are employed for our simple model problem,

Thomas Grätsch; Klaus-Jürgen Bathe

2005-01-01

83

ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES  

SciTech Connect

In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, for which SFH measurements rely on evolved stars.

Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ 85734 (United States)

2012-05-20

84

Effects of systematic sampling on satellite estimates of deforestation rates  

NASA Astrophysics Data System (ADS)

Options for satellite monitoring of deforestation rates over large areas include the use of sampling. Sampling may reduce the cost of monitoring but is also a source of error in estimates of areas and rates. A common sampling approach is systematic sampling, in which sample units of a constant size are distributed in some regular manner, such as a grid. The proposed approach for the 2010 Forest Resources Assessment (FRA) of the UN Food and Agriculture Organization (FAO) is a systematic sample of 10 km wide squares at every 1° intersection of latitude and longitude. We assessed the outcome of this and other systematic samples for estimating deforestation at national, sub-national and continental levels. The study is based on digital data on deforestation patterns for the five Amazonian countries outside Brazil plus the Brazilian Amazon. We tested these schemes by varying sample-unit size and frequency. We calculated two estimates of sampling error. First we calculated the standard errors, based on the size, variance and covariance of the samples, and from this calculated the 95% confidence intervals (CI). Second, we calculated the actual errors, based on the difference between the sample-based estimates and the estimates from the full-coverage maps. At the continental level, the 1°, 10 km scheme had a CI of 21% and an actual error of 8%. At the national level, this scheme had CIs of 126% for Ecuador and up to 67% for other countries. At this level, increasing sampling density to every 0.25° produced a CI of 32% for Ecuador and CIs of up to 25% for other countries, with only Brazil having a CI of less than 10%. Actual errors were within the limits of the CIs in all but two of the 56 cases. Actual errors were half or less of the CIs in all but eight of these cases. These results indicate that the FRA 2010 should have CIs of smaller than or close to 10% at the continental level. However, systematic sampling at the national level yields large CIs unless the sample size is very large, especially if any sub-national stratification of estimates is required.
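The sampling design can be mimicked on a synthetic map. The sketch below draws a systematic grid sample from a hypothetical patchy "deforestation" field, forms a simple standard-error-based CI, and compares it with the actual error against the full map. The field, grid spacing, and simple-random-sampling variance formula are all illustrative simplifications of the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 100 x 100 map of per-cell deforestation fraction; raising
# uniform noise to a power gives a skewed, patchy-looking field.
field = rng.random((100, 100)) ** 4
true_mean = field.mean()

# Systematic sample: every 10th cell in each direction (n = 100 cells).
sample = field[::10, ::10].ravel()
est = sample.mean()

# Simple-random-sampling formula for the standard error (an approximation:
# systematic samples admit no unbiased design-based variance estimator).
se = sample.std(ddof=1) / np.sqrt(sample.size)
ci_halfwidth = 1.96 * se

actual_error = abs(est - true_mean)
print(round(est, 3), round(ci_halfwidth, 3), round(actual_error, 3))
```

Comparing `actual_error` against `ci_halfwidth` over many synthetic maps reproduces the paper's two error measures: the computed CI and the realized deviation from the full-coverage truth.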

Steininger, M. K.; Godoy, F.; Harper, G.

2009-09-01

85

Minimization and estimation of geoid undulation errors  

Microsoft Academic Search

The objective of this paper is to minimize the geoid undulation errors by focusing on the contribution of the global geopotential model and regional gravity anomalies, and to estimate the accuracy of the predicted gravimetric geoid. The geopotential model's contribution is improved by (a) tailoring it using the regional gravity anomalies and (b) introducing a weighting function to the geopotential

Ye Cai Li; Michael G. Sideris

1994-01-01

86

Errors from digitizing and noise in estimating attractor dimensions  

Microsoft Academic Search

Two systematic errors in the calculation of attractor dimensions using the Grassberger-Procaccia algorithm are considered. Random noise added to the data causes overestimates. Digitizing (quantizing) the data causes underestimates. Methods for reduction of these errors are presented.
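Both effects are easy to reproduce with a toy two-point Grassberger-Procaccia estimate. Points on a line have dimension 1; adding random noise inflates the estimated slope at small radii, while coarse quantization deflates it. The radii, sample size, and noise level below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def corr_dim(pts, r1, r2):
    """Two-point correlation-sum slope log(C(r2)/C(r1)) / log(r2/r1)."""
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    return np.log((d < r2).mean() / (d < r1).mean()) / np.log(r2 / r1)

t = rng.random(800)
line = np.column_stack([t, t])              # points on a line: dimension 1

clean = corr_dim(line, 0.02, 0.2)
noised = corr_dim(line + rng.normal(0, 0.02, line.shape), 0.02, 0.2)
digitized = corr_dim(np.round(line, 1), 0.02, 0.2)  # coarse quantization

# Noise fills a small ball around each point (slope pushed toward 2);
# quantization collapses nearby points onto grid sites (slope pushed low).
print(round(clean, 2), round(noised, 2), round(digitized, 2))
```

The estimate for the clean data comes out near 1, with the noisy estimate biased high and the quantized estimate biased low, matching the two systematic errors described above.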

M. Möller; W. Lange; F. Mitschke; N. B. Abraham; U. Hübner

1989-01-01

87

Estimating Prediction Error: Cross-Validation vs. Accumulated Prediction Error  

Microsoft Academic Search

We study the validation of prediction rules such as regression models and classification algorithms through two out-of-sample strategies, cross-validation and accumulated prediction error. We use the framework of Efron (1983) where measures of prediction errors are defined as sample averages of expected errors and show through exact finite sample calculations that cross-validation and accumulated prediction error yield different smoothing parameter
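The two strategies can be contrasted on the simplest possible prediction rule, the sample mean (a toy stand-in for the regression and classification rules studied in the paper): cross-validation averages hold-out errors over splits, while accumulated prediction error scores each observation using only the data that precede it.

```python
import numpy as np

rng = np.random.default_rng(6)

# Observations with mean 5 and unit noise variance; the "rule" predicts the mean.
y = rng.normal(5.0, 1.0, 200)
n = y.size

# Leave-one-out cross-validation: hold out each point, predict with the rest.
cv = float(np.mean([(y[i] - np.delete(y, i).mean()) ** 2 for i in range(n)]))

# Accumulated prediction error: predict each point from those before it.
ape = float(np.mean([(y[i] - y[:i].mean()) ** 2 for i in range(1, n)]))

print(round(cv, 2), round(ape, 2))  # both estimate prediction error near the noise variance 1
```

Even in this trivial case the two criteria weight the data differently (the accumulated estimate leans on small early samples), which is the kind of difference the paper traces through to smoothing-parameter selection.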

Jenny Häggström; Xavier de Luna

2010-01-01

88

Simulation-Extrapolation Estimation in parametric measurement error models  

Microsoft Academic Search

We describe a simulation-based method of inference for parametric measurement error models in which the measurement error variance is known or at least well estimated. The method entails adding additional measurement error in known increments to the data, computing estimates from the contaminated data, establishing a trend between these estimates and the variance of the added errors, and extrapolating this
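The method described is concrete enough to sketch end to end: simulate regression data whose regressor is observed with known measurement-error variance, add extra error in known increments, fit the naive slope at each level, and extrapolate the trend back to the error-free case (variance increment -1). The quadratic extrapolant and all numbers below are illustrative choices, not the authors' specification:

```python
import numpy as np

rng = np.random.default_rng(4)

# True model y = 2x + noise; x observed as w = x + u with known Var(u).
n, beta, sigma_u = 5000, 2.0, 0.5
x = rng.normal(0.0, 1.0, n)
y = beta * x + rng.normal(0.0, 0.5, n)
w = x + rng.normal(0.0, sigma_u, n)

def slope(xx, yy):
    return np.cov(xx, yy)[0, 1] / np.var(xx, ddof=1)

naive = slope(w, y)   # attenuated toward 0: roughly beta / (1 + sigma_u**2) = 1.6

# Simulation step: contaminate w with extra error of variance zeta * sigma_u**2.
zetas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([slope(w + rng.normal(0.0, np.sqrt(z) * sigma_u, n), y)
                   for _ in range(20)])
          for z in zetas]

# Extrapolation step: fit the trend in zeta and evaluate at zeta = -1,
# the point corresponding to zero measurement error.
simex = float(np.polyval(np.polyfit(zetas, slopes, 2), -1.0))
print(round(naive, 2), round(simex, 2))  # SIMEX estimate much closer to the true slope 2
```

The quadratic extrapolant does not remove the bias exactly (the true attenuation curve is a ratio, not a polynomial), which is why the choice of extrapolant matters in practice.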

J. R. Cook; L. A. Stefanski

1994-01-01

89

Estimation of Satellite-Rainfall Error Correlation  

NASA Astrophysics Data System (ADS)

With many satellite rainfall products being available for long periods, it is important to assess and validate the algorithms estimating the rainfall rates for these products. Many studies have evaluated the uncertainty of satellite rainfall products over different parts of the world by comparing them to rain-gauge and/or radar rainfall products. In preparation for the field experiment Iowa Flood Studies (IFloodS), one of the integrated validation activities of the Global Precipitation Measurement mission, we are evaluating three popular satellite-based products for the IFloodS domain of the upper Midwest in the US. One of the relevant questions is the determination of the covariance (correlation) of rainfall errors in space and time for the domain. Three satellite rainfall products have been used in this study, and a radar rainfall product has been used as a ground reference. The three rainfall products are TRMM's TMPA 3B42 V7, CPC's CMORPH, and CHRS at UCI's PERSIANN. All the satellite rainfall products used in this study represent 3-hourly, quarter-degree rainfall accumulations. Our ground reference is NCEP Stage IV radar-rainfall, which is available at an hourly, four-kilometer resolution. We discuss the adequacy of the Stage IV product as a ground reference for evaluating the satellite products. We used our rain gauge network in Iowa to evaluate the performance of the Stage IV data on different spatial and temporal scales. While arguably this adequacy is only marginal, we used the radar products to study the spatial and temporal correlation of the satellite product errors. We studied the behavior of the errors, defined as the difference between the satellite and radar products (with matched space-time resolution), during the period from 2004 through 2010. Our results show that the error behavior of the satellite rainfall products is quite similar. Errors are less correlated during warm seasons, and the errors of CMORPH and PERSIANN are more correlated than those of TRMM through the study period. We calculated the correlation distance for the different products; it was approximately 75 km. The results also show that the correlation decays considerably with time lag. Our results have implications for hydrologic studies using satellite data, as the error correlation determines the basin scales that can effectively filter out the random errors.

ElSaadani, Mohamed; Krajewski, Witold; Seo, Bong Chul; Goska, Radoslaw

2013-04-01

90

Gap filling strategies and error in estimating annual soil respiration.  

PubMed

Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap filling of automated records to produce a complete time series. Although many gap filling methodologies have been employed, there is no standardized procedure for producing defensible estimates of annual Rsoil. Here, we test the reliability of nine different gap filling techniques by inserting artificial gaps into 20 automated Rsoil records and comparing gap filling Rsoil estimates of each technique to measured values. We show that although the most commonly used techniques do not, on average, produce large systematic biases, gap filling accuracy may be significantly improved through application of the most reliable methods. All methods performed best at lower gap fractions and had relatively high, systematic errors for simulated survey measurements. Overall, the most accurate technique estimated Rsoil based on the soil temperature dependence of Rsoil by assuming constant temperature sensitivity and linearly interpolating reference respiration (Rsoil at 10 °C) across gaps. The linear interpolation method was the second best-performing method. In contrast, estimating Rsoil based on a single annual Rsoil-Tsoil relationship, which is currently the most commonly used technique, was among the most poorly performing methods. Thus, our analysis demonstrates that gap filling accuracy may be improved substantially without sacrificing computational simplicity. Improved and standardized techniques for estimation of annual Rsoil will be valuable for understanding the role of Rsoil in the global C cycle. PMID:23504959
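The testing protocol (insert artificial gaps, fill them, score the filled values against the withheld truth) can be sketched on a synthetic record. One caveat: this toy record is generated from an exact temperature response, so the single fitted Rsoil-Tsoil relation wins here by construction, whereas on the real records above it performed poorly. The sketch illustrates the protocol, not the paper's ranking:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic 60-day hourly record: soil temperature with a diurnal cycle and
# slow drift; respiration follows an exact Q10 = 2 response plus noise.
hours = np.arange(24 * 60)
tsoil = 12.0 + 6.0 * np.sin(2.0 * np.pi * hours / 24.0) + 0.05 * hours / 24.0
rsoil = 1.5 * 2.0 ** ((tsoil - 10.0) / 10.0) + rng.normal(0.0, 0.05, hours.size)

gap = slice(700, 800)                      # artificial 100-hour gap
truth = rsoil[gap].copy()

# Method 1: linear interpolation between the points bounding the gap.
filled_lin = np.interp(hours[gap],
                       [hours[gap.start - 1], hours[gap.stop]],
                       [rsoil[gap.start - 1], rsoil[gap.stop]])

# Method 2: one Rsoil-Tsoil relation (log-linear, i.e. constant Q10) fitted
# to the rest of the record, then evaluated at the gap's temperatures.
mask = np.ones(hours.size, dtype=bool)
mask[gap] = False
b, a = np.polyfit(tsoil[mask], np.log(rsoil[mask]), 1)
filled_q10 = np.exp(a + b * tsoil[gap])

def rmse(est):
    return float(np.sqrt(np.mean((est - truth) ** 2)))

print(round(rmse(filled_lin), 3), round(rmse(filled_q10), 3))
```

Because the gap spans several diurnal cycles, the straight-line fill misses the temperature-driven oscillation entirely, which is why temperature-aware methods matter for long gaps.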

Gomez-Casanovas, Nuria; Anderson-Teixeira, Kristina; Zeri, Marcelo; Bernacchi, Carl J; DeLucia, Evan H

2013-06-01

91

Error estimation and structural shape optimization  

NASA Astrophysics Data System (ADS)

This work is concerned with three topics: error estimation, data smoothing process and the structural shape optimization design and analysis. In particular, the superconvergent stress recovery technique, the dual kriging B-spline curve and surface fittings, the development and the implementation of a novel node-based numerical shape optimization package are addressed. Concept and new technique of accurate stress recovery are developed and applied in finding the lateral buckling parameters of plate structures. Some useful conclusions are made for the finite element Reissner-Mindlin plate solutions. The powerful dual kriging B-spline fitting technique is reviewed and a set of new compact formulations are developed. This data smoothing method is then applied in accurately recovering curves and surfaces. The new node-based shape optimization method is based on the consideration that the critical stress and displacement constraints are generally located along or near the structural boundary. The method puts the maximum weights on the selected boundary nodes, referred to as the design points, so that the time-consuming sensitivity analysis is related to the perturbation of only these nodes. The method also allows large shape changes to achieve the optimal shape. The design variables are specified as the moving magnitudes for the prescribed design points that are always located at the structural boundary. Theories, implementations and applications are presented for various modules by which the package is constructed. Especially, techniques involving finite element error estimation, adaptive mesh generation, design sensitivity analysis, and data smoothing are emphasized.

Song, Xiaoguang

92

Improved arrayed-waveguide-grating layout avoiding systematic phase errors.  

PubMed

We present a detailed description of an improved arrayed-waveguide-grating (AWG) layout for both low and high diffraction orders. The novel layout uses identical bends across the entire array; in this way, the systematic phase errors arising from the differing bends inherent in conventional AWG designs are completely eliminated. In addition, for high-order AWGs our design results in more than 50% reduction of the occupied area on the wafer. We present an experimental characterization of a low-order device fabricated according to this geometry. The device has a resolution of 5.5 nm, low intrinsic losses (< 2 dB) in the wavelength region of interest for the application, and is polarization insensitive over a wide spectral range of 215 nm. PMID:21643130

Ismail, Nur; Sun, Fei; Sengo, Gabriel; Wörhoff, Kerstin; Driessen, Alfred; de Ridder, René M; Pollnau, Markus

2011-04-25

93

Systematic Review of the Balance Error Scoring System  

PubMed Central

Context: The Balance Error Scoring System (BESS) is commonly used by researchers and clinicians to evaluate balance. A growing number of studies are using the BESS as an outcome measure beyond the scope of its original purpose. Objective: To provide an objective systematic review of the reliability and validity of the BESS. Data Sources: PubMed and CINAHL were searched using the term Balance Error Scoring System from January 1999 through December 2010. Study Selection: Selection was based on establishment of the reliability and validity of the BESS. Research articles were selected if they established reliability or validity (criterion related or construct) of the BESS, were written in English, and used the BESS as an outcome measure. Abstracts were not considered. Results: Reliability of the total BESS score and individual stances ranged from poor to moderate to good, depending on the type of reliability assessed. The BESS has criterion-related validity with force plate measures; more difficult stances have higher agreement than do easier ones. The BESS is valid to detect balance deficits where large differences exist (concussion or fatigue). It may not be valid when differences are more subtle. Conclusions: Overall, the BESS has moderate to good reliability to assess static balance. Low levels of reliability have been reported by some authors. The BESS correlates with other measures of balance using testing devices. The BESS can detect balance deficits in participants with concussion and fatigue. BESS scores increase with age and with ankle instability and external ankle bracing. BESS scores improve after training.

Bell, David R.; Guskiewicz, Kevin M.; Clark, Micheal A.; Padua, Darin A.

2011-01-01

94

Target parameter and error estimation using magnetometry  

NASA Astrophysics Data System (ADS)

The problem of locating and identifying buried unexploded ordnance from magnetometry measurements is addressed within the context of maximum likelihood estimation. In this approach, the magnetostatic theory is used to develop data templates, which represent the modeled magnetic response of a buried ferrous object of arbitrary location, iron content, size, shape, and orientation. It is assumed that these objects are characterized both by a magnetic susceptibility representing their passive response to the earth's magnetic field and by a three-dimensional magnetization vector representing a permanent dipole magnetization. Analytical models were derived for four types of targets: spheres, spherical shells, ellipsoids, and ellipsoidal shells. The models can be used to quantify the Cramér-Rao (error) bounds on the parameter estimates. These bounds give the minimum variance in the estimated parameters as a function of measurement signal-to-noise ratio, spatial sampling, and target characteristics. For cases where analytic expressions for the Cramér-Rao bounds can be derived, these expressions prove quite useful in establishing optimal sampling strategies. Analytic expressions for various Cramér-Rao bounds have been developed for spherical- and spherical shell-type objects. A maximum likelihood estimation algorithm has been developed and tested on data acquired at the Magnetic Test Range at the Naval Explosive Ordnance Disposal Tech Center in Indian Head, Maryland. This algorithm estimates seven target parameters. These parameters are the three Cartesian coordinates (x, y, z) identifying the buried ordnance's location, the three Cartesian components of the permanent dipole magnetization vector, and the equivalent radius of the ordnance assuming it is a passive solid iron sphere.
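The Cramér-Rao bound discussed above can be illustrated with a toy one-parameter problem. The Gaussian-shaped anomaly profile below is a hypothetical stand-in for the magnetostatic templates; this is a sketch of the bound-versus-simulation comparison, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(-5.0, 5.0, 101)   # spatial sampling enters the bound via x
sigma = 0.1                        # measurement noise level (sets the SNR)
theta_true = 0.3                   # parameter to estimate (anomaly position)

def template(theta):
    """Hypothetical anomaly profile standing in for the magnetostatic model."""
    return np.exp(-0.5 * (x - theta) ** 2)

# Cramer-Rao bound for Gaussian noise:
#   var(theta_hat) >= sigma^2 / sum_i (d f_i / d theta)^2
df = (x - theta_true) * template(theta_true)   # analytic derivative
crb = sigma**2 / np.sum(df**2)

# Empirical variance of a grid-search maximum likelihood estimator.
grid = np.linspace(-1.0, 1.0, 2001)
models = np.array([template(t) for t in grid])
est = []
for _ in range(300):
    y = template(theta_true) + rng.normal(0.0, sigma, size=x.size)
    est.append(grid[np.argmin(np.sum((models - y) ** 2, axis=1))])
print(crb, np.var(est))   # the empirical variance should approach the bound
```

At high signal-to-noise ratio the maximum likelihood estimator is nearly efficient, so the simulated variance sits close to the bound, which is how such bounds guide sampling-strategy design.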

Norton, S. J.; Witten, A. J.; Won, I. J.; Taylor, D.

95

Using ridge regression in systematic pointing error corrections  

NASA Technical Reports Server (NTRS)

A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
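The contrast between ordinary least squares and ridge regression under multicollinearity can be sketched as follows; the synthetic, nearly collinear regressors stand in for the correlated pointing-model variables and are purely illustrative.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge regression: beta = (X'X + lam*I)^(-1) X'y.

    The bias introduced by lam > 0 stabilizes the estimate when the
    columns of X are nearly collinear; lam = 0 recovers least squares.
    """
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 1e-3 * rng.normal(size=n)   # nearly collinear regressor
X = np.column_stack([x1, x2])
y = X @ np.array([1.0, 1.0]) + 0.1 * rng.normal(size=n)

beta_ols = ridge_fit(X, y, 0.0)    # ordinary least squares: unstable split
beta_ridge = ridge_fit(X, y, 1.0)  # biased but stable estimate
print(beta_ols, beta_ridge)
```

The sum of the two coefficients (the well-determined direction) is recovered by both methods, but only ridge regression keeps the individual coefficients near their true values; least squares spreads the ill-determined component wildly between them.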

Guiar, C. N.

1988-01-01

96

Quantifying Error in the CMORPH Satellite Precipitation Estimates  

NASA Astrophysics Data System (ADS)

As part of the collaboration between the China Meteorological Administration (CMA) National Meteorological Information Centre (NMIC) and the NOAA Climate Prediction Center (CPC), a new system is being developed to construct hourly precipitation analysis on a 0.25° lat/lon grid over China by merging information derived from gauge observations and CMORPH satellite precipitation estimates. Foundational to the development of the gauge-satellite merging algorithm is the definition of the systematic and random error inherent in the CMORPH satellite precipitation estimates. In this study, we quantify the CMORPH error structures through comparisons against a gauge-based analysis of hourly precipitation derived from station reports from a dense network over China. First, the systematic error (bias) of the CMORPH satellite estimates is examined with co-located hourly gauge precipitation analysis over 0.25° lat/lon grid boxes with at least one reporting station. The CMORPH exhibits regionally varying biases, with overestimates over eastern China, and seasonal changes, with overestimates during warm seasons and underestimates during cold seasons. The CMORPH bias is also range-dependent: in general, the CMORPH tends to overestimate weak rainfall and underestimate strong rainfall. The bias, when expressed as the ratio between the gauge observations and the CMORPH satellite estimates, increases with rainfall intensity but tends to saturate at a certain level for high rainfall. Based on the above results, a prototype algorithm is developed to remove the CMORPH bias by matching the PDF of the original CMORPH estimates against that of the gauge analysis, using data pairs co-located over grid boxes with at least one reporting gauge over a 30-day period ending at the target date. The spatial domain for collecting the co-located data pairs is expanded so that at least 5000 pairs of data are available to ensure statistical reliability. 
The bias-corrected CMORPH is then compared against the gauge data to quantify the remaining random error. The results showed that the random error in the bias-corrected CMORPH is proportional to the smoothness of the target precipitation fields, expressed as the standard deviation of the CMORPH fields, and to the size of the spatial domain over which the data pairs to construct the PDF functions are collected. An empirical equation is then defined to compute the random error in the bias-corrected CMORPH from the CMORPH spatial standard deviation and the size of the data collection domain. An algorithm is being developed to combine the gauge analysis with the bias-corrected CMORPH through the optimal interpolation (OI) technique using the error statistics defined in this study. In this process, the bias-corrected CMORPH will be used as the first guess, while the gauge data will be utilized as observations to modify the first guess over regions with gauge network coverage. Detailed results will be reported at the conference.
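The PDF-matching bias correction described above amounts to quantile (CDF) matching. A minimal sketch, with synthetic gamma-distributed "gauge" data and an artificially biased "satellite" estimate standing in for the real analyses:

```python
import numpy as np

def pdf_match(satellite, gauge, values):
    """Map 'values' from the satellite distribution onto the gauge
    distribution by matching empirical quantiles (CDF matching)."""
    sat_sorted = np.sort(satellite)
    # Empirical quantile of each value within the satellite sample...
    q = np.searchsorted(sat_sorted, values, side="right") / sat_sorted.size
    # ...mapped to the same quantile of the gauge sample.
    return np.quantile(gauge, np.clip(q, 0.0, 1.0))

rng = np.random.default_rng(1)
gauge = rng.gamma(shape=2.0, scale=2.0, size=5000)   # "truth"
satellite = 0.5 * gauge + 1.0                        # biased estimates
corrected = pdf_match(satellite, gauge, satellite)
print(gauge.mean(), satellite.mean(), corrected.mean())
```

Because the mapping is built from matched quantiles rather than matched pairs, it removes intensity-dependent bias of the kind described above (overestimation of weak rainfall, underestimation of strong rainfall) without requiring point-by-point collocation in time.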

Xu, B.; Yoo, S.; Xie, P.

2010-12-01

97

Systematic errors in free energy perturbation calculations due to a finite sample of configuration space: Sample-size hysteresis  

SciTech Connect

Although the free energy perturbation procedure is exact when an infinite sample of configuration space is used, for finite sample size there is a systematic error resulting in hysteresis for forward and backward simulations. The qualitative behavior of this systematic error is first explored for a Gaussian distribution, then a first-order estimate of the error for any distribution is derived. To first order the error depends only on the fluctuations in the sample of potential energies, ΔE, and the sample size, n, but not on the magnitude of ΔE. The first-order estimate of the systematic sample-size error is used to compare the efficiencies of various computing strategies. It is found that slow-growth, free energy perturbation calculations will always have lower errors from this source than window-growth, free energy perturbation calculations for the same computing effort. The systematic sample-size errors can be entirely eliminated by going to thermodynamic integration rather than free energy perturbation calculations. When ΔE is a very smooth function of the coupling parameter, λ, thermodynamic integration with a relatively small number of windows is the recommended procedure because the time required for equilibration is reduced with a small number of windows. These results give a method of estimating this sample-size hysteresis during the course of a slow-growth, free energy perturbation run. This is important because in these calculations time-lag and sample-size errors can cancel, so that separate methods of estimating and correcting for each are needed. When dynamically modified window procedures are used, it is recommended that the estimated sample-size error be kept constant, not that the magnitude of ΔE be kept constant. Tests on two systems showed a rather small sample-size hysteresis in slow-growth calculations except in the first stages of creating a particle, where both fluctuations and sample-size hysteresis are large.
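The finite-sample systematic error is easy to reproduce for the Gaussian case discussed above. The sketch below (with β = 1 and illustrative sample sizes, in kT units) shows the forward free energy perturbation estimate converging to the exact Gaussian result as the sample grows:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, mu, sigma = 1.0, 0.0, 1.0
# Exact result for Gaussian-distributed dE: dA = mu - beta*sigma^2/2.
exact = mu - beta * sigma**2 / 2

def fep_estimate(n, trials=2000):
    """Average forward FEP estimate -ln<exp(-beta*dE)>/beta over many
    independent finite samples of size n."""
    d_e = rng.normal(mu, sigma, size=(trials, n))
    return np.mean(-np.log(np.mean(np.exp(-beta * d_e), axis=1)) / beta)

# Systematic (finite-sample) error shrinks roughly as 1/n.
bias = {n: fep_estimate(n) - exact for n in (10, 100, 1000)}
for n, b in bias.items():
    print(n, b)
```

By Jensen's inequality the forward estimate is biased high for finite n, and the backward run is biased in the opposite direction, which is the sample-size hysteresis the abstract describes.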

Wood, R.H.; Muehlbauer, W.C.F. (Univ. of Delaware, Newark (United States)); Thompson, P.T. (Swarthmore Coll., PA (United States))

1991-08-22

98

A posteriori pointwise error estimates for the boundary element method  

SciTech Connect

This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

Paulino, G.H. [Cornell Univ., Ithaca, NY (United States). School of Civil and Environmental Engineering; Gray, L.J. [Oak Ridge National Lab., TN (United States); Zarikian, V. [Univ. of Central Florida, Orlando, FL (United States). Dept. of Mathematics

1995-01-01

99

Systematic Biases in Human Heading Estimation  

PubMed Central

Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.

Cuturi, Luigi F.; MacNeilage, Paul R.

2013-01-01

100

Systematic Errors in GNSS Radio Occultation Data - Part 2  

NASA Astrophysics Data System (ADS)

The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites indeed show remarkable consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since it does not need additional background information in its retrieval. Concurrent trends in water vapor could, however, appear as spurious trends in dry temperature. We analyzed this effect and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights at which specified values of the difference between dry and physical temperature are encountered increase by about 150 m per decade, with little difference among the 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants relating atmospheric microwave refractivity to temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are no longer negligible. 
We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows computation of sensitivities to changes in atmospheric composition; we found that the effect of the CO2 increase is currently almost exactly balanced by the counteracting effect of the concurrent O2 decrease.
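The dry-temperature concept in point (1) can be illustrated with the classic two-term refractivity formula; the constants and the UTLS conditions below are commonly quoted textbook values chosen for illustration (refining these constants is precisely the subject of point 2):

```python
# Classic two-term microwave refractivity: N = k1*P/T + k2*e/T^2,
# with pressure P and water vapor partial pressure e in hPa, T in K.
K1 = 77.6      # K / hPa   (commonly quoted value)
K2 = 3.73e5    # K^2 / hPa (commonly quoted value)

def refractivity(p_hpa, t_k, e_hpa):
    return K1 * p_hpa / t_k + K2 * e_hpa / t_k**2

def dry_temperature(n, p_hpa):
    """Temperature retrieved from refractivity assuming zero water vapor."""
    return K1 * p_hpa / n

p, t, e = 300.0, 240.0, 0.05   # hypothetical UTLS conditions
n = refractivity(p, t, e)
t_dry = dry_temperature(n, p)
print(t, t_dry, t - t_dry)     # moisture makes t_dry colder than t
```

Even this small water vapor amount lowers the dry temperature below the physical temperature by a fraction of a kelvin, so a trend in moisture would appear as a spurious trend in dry temperature, as the abstract argues.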

Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc

2014-05-01

101

Non-Iterative Method for Modeling Systematic Data Errors in Power System Risk Assessment  

Microsoft Academic Search

This paper provides a new framework for modeling uncertainty in the input data for power system risk calculations, and the error bars that this places on the results. Differently from previous work, systematic error in unit availability probabilities is considered as well as random error, and a closed-form expression is supplied for the error bars on the results. This

Chris J. Dent; Janusz. W. Bialek

2011-01-01

102

Non-iterative method for modelling systematic data errors in power system risk assessment  

Microsoft Academic Search

Summary form only given. This paper provides a new framework for modelling uncertainty in the input data for power system risk calculations, and the error bars that this places on the results. Differently from previous work, systematic error in unit availability probabilities is considered as well as random error, and a closed-form expression is supplied for the error bars on

C. Dent; J. Bialek

2010-01-01

103

The Effects of Computational Modeling Errors on the Estimation of Statistical Mechanical Variables  

PubMed Central

Computational models used in the estimation of thermodynamic quantities of large chemical systems often require approximate energy models that rely on parameterization and cancellation of errors to yield agreement with experimental measurements. In this work, we show how energy function errors propagate when computing statistical mechanics-derived thermodynamic quantities. Assuming that each microstate included in a statistical ensemble has a measurable amount of error in its calculated energy, we derive low-order expressions for the propagation of these errors in free energy, average energy, and entropy. Through gedanken experiments we show the expected behavior of these error propagation formulas on hypothetical energy surfaces. For very large microstate energy errors, these low-order formulas disagree with estimates from Monte Carlo simulations of error propagation. Hence, such simulations of error propagation may be required when using poor potential energy functions. Propagated systematic errors predicted by these methods can be removed from computed quantities, while propagated random errors yield uncertainty estimates. Importantly, we find that end-point free energy methods maximize random errors and that local sampling of potential energy wells decreases random error significantly. Hence, end-point methods should be avoided in energy computations and should be replaced by methods that incorporate local sampling. The techniques described herein will be used in future work involving the calculation of free energies of biomolecular processes, where error corrections are expected to yield improved agreement with experiment.

Faver, John C.; Yang, Wei; Merz, Kenneth M.

2012-01-01

104

The Effects of Computational Modeling Errors on the Estimation of Statistical Mechanical Variables.  

PubMed

Computational models used in the estimation of thermodynamic quantities of large chemical systems often require approximate energy models that rely on parameterization and cancellation of errors to yield agreement with experimental measurements. In this work, we show how energy function errors propagate when computing statistical mechanics-derived thermodynamic quantities. Assuming that each microstate included in a statistical ensemble has a measurable amount of error in its calculated energy, we derive low-order expressions for the propagation of these errors in free energy, average energy, and entropy. Through gedanken experiments we show the expected behavior of these error propagation formulas on hypothetical energy surfaces. For very large microstate energy errors, these low-order formulas disagree with estimates from Monte Carlo simulations of error propagation. Hence, such simulations of error propagation may be required when using poor potential energy functions. Propagated systematic errors predicted by these methods can be removed from computed quantities, while propagated random errors yield uncertainty estimates. Importantly, we find that end-point free energy methods maximize random errors and that local sampling of potential energy wells decreases random error significantly. Hence, end-point methods should be avoided in energy computations and should be replaced by methods that incorporate local sampling. The techniques described herein will be used in future work involving the calculation of free energies of biomolecular processes, where error corrections are expected to yield improved agreement with experiment. PMID:23413365
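The low-order propagation of microstate energy errors into the free energy can be sketched as follows. To first order, perturbing each microstate energy by ε_i shifts F = -kT ln Σ exp(-βE_i) by the Boltzmann-weighted mean of the errors; the uniform energy distribution and error statistics below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
beta = 1.0
energies = rng.uniform(0.0, 5.0, size=1000)  # hypothetical microstate energies

def free_energy(e):
    """F = -kT ln sum_i exp(-beta * E_i), with kT = 1."""
    return -np.log(np.sum(np.exp(-beta * e))) / beta

# Boltzmann weights of the unperturbed ensemble.
w = np.exp(-beta * energies)
w /= w.sum()

# Per-microstate energy errors: systematic offset plus random scatter.
eps = rng.normal(0.1, 0.05, size=energies.size)
exact_shift = free_energy(energies + eps) - free_energy(energies)
first_order = np.sum(w * eps)   # low-order propagation formula
print(exact_shift, first_order)
```

The close agreement shows why a known systematic component of the energy error can simply be subtracted from the computed free energy, while the random component propagates into an uncertainty estimate.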

Faver, John C; Yang, Wei; Merz, Kenneth M

2012-10-01

105

Systematic lossy forward error protection for error-resilient digital video broadcasting -a wyner-ziv coding approach  

Microsoft Academic Search

We present a practical scheme for error-resilient digital video broadcasting, using the Wyner-Ziv coding paradigm. We apply the general framework of systematic lossy source-channel coding to generate a supplementary bitstream that can correct transmission errors in the decoded video waveform up to a certain residual distortion. The systematic portion consists of a conventional MPEG-2 bitstream, which is

Shantanu Rane; Anne Aaron; Bernd Girod

2004-01-01

106

Estimating Human Error Rates and Their Effects on System Reliability.  

National Technical Information Service (NTIS)

Human errors often are the major factor in failures of large systems. In the U.S. Nuclear Regulatory Commission's reactor safety study (WASH-1400), the estimated contribution of human errors to unavailability of engineered safety features in nuclear power...

A. D. Swain

1977-01-01

107

Systematic vertical error in UAV-derived topographic models: Origins and solutions  

NASA Astrophysics Data System (ADS)

Unmanned aerial vehicles (UAVs) equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs). However, although such DEMs may achieve centimetric detail, they can also display broad-scale systematic deformation (usually a vertical 'doming') that restricts their wider use. This effect can be particularly apparent in DEMs derived by structure-from-motion (SfM) processing, especially when control point data have not been incorporated in the bundle adjustment process. We illustrate that doming error results from a combination of inaccurate description of radial lens distortion and the use of imagery captured in near-parallel viewing directions. With such imagery, enabling camera self-calibration within the processing inherently leads to erroneous radial distortion values and associated DEM error. Using a simulation approach, we illustrate how existing understanding of systematic DEM error in stereo-pairs (from unaccounted radial distortion) up-scales in typical multiple-image blocks of UAV surveys. For image sets with dominantly parallel viewing directions, self-calibrating bundle adjustment (as normally used with images taken using consumer cameras) will not be able to derive radial lens distortion accurately, and will give associated systematic 'doming' DEM deformation. In the presence of image measurement noise (at levels characteristic of SfM software), and in the absence of control measurements, our simulations display domed deformation with amplitude of ~2 m over horizontal distances of ~100 m. We illustrate the sensitivity of this effect to variations in camera angle and flight height. Deformation will be reduced if suitable control points can be included within the bundle adjustment, but residual systematic vertical error may remain, accommodated by the estimated precision of the control measurements. 
Doming bias can be minimised by the inclusion of inclined images within the image set, for example, images collected during gently banked turns of a fixed-wing UAV or, if camera inclination can be altered, by just a few more oblique images with a rotor-based UAV. We provide practical flight plan solutions that, in the absence of control points, demonstrate a reduction in systematic DEM error by more than two orders of magnitude. DEM generation is subject to this effect whether a traditional photogrammetry or newer structure-from-motion (SfM) processing approach is used, but errors will be typically more pronounced in SfM-based DEMs, for which use of control measurements is often more limited. Although focussed on UAV surveying, our results are also relevant to ground-based image capture for SfM-based modelling.

James, Mike R.; Robson, Stuart

2014-05-01

108

CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes  

NASA Technical Reports Server (NTRS)

Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43 ± 0.35 PgC/yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

2012-01-01

109

Estimating IMU heading error from SAR images.  

SciTech Connect

Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

Doerry, Armin Walter

2009-03-01

110

Iraq War mortality estimates: A systematic review  

PubMed Central

Background In March 2003, the United States invaded Iraq. The subsequent number, rates, and causes of mortality in Iraq resulting from the war remain unclear, despite intense international attention. Understanding mortality estimates from modern warfare, where the majority of casualties are civilian, is of critical importance for public health and protection afforded under international humanitarian law. We aimed to review the studies, reports and counts on Iraqi deaths since the start of the war and assessed their methodological quality and results. Methods We performed a systematic search of 15 electronic databases from inception to January 2008. In addition, we conducted a non-structured search of 3 other databases, reviewed study reference lists and contacted subject matter experts. We included studies that provided estimates of Iraqi deaths based on primary research over a reported period of time since the invasion. We excluded studies that summarized mortality estimates and combined non-fatal injuries and also studies of specific sub-populations, e.g. under-5 mortality. We calculated crude and cause-specific mortality rates attributable to violence and average deaths per day for each study, where not already provided. Results Thirteen studies met the eligibility criteria. The studies used a wide range of methodologies, varying from sentinel-data collection to population-based surveys. Studies assessed as the highest quality, those using population-based methods, yielded the highest estimates. Average deaths per day ranged from 48 to 759. The cause-specific mortality rates attributable to violence ranged from 0.64 to 10.25 per 1,000 per year. Conclusion Our review indicates that, despite varying estimates, the mortality burden of the war and its sequelae on Iraq is large. The use of established epidemiological methods is rare. 
This review illustrates the pressing need to promote sound epidemiologic approaches to determining mortality estimates and to establish guidelines for policy-makers, the media and the public on how to interpret these estimates.

Tapp, Christine; Burkle, Frederick M; Wilson, Kumanan; Takaro, Tim; Guyatt, Gordon H; Amad, Hani; Mills, Edward J

2008-01-01

111

Estimation of Nonlinear Errors-in-Variables Models  

Microsoft Academic Search

An estimation procedure is presented for the coefficients of the nonlinear functional relation, where observations are subject to measurement error. The distributional properties of the estimators are derived, and a consistent estimator of the covariance matrix is given. In deriving the results it is assumed that the covariance matrix of the observational errors is known and that this covariance matrix

Kirk M. Wolter; Wayne A. Fuller

1982-01-01

112

Nonlinear errors in variables Estimation of some Engel curves  

Microsoft Academic Search

The most common solution to the errors in variables problem for the linear regression model is the use of instrumental variable estimation. However, this methodology cannot be applied in the nonlinear regression framework. In this paper we develop consistent estimators for nonlinear regression specifications when errors in variables are present. We apply our methodology to estimation of Engel curves on

J. A. Hausman; W. K. Newey; J. L. Powell

1995-01-01

113

Estimating Errors in Least-Squares Fitting  

Microsoft Academic Search

While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random
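The statistical errors discussed above follow from the parameter covariance matrix of the fit, cov(β) ≈ σ²(JᵀJ)⁻¹, and propagate to the fitted function through the Jacobian. A minimal sketch using NumPy's polyfit on synthetic straight-line data:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y = 2.0 * x + 1.0 + rng.normal(0.0, sigma, size=x.size)

# Fit a line and recover the parameter covariance matrix
# (scaled internally by the residual variance).
coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
slope_err, intercept_err = np.sqrt(np.diag(cov))
print(coeffs, slope_err, intercept_err)

# Error in the fitted function at a point x0 follows by propagation:
# var(y0) = J cov J^T with J = [x0, 1] for a straight line.
x0 = 5.0
J = np.array([x0, 1.0])
y0_err = np.sqrt(J @ cov @ J)
print(y0_err)
```

Note that the off-diagonal covariance term matters: the uncertainty of the fitted function at x0 is generally smaller than what the two parameter errors alone would suggest, because slope and intercept errors are anticorrelated.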

P. H. Richter

1995-01-01

114

Improving SMOS retrieved salinity: characterization of systematic errors in reconstructed and modelled brightness temperature images  

NASA Astrophysics Data System (ADS)

The Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument onboard the Soil Moisture and Ocean Salinity (SMOS) mission was launched on November 2nd, 2009 with the aim of providing, over the oceans, synoptic sea surface salinity (SSS) measurements with spatial and temporal coverage adequate for large-scale oceanographic studies. For each single satellite overpass, SSS is retrieved after collecting, at fixed ground locations, a series of brightness temperatures from successive scenes corresponding to various geometrical and polarization conditions. SSS is inverted through minimization of the difference between reconstructed and modeled brightness temperatures. To meet the challenging mission requirements, retrieved SSS needs to achieve an accuracy of 0.1 psu after averaging over a 10- or 30-day period and 2°x2° or 1°x1° spatial boxes, respectively. It is expected that, at such scales, the high radiometric noise can be reduced to a level such that remaining errors and inconsistencies in the retrieved salinity fields can essentially be related to (1) systematic brightness temperature errors in the antenna reference frame, (2) systematic errors in the Geophysical Model Function (GMF), used to model the observations and retrieve salinity, for specific environmental conditions and/or particular auxiliary parameter values, and (3) errors in the auxiliary datasets used as input to the GMF. The present communication primarily aims at addressing point 1 above, and possibly point 2, for the whole polarimetric information, i.e., issued from both co-polar and cross-polar measurements. Several factors may potentially produce systematic errors in the antenna reference frame: the unavoidable fact that all antennas are not perfectly identical, the imperfect characterization of the instrument response, e.g. 
antenna patterns, the account taken of receiver temperatures in the reconstruction, calibration using flat sky scenes, and the implementation of ripple reduction algorithms at sharp boundaries such as the Sky-Earth boundary. Data acquired over the ocean rather than over land are preferred for characterizing such errors because the variability of the emissivity sensed over the oceanic domain is an order of magnitude smaller than over land. Nevertheless, characterizing such errors over the ocean is not a trivial task. Even if the natural variability is small, it is larger than the errors to be characterized, and the characterization strategy must account for it; otherwise the estimated patterns will vary significantly with the selected dataset. The communication will present results on a systematic error characterization methodology that yields stable error pattern estimates. Particular focus will be given to the critical data selection strategy and to the analysis of the X- and Y-pol patterns obtained over a wide range of SMOS subdatasets. The impact of some image reconstruction options will be evaluated. It will be shown that the methodology is also an interesting tool for diagnosing specific error sources. The criticality of an accurate description of Faraday rotation effects will be evidenced, and the latest results on the possibility of inferring such information from the full Stokes vector will be presented.

Gourrion, J.; Guimbard, S.; Sabia, R.; Portabella, M.; Gonzalez, V.; Turiel, A.; Ballabrera, J.; Gabarro, C.; Perez, F.; Martinez, J.

2012-04-01

115

Amplitude and gain error influence on time error estimation algorithm for time interleaved A/D converter system  

Microsoft Academic Search

A method for blind estimation of static time errors in time-interleaved A/D converters is investigated. The method assumes that amplitude and gain errors are removed before the time error estimation. Even if the amplitude and gain errors are estimated and removed, small errors will remain. In this paper, we investigate how the amplitude and gain errors influence

J. Elbornsson; F. Gustafsson; J.-E. Eklund

2002-01-01

116

Drug Administration Errors in Hospital Inpatients: A Systematic Review  

PubMed Central

Context Drug administration in the hospital setting is the last barrier before a possible error reaches the patient. Objectives We aimed to analyze the prevalence and nature of administration error rate detected by the observation method. Data Sources Embase, MEDLINE, Cochrane Library from 1966 to December 2011 and reference lists of included studies. Study Selection Observational studies, cross-sectional studies, before-and-after studies, and randomized controlled trials that measured the rate of administration errors in inpatients were included. Data Extraction Two reviewers (senior pharmacists) independently identified studies for inclusion. One reviewer extracted the data; the second reviewer checked the data. The main outcome was the error rate calculated as being the number of errors without wrong time errors divided by the Total Opportunity for Errors (TOE, sum of the total number of doses ordered plus the unordered doses given), and multiplied by 100. For studies that reported it, clinical impact was reclassified into four categories from fatal to minor or no impact. Due to a large heterogeneity, results were expressed as median values (interquartile range, IQR), according to their study design. Results Among 2088 studies, a total of 52 reported TOE. Most of the studies were cross-sectional studies (N=46). The median error rate without wrong time errors for the cross-sectional studies using TOE was 10.5% [IQR: 7.3%-21.7%]. No fatal error was observed and most errors were classified as minor in the 18 studies in which clinical impact was analyzed. We did not find any evidence of publication bias. Conclusions Administration errors are frequent among inpatients. The median error rate without wrong time errors for the cross-sectional studies using TOE was about 10%. A standardization of administration error rate using the same denominator (TOE), numerator and types of errors is essential for further publications.
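The TOE-based error rate described above is simple arithmetic. A minimal sketch, with purely illustrative counts (not taken from any study in the review):

```python
# Hypothetical counts for one observational study (illustrative numbers only).
doses_ordered = 1200          # total number of doses ordered
unordered_doses_given = 25    # doses given without an order
errors_excl_wrong_time = 130  # observed errors, wrong-time errors excluded

# Total Opportunity for Errors (TOE) = doses ordered + unordered doses given.
toe = doses_ordered + unordered_doses_given

# Error rate (%) = errors without wrong-time errors / TOE * 100.
error_rate = errors_excl_wrong_time / toe * 100
print(f"TOE = {toe}, administration error rate = {error_rate:.1f}%")
```

Standardizing on this denominator, as the authors recommend, is what makes rates comparable across studies.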

Berdot, Sarah; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre; Sabatier, Brigitte

2013-01-01

117

Reduction of systematic errors by empirical model correction: impact on seasonal prediction skill  

Microsoft Academic Search

Recent studies indicate that the atmospheric response to anomalies in the lower boundary conditions, e.g. sea surface temperatures, is strongly dependent on the atmospheric background flow. Since all general circulation models have long-term systematic errors it is therefore possible that the skill in seasonal prediction is improved by reducing the systematic errors of the model. In this study sensitivity experiments

A. Guldberg; E. Kaas; M. Déqué; S. Yang; S. Vester Thorsen

2005-01-01

118

Systematic error analysis and modeling of MEMS close-loop capacitive accelerometer  

Microsoft Academic Search

This paper proposes a systematic error model for two types of structures of MEMS close-loop capacitive accelerometers, targeted for use in a micro inertial measurement unit (MIMU). The proposed model for the accelerometer's systematic errors includes the common physical parameters used to rate an accelerometer: bias, scale factor, cross-axis sensitivity and misalignment. Dynamic analysis based on the basic structure of accelerometer

Gang Dai; Hongming Yu; Wei Su; Beibei Shao; Mei Li

2010-01-01

119

Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities  

SciTech Connect

Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each of HRA's methods has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or improper application of techniques, can produce invalid HEP estimates; such erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

Auflick, Jack L.

1999-04-21

120

A posteriori compensation of the systematic error due to polynomial interpolation in digital image correlation  

NASA Astrophysics Data System (ADS)

It is well known that displacement components estimated using digital image correlation are affected by a systematic error due to the polynomial interpolation required by the numerical algorithm. The magnitude of bias depends on the characteristics of the speckle pattern (i.e., the frequency content of the image), on the fractional part of displacements and on the type of polynomial used for intensity interpolation. In the literature, B-Spline polynomials are pointed out as introducing the smallest errors, whereas bilinear and cubic interpolants generally give the worst results. However, the small bias of B-Spline polynomials is partially counterbalanced by a somewhat larger execution time. We will try to improve the accuracy of lower order polynomials by a posteriori correcting their results so as to obtain a faster and more accurate analysis.
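The interpolation bias can be illustrated with a 1-D toy version of the matching problem (this is not the authors' method, just a sketch): a band-limited signal is shifted by a fractional-pixel amount, and the shift is re-estimated by minimizing the sum of squared differences against a linearly interpolated template. All signal parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic band-limited 1-D "speckle" signal built from random sinusoids.
freqs = rng.uniform(0.02, 0.2, 8)
phases = rng.uniform(0.0, 2.0 * np.pi, 8)
def signal(t):
    return sum(np.sin(2.0 * np.pi * f * t + p) for f, p in zip(freqs, phases))

x = np.arange(200, dtype=float)
true_shift = 0.3
g = signal(x)               # reference image
h = signal(x - true_shift)  # deformed image, sampled exactly

# Match by minimising SSD between h and a *linearly interpolated* shifted
# copy of g; the interpolation error produces a small systematic bias.
candidates = np.linspace(0.0, 1.0, 2001)
ssd = [np.sum((np.interp(x - s, x, g)[10:-10] - h[10:-10]) ** 2)
       for s in candidates]
est = candidates[int(np.argmin(ssd))]
print(f"true shift {true_shift}, estimated {est:.4f}, bias {est - true_shift:+.4f}")
```

Running the same experiment over a sweep of fractional shifts reproduces the characteristic S-shaped bias curve that higher-order (e.g. B-Spline) interpolants flatten.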

Baldi, Antonio; Bertolino, Filippo

2013-10-01

121

Estimating Standard Errors for School PAAC's in Generalizability Theory.  

ERIC Educational Resources Information Center

School test performance is commonly summarized in terms of the percentage of students at or above a cut score (PAAC) that has been set on a test. Two approaches to estimating the standard errors for school PAACs were examined in this study: conditional standard errors and overall standard errors. The tests used were English language arts and…

Lee, Guemin; Fitzpatrick, Anne R.; Ito, Kyoko

122

Strong Consistency of Error Probability Estimates in NN Discrimination.  

National Technical Information Service (NTIS)

In 1971, T. J. Wagner introduced an error probability estimate for the NN (Nearest Neighbor) discrimination rule and proved the weak convergence of this estimate. The problem of strong consistency is not yet solved. This paper shows the exponential convergen...

Z. D. Bai

1984-01-01

123

An analysis of the least-squares problem for the DSN systematic pointing error model  

NASA Technical Reports Server (NTRS)

A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on Voyager 2 measurement distribution.
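The rank-degeneracy issue described above can be sketched in a few lines: detect the numerical rank from the singular value spectrum of the design matrix, then pick a well-conditioned column subset before solving. The matrix below is invented (a near-duplicate column mimics inadequate sky coverage), and the greedy routine is a simple stand-in for rank-revealing pivoted QR, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical design matrix for an 8-parameter pointing model; limited
# sky coverage is mimicked by making two columns almost linearly dependent.
A = rng.normal(size=(100, 8))
A[:, 7] = A[:, 6] + 1e-10 * rng.normal(size=100)
b = A @ np.ones(8) + 0.01 * rng.normal(size=100)

# Detect rank degeneracy from the singular value spectrum.
s = np.linalg.svd(A, compute_uv=False)
rank = int((s > s[0] * 1e-8).sum())

def select_columns(A, k):
    """Greedy column selection: repeatedly keep the column with the largest
    norm after projecting out the directions already chosen."""
    R = A.copy()
    chosen = []
    for _ in range(k):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        chosen.append(j)
        q = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(q, q @ R)   # project the chosen direction out
    return sorted(chosen)

keep = select_columns(A, rank)
x_sub, *_ = np.linalg.lstsq(A[:, keep], b, rcond=None)
print("numerical rank:", rank, "kept columns:", keep)
```

Only one of the two dependent columns survives the selection, so the reduced least squares problem is well conditioned.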

Alvarez, L. S.

1991-01-01

124

Deconvolution Estimation in Measurement Error Models: The R Package decon  

PubMed Central

Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples.
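A bare-bones sketch of the deconvolution kernel idea (in Python rather than R, and without the package's FFT speedups or bandwidth selectors): divide the kernel's characteristic function by that of the known error distribution before inverting. All sample sizes, bandwidths and distributions are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.normal(0.0, 1.0, n)            # latent variable of interest
sigma_e = 0.4
W = X + rng.normal(0.0, sigma_e, n)    # observations with measurement error

h = 0.45  # bandwidth, chosen by hand for this sketch

# Kernel with compactly supported characteristic function
# phi_K(t) = (1 - t^2)^3 on [-1, 1]; the N(0, sigma_e^2) error has
# phi_eps(t) = exp(-sigma_e^2 t^2 / 2).
t = np.linspace(-1.0, 1.0, 401)
dt = t[1] - t[0]
ratio = (1.0 - t**2) ** 3 / np.exp(-0.5 * (sigma_e * t / h) ** 2)

def fhat(x):
    # Deconvoluting kernel K*(u) = (1/2pi) Int e^{-itu} phi_K(t)/phi_eps(t/h) dt
    u = (x - W) / h
    Kstar = (np.cos(np.outer(u, t)) * ratio).sum(axis=1) * dt / (2.0 * np.pi)
    return Kstar.sum() / (n * h)

for x0 in (-1.0, 0.0, 1.0):
    print(x0, round(fhat(x0), 3))  # estimate of the latent density at x0
```

The compact support of phi_K keeps the division by phi_eps stable; with a Gaussian kernel the ratio would blow up in the tails.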

Wang, Xiao-Feng; Wang, Bin

2011-01-01

125

Semiclassical Dynamics with Exponentially Small Error Estimates  

NASA Astrophysics Data System (ADS)

We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and |t| ≤ T, these solutions agree with exact solutions up to errors whose norms are bounded by C exp(−γ/ħ) for some C and γ > 0. Under more restrictive hypotheses, we prove that for sufficiently small T', |t| ≤ T' |log(ħ)| implies the norms of the errors are bounded by C' exp(−γ'/ħ^σ) for some C', γ' > 0, and σ > 0.

Hagedorn, George A.; Joye, Alain

126

A posteriori error estimation for generalized finite element methods  

Microsoft Academic Search

In this paper we address the problem of a posteriori error estimation for generalized finite element methods based on the partition of unity method. The computational results focus on the question of the reliability of the error estimators and its assessment.

Theofanis Strouboulis; Lin Zhang; Delin Wang; Ivo Babuška

2006-01-01

127

A posteriori error estimation in finite element analysis  

Microsoft Academic Search

This monograph presents a summary account of the subject of a posteriori error estimation for finite element approximations of problems in mechanics. The study primarily focuses on methods for linear elliptic boundary value problems. However, error estimation for unsymmetrical systems, nonlinear problems, including the Navier-Stokes equations, and indefinite problems, such as that represented by the Stokes problem, is included. The main

Mark Ainsworth; J. Tinsley Oden

1997-01-01

128

Error Estimation for Reduced-Order Models of Dynamical Systems  

Microsoft Academic Search

The use of reduced-order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors by a combination
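For the POD part of the story, a minimal self-contained sketch (synthetic snapshot data, not the paper's combined estimation machinery): build the POD basis from an SVD of the snapshot matrix and compare the actual projection error with the energy in the discarded modes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic snapshot matrix with rapidly decaying singular values,
# mimicking snapshots of a dissipative dynamical system.
U, _ = np.linalg.qr(rng.normal(size=(200, 50)))
V, _ = np.linalg.qr(rng.normal(size=(50, 50)))
s = 2.0 ** -np.arange(50)
A = (U * s) @ V.T

# POD basis = leading left singular vectors of the snapshot matrix.
Uf, sf, _ = np.linalg.svd(A, full_matrices=False)
r = 8
Ur = Uf[:, :r]

# Project onto the r-dimensional POD subspace and reconstruct.
A_r = Ur @ (Ur.T @ A)
err = np.linalg.norm(A - A_r)          # actual reconstruction error
bound = np.sqrt(np.sum(sf[r:] ** 2))   # energy in the discarded modes
print(f"error {err:.3e}, discarded-mode energy {bound:.3e}")
```

By the Eckart-Young theorem the two quantities coincide for the snapshot set itself; estimating the error of the reduced *dynamics* (the paper's subject) requires more than this.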

Chris Homescu; Linda R. Petzold; Radu Serban

2007-01-01

129

A posteriori error estimates for mixed FEM in elasticity  

Microsoft Academic Search

A residue-based reliable and efficient error estimator is established for finite element solutions of mixed boundary value problems in linear, planar elasticity. The proof of the reliability of the estimator is based on Helmholtz-type decompositions of the error in the stress variable and a duality argument for the error in the displacements. The efficiency follows from inverse

Carsten Carstensen; Georg Dolzmann

1998-01-01

130

A Review of Error Estimation in Adaptive Quadrature  

Microsoft Academic Search

The most critical component of any adaptive numerical quadrature routine is the estimation of the integration error. Since the publication of the first algorithms in the 1960s, many error estimation schemes have been presented, evaluated and discussed. This paper presents a review of existing error estimation techniques and discusses their differences and their common features. Some common shortcomings of these
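The classic scheme reviewed in this literature estimates the local error from the difference between a coarse and a refined rule. A minimal adaptive Simpson sketch (one of many variants, not any specific routine from the review):

```python
import math

def adaptive_simpson(f, a, b, tol=1e-8):
    """Adaptive Simpson quadrature; the local error is estimated from the
    difference between one Simpson panel and its two half-panels."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, m, b, fa, fm, fb, whole, tol):
        lm, rm = (a + m) / 2.0, (m + b) / 2.0
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        err = (left + right - whole) / 15.0   # classic error estimate
        if abs(err) <= tol:
            return left + right + err          # error-corrected result
        return (recurse(a, lm, m, fa, flm, fm, left, tol / 2) +
                recurse(m, rm, b, fm, frm, fb, right, tol / 2))

    m = (a + b) / 2.0
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, m, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)

print(adaptive_simpson(math.sin, 0.0, math.pi))
```

The divisor 15 comes from Richardson extrapolation for a fourth-order rule; the review's point is that such estimates can fail on functions the rule integrates fortuitously well.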

Pedro Gonnet

2010-01-01

131

Estimating errors in least-squares fitting  

NASA Technical Reports Server (NTRS)

While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
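For the linear case, the standard machinery can be shown in a few lines: the parameter covariance is the residual variance times the inverse normal matrix, and the standard error of the fitted function at a point follows by propagating that covariance. Data and noise level below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 50)
sigma = 0.5
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma, x.size)   # linear model + noise

# Design matrix for y = a + b x; least squares and parameter covariance.
A = np.vander(x, 2, increasing=True)                  # columns: [1, x]
coef, res, _, _ = np.linalg.lstsq(A, y, rcond=None)
dof = x.size - 2
s2 = res[0] / dof                                     # residual variance
cov = s2 * np.linalg.inv(A.T @ A)                     # parameter covariance
se = np.sqrt(np.diag(cov))                            # parameter standard errors

# Standard error of the fitted function at x0: sqrt(v^T cov v), v = [1, x0].
x0 = 5.0
v = np.array([1.0, x0])
se_fit = np.sqrt(v @ cov @ v)
print("coefficients", coef, "SEs", se, "SE of fit at x0:", se_fit)
```

The same propagation formula, with the Jacobian in place of v, gives the standard error of a nonlinear fit near the solution.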

Richter, P. H.

1995-01-01

132

Error Estimates for GRACE Data Assimilation Into Ocean Models  

NASA Astrophysics Data System (ADS)

Using GRACE data to constrain ocean general circulation models requires quantitative estimates of the errors in GRACE estimates of ocean bottom pressure. We attempt a spatial mapping of these errors by comparing several GRACE data products and bottom pressure simulations from an ocean model. Uncertainties in the spatial mean and in the regional mass anomalies about that mean are considered separately. The resultant error maps, when zonally-averaged, are comparable to the calibrated errors provided by the GRACE processing centers. Noticeable differences are the larger errors at high latitudes and near continental regions with high amplitude hydrological seasonal cycles. The error estimates also depend on which de-aliasing background model is used when processing the GRACE data. Implications for ocean modeling and data assimilation are discussed.

Quinn, K. J.; Ponte, R. M.

2007-12-01

133

Soft Error Hardened Latch and Its Estimation Method  

NASA Astrophysics Data System (ADS)

We propose soft-error-robust latches which have multiple storage nodes and present their efficiencies. The key technology of the latch is a feedback loop circuit with a data node and four gates. We also discuss a method of soft error estimation in robust circuits in this paper. The soft error immunity of this feedback loop circuit is estimated by circuit simulations with two models. The soft error immunity of the latch is estimated more accurately by device simulation. These precise simulations prove the latch to be highly tolerant to soft errors. In addition, the latch protects against not only retention data upset but also transient noise release. The latch provides high immunity against all soft error problems with a simple circuit. It is easy to apply the latch technique to various latches, such as single latches, scan latches, and flip-flops.

Uemura, Taiki; Tanabe, Ryo; Tosaka, Yoshiharu; Satoh, Shigeo

2008-04-01

134

Quantifications of error propagation in slope-based wavefront estimations.  

PubMed

We discuss error propagation in the slope-based and the difference-based wavefront estimations. The error propagation coefficient can be expressed as a function of the eigenvalues of the wavefront-estimation-related matrices, and we establish such functions for each of the basic geometries with the serial numbering scheme with which a square sampling grid array is sequentially indexed row by row. We first show that for the wavefront estimation with the wavefront piston value determined, the odd-number grid sizes yield better error propagators than the even-number grid sizes for all geometries. We further show that for both slope-based and difference-based wavefront estimations, the Southwell geometry offers the best error propagators with the minimum-norm least-squares solutions. Noll's theoretical result, which was extensively used as a reference in the previous literature for error propagation estimates, corresponds to the Southwell geometry with an odd-number grid size. Typically the Fried geometry is not preferred in slope-based optical testing because it either allows subsize wavefront estimations within the testing domain or yields a two-rank deficient estimation matrix, which usually suffers from high error propagation and the waffle mode problem. The Southwell geometry, with an odd-number grid size if a zero point is assigned for the wavefront, is usually recommended in optical testing because it provides the lowest error propagation for both slope-based and difference-based wavefront estimations. PMID:16985547
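The eigenvalue formulation can be illustrated on a drastically simplified 1-D analogue of a difference-based estimator (the paper treats 2-D Fried, Hudgin and Southwell geometries; this sketch only shows the mechanics): form the difference matrix, pin one point to fix the piston mode, and average the reciprocal eigenvalues of the normal matrix.

```python
import numpy as np

def error_propagator(n):
    """Noise propagation for a 1-D difference-based wavefront estimate:
    slopes s = D w + noise, with w[0] pinned to zero to fix piston.
    Returns the mean of 1/eigenvalue of D^T D (a propagation coefficient)."""
    # Forward-difference matrix mapping n phase values to n-1 slopes.
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    D = D[:, 1:]              # pin w[0] = 0 (removes the piston null space)
    eig = np.linalg.eigvalsh(D.T @ D)
    return float(np.mean(1.0 / eig))

for n in (9, 10, 11):
    print(n, error_propagator(n))
```

The coefficient grows with grid size, reflecting the accumulation of slope noise along the integration path; the paper's geometry comparison amounts to comparing such eigenvalue spectra for the 2-D matrices.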

Zou, Weiyao; Rolland, Jannick P

2006-10-01

135

Quantifications of error propagation in slope-based wavefront estimations  

NASA Astrophysics Data System (ADS)

We discuss error propagation in the slope-based and the difference-based wavefront estimations. The error propagation coefficient can be expressed as a function of the eigenvalues of the wavefront-estimation-related matrices, and we establish such functions for each of the basic geometries with the serial numbering scheme with which a square sampling grid array is sequentially indexed row by row. We first show that for the wavefront estimation with the wavefront piston value determined, the odd-number grid sizes yield better error propagators than the even-number grid sizes for all geometries. We further show that for both slope-based and difference-based wavefront estimations, the Southwell geometry offers the best error propagators with the minimum-norm least-squares solutions. Noll's theoretical result, which was extensively used as a reference in the previous literature for error propagation estimates, corresponds to the Southwell geometry with an odd-number grid size. Typically the Fried geometry is not preferred in slope-based optical testing because it either allows subsize wavefront estimations within the testing domain or yields a two-rank deficient estimation matrix, which usually suffers from high error propagation and the waffle mode problem. The Southwell geometry, with an odd-number grid size if a zero point is assigned for the wavefront, is usually recommended in optical testing because it provides the lowest error propagation for both slope-based and difference-based wavefront estimations.

Zou, Weiyao; Rolland, Jannick P.

2006-10-01

136

Results and Error Estimates from GRACE Forward Modeling over Antarctica  

NASA Astrophysics Data System (ADS)

Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.

Bonin, Jennifer; Chambers, Don

2013-04-01

137

Exact error-rate analysis of diversity 16-QAM with channel estimation error  

Microsoft Academic Search

The bit-error rate (BER) performance of multilevel quadrature amplitude modulation with pilot-symbol-assisted modulation channel estimation in static and Rayleigh fading channels is derived, both for single branch reception and maximal ratio combining diversity receiver systems. The effects of noise and estimator decorrelation on the received BER are examined. The high sensitivity of diversity systems to channel estimation error is investigated

Lingzhi Cao; Norman C. Beaulieu

2004-01-01

138

Experimental investigation of the systematic error on photomechanic methods induced by camera self-heating.  

PubMed

The systematic error for photomechanic methods caused by self-heating induced image expansion when using a digital camera was systematically studied, and a new physical model to explain the mechanism has been proposed and verified. The experimental results showed that the thermal expansion of the camera outer case and lens mount, instead of mechanical components within the camera, were the main reason for image expansion. The corresponding systematic error for both image analysis and fringe analysis based photomechanic methods were analyzed and measured, then error compensation techniques were proposed and verified. PMID:23546150

Ma, Qinwei; Ma, Shaopeng

2013-03-25

139

Error Estimates on the Retrieval of Precipitation using Satellite Passive Microwave Measurements  

NASA Astrophysics Data System (ADS)

Quantitative error estimates on rainfall retrieval are very important for developing an improved rainfall algorithm that is robust with respect to many uncertainty sources. However, because oceanic and land rainfall algorithms use different retrieval frameworks, physically based and empirically based respectively, error estimates over the two regimes (ocean and land) need to be implemented separately. We mainly focused on quantifying the uncertainties of oceanic rainfall retrieval. Error models that estimate rainfall uncertainties were developed based on a radiative transfer model (Wilheit et al., 1977). To construct the error models, drop size distribution uncertainty, beam filling error, calibration uncertainty and instrument noise were considered as primary uncertainty sources. Since error has two components (random and coupled), we partitioned errors into non-systematic (uncorrelated) and systematic (correlated) parts and treated them differently. For a given pixel, net rain rate uncertainties associated with the major error sources were approximated. Rainfall uncertainties were estimated on a pixel-by-pixel basis using TMI and AMSR-E data. Relative contributions of each uncertainty as a function of rain rate were computed, and the impact of those uncertainties on the channels of TMI and AMSR-E was examined. Results show that data calibration uncertainty dominates at low rain rates, resulting in the highest impact on the high frequency (37 GHz). On the other hand, variability of drop size distribution is a primary error source for heavy rainfall, and thus this uncertainty has the highest impact on the low frequency (10 GHz). In addition, we investigated error characteristics and major uncertainty sources for land precipitation. In particular, the effectiveness of the rain/no-rain screening method used in the current operational algorithm (NASA level 2 AMSR-E rainfall algorithm) was assessed.

Jin, K.; Hong, S.; Weitz, R.; Wilheit, T.

2006-05-01

140

Nonparametric error estimation techniques applied to MSTAR data sets  

NASA Astrophysics Data System (ADS)

The development of ATR performance characterization tools is very important for the design, evaluation and optimization of ATR systems. One possible approach for characterizing ATR performance is to develop measures of the degree of separability of the different target classes based on the available multi-dimensional image measurements. One such measure is the Bayes error, which is the minimum probability of misclassification. Bayes error estimates have previously been obtained using Parzen window techniques on real-aperture, high-range-resolution radar data sets and on simulated synthetic aperture radar (SAR) images. This report extends these results to real MSTAR SAR data. Our results show that the Parzen window technique is a good method for estimating the Bayes error for such large-dimensional data sets. However, in order to apply non-parametric error estimation techniques, feature reduction is needed. A discussion of the relationship between feature reduction and non-parametric estimation is included in this paper. The results of multimodal Parzen estimation on MSTAR images are also described. The tools used to produce the Bayes error estimates have been modified to produce Neyman-Pearson criterion estimates as well. Receiver Operating Characteristic curves are presented to illustrate non-parametric Neyman-Pearson error estimation on MSTAR images.
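The Parzen-window route to a Bayes error estimate can be shown on a toy 1-D two-class problem (synthetic Gaussians, not MSTAR data): estimate each class density with a Gaussian kernel, then integrate the minimum of the equal-prior densities.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
# Two 1-D Gaussian classes, equal priors; for a mean separation of
# 2 sigma the true Bayes error is Phi(-1), about 0.159.
X0 = rng.normal(0.0, 1.0, n)
X1 = rng.normal(2.0, 1.0, n)

h = 0.2  # Parzen window width (Gaussian kernel), chosen by hand

def parzen(grid, data, h):
    # Gaussian-kernel density estimate evaluated on a grid.
    z = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z * z).sum(axis=1) / (data.size * h * np.sqrt(2.0 * np.pi))

grid = np.linspace(-5.0, 7.0, 1200)
dg = grid[1] - grid[0]
p0, p1 = parzen(grid, X0, h), parzen(grid, X1, h)

# Bayes error estimate: half the integrated minimum of the class densities.
bayes_err = 0.5 * np.minimum(p0, p1).sum() * dg
print(f"estimated Bayes error {bayes_err:.3f}")
```

In high dimensions the grid integration is replaced by resubstitution or leave-one-out evaluation on the samples, which is why the report pairs the technique with feature reduction.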

Mehra, Raman K.; Huff, Melvyn; Ravichandran, Ravi B.; Williams, Arnold C.

1998-09-01

141

ANALYSIS AND RECOVERY OF SYSTEMATIC ERRORS IN AIRBORNE LASER SYSTEM  

Microsoft Academic Search

Although mature airborne laser system (ALS) products have been available for some years, in China the development of ALS is only at its starting stage. The Shanghai Institute of Technical Physics (SITP), CAS, is developing a new airborne laser instrument. Determining the systematic biases of an ALS is the most difficult task. The ultimate goal is to

Zhihe Wang; Rong Shu; Weiming Xu; Hongyi Pu; Bo Yao

142

Constant Altitude Flight Survey Method for Mapping Atmospheric Ambient Pressures and Systematic Radar Errors.  

National Technical Information Service (NTIS)

The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air d...

T. J. Larson; L. J. Ehernberger

1985-01-01

143

Bootstrap Estimates of Standard Errors in Generalizability Theory  

ERIC Educational Resources Information Center

Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…
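The bootstrap standard error itself is straightforward; the G-theory complications the abstract alludes to (bias in variance-component estimates) sit on top of this basic resampling loop. A generic sketch with invented score data, checked against the analytic standard error of a mean:

```python
import numpy as np

rng = np.random.default_rng(4)
scores = rng.normal(70.0, 10.0, 200)   # hypothetical examinee scores

def bootstrap_se(data, stat, n_boot=2000, rng=rng):
    """Bootstrap standard error: resample with replacement, recompute the
    statistic, and take the standard deviation across replicates."""
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

se_mean = bootstrap_se(scores, np.mean)
analytic = scores.std(ddof=1) / np.sqrt(scores.size)  # s / sqrt(n) check
print(f"bootstrap SE {se_mean:.4f}, analytic SE {analytic:.4f}")
```

For variance components the plug-in statistic is biased under naive resampling, which is exactly what the bias-correcting procedures discussed in the article address.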

Tong, Ye; Brennan, Robert L.

2007-01-01

144

Continuous and discrete state estimation with error covariance assignment  

Microsoft Academic Search

A novel method of state estimator design is presented for linear continuous and discrete-time systems with white noise inputs. This method provides a closed-form solution for directly assigning the steady-state estimation error covariance. The assignability conditions are interpreted using system theoretical concepts. A robustness property of such estimators is also pointed out

E. Yaz; R. E. Skelton

1991-01-01

145

Analysis of possible systematic errors in the Oslo method  

SciTech Connect

In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and {gamma}-ray transmission coefficient from a set of particle-{gamma} coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K. [Department of Physics, University of Oslo, N-0316 Oslo (Norway); Krticka, M. [Institute of Particle and Nuclear Physics, Charles University, Prague (Czech Republic); Betak, E. [Institute of Physics SAS, 84511 Bratislava (Slovakia); Faculty of Philosophy and Science, Silesian University, 74601 Opava (Czech Republic); Schiller, A.; Voinov, A. V. [Department of Physics and Astronomy, Ohio University, Athens, Ohio 45701 (United States)

2011-03-15

146

Numerical investigations on global error estimation for ordinary differential equations  

Microsoft Academic Search

Four techniques of global error estimation, namely Richardson extrapolation (RS), Zadunaisky's technique (ZD), Solving for the Correction (SC) and Integration of the Principal Error Equation (IPEE), have been compared in different integration codes (DOPRI5, DVODE, DSTEP). Theoretical aspects concerning their implementations and their orders are first given. Second, a comparison of them based on a large number of tests is
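The first of the four techniques, Richardson extrapolation (RS), is easy to sketch: integrate with step h and step h/2, then combine the two results to estimate the global error of the coarser run. The test problem below is a standard choice, not one of the paper's benchmarks.

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classical fixed-step RK4 integrator."""
    h, t, y = (t1 - t0) / n, t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y                  # y' = -y, y(0) = 1, exact y(1) = e^-1
y_h = rk4(f, 1.0, 0.0, 1.0, 20)      # step h
y_h2 = rk4(f, 1.0, 0.0, 1.0, 40)     # step h/2

# Richardson (RS) global error estimate for an order-p method:
# err(h) ~ (y_h - y_{h/2}) / (1 - 2^{-p}),  here p = 4.
est = (y_h - y_h2) / (1.0 - 2.0 ** -4)
true = y_h - math.exp(-1.0)
print(f"estimated global error {est:.3e}, true {true:.3e}")
```

RS roughly doubles the integration cost, which is one of the trade-offs the paper's comparison quantifies against ZD, SC and IPEE.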

René Aïd; Laurent Levacher

1997-01-01

147

Error Estimates for Generalized Barycentric Interpolation  

PubMed Central

We prove the optimal convergence estimate for first order interpolants used in finite element methods based on three major approaches for generalizing barycentric interpolation functions to convex planar polygonal domains. The Wachspress approach explicitly constructs rational functions, the Sibson approach uses Voronoi diagrams on the vertices of the polygon to define the functions, and the Harmonic approach defines the functions as the solution of a PDE. We show that given certain conditions on the geometry of the polygon, each of these constructions can obtain the optimal convergence estimate. In particular, we show that the well-known maximum interior angle condition required for interpolants over triangles is still required for Wachspress functions but not for Sibson functions.

Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

2011-01-01

148

Using doppler radar images to estimate aircraft navigational heading error  

DOEpatents

A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)

2012-07-03

149

Bit error rate estimation for channels with memory  

Microsoft Academic Search

A method for deriving key parameters of a bit-error-rate (BER) estimate is presented. The method is based on a bursty channel model proposed by B. D. Fritchman (IEEE Trans. Inform. Theory, vol. IT-13, pp. 221-227, Apr. 1967). This model is used to derive the confidence levels and confidence intervals of an error-rate estimate. The theoretical results have been confirmed experimentally by using

M. D. Knowles; A. I. Drukarev

1988-01-01

150

Frequency response and systematic errors in STI measurements  

NASA Astrophysics Data System (ADS)

The paper examines two fundamental problems in measuring and verifying the STI of sound systems. These relate to the overall accuracy of the measurement platform in use and to the fact that STI does not correctly account for system frequency response and spectral aberrations under certain conditions. As has previously been reported by the author, STI appears to significantly overestimate the intelligibility of limited-bandwidth sound systems under reverberant, high S/N ratio conditions. Further evidence and word-score comparisons for a number of different frequency response conditions, measured under several reverberant situations, are presented. The results indicate errors of up to 30%; comparisons with low-reverberant-field and anechoic conditions are also made. It is also shown that the measurement platform itself and the sound source characteristics can affect the resultant error. Tests, carried out under highly controlled conditions, are reported that enable the causes of the potential errors to be established. The implications of the reported limitations in STI and STIPa, when assessing the potential intelligibility of a sound system or the acoustics of a space, are discussed.

Mapp, Peter

2003-10-01

151

Stress Recovery and Error Estimation for 3-D Shell Structures  

NASA Technical Reports Server (NTRS)

The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

Riggs, H. R.

2000-01-01

152

Systematic Errors in Primary Acoustic Thermometry in the Range 2-20 K  

Microsoft Academic Search

Following a brief review of the fundamental principles of acoustic thermometry in the range 2-20 K, its systematic errors are analysed in depth. It is argued that the ultrasonic technique suffers from certain sources of error which are virtually impossible to assess quantitatively except on the basis of certain conjectures about the excitation of the thermometer's resonant cavity. These are

A. R. Colclough

1973-01-01

153

Interim Report on Systematic Errors in Nuclear Power Plants; Seismic Safety Margins Research Program.  

National Technical Information Service (NTIS)

This report is an investigation of systematic errors in nuclear power plants. A review of the Licensee Event Reports (LERs) for Zion-1 and 2 (1974-April 1979) and a study of 100 design errors compiled by the Oak Ridge National Laboratory (ORNL) for the pe...

P. Moieni; G. Apostolakis; G. E. Cummings

1980-01-01

154

Error estimates, error bounds, and adaptive refinement in finite-element quantum theory  

SciTech Connect

We derive a rigorous a posteriori bound on the error in energy eigenvalues obtained using C1 finite elements, as well as several a posteriori error estimates, and test them numerically with potentials of analytically known eigenspectrum. We also obtain numerical solutions, with error bounds and estimates, for the octic oscillator potential, illustrating the ability of the finite-element method to resolve nearly degenerate states with extremely narrow splitting. The incorporation of adaptive refinement is shown to reduce the number of degrees of freedom needed to achieve a given level of accuracy.

Fanchiotti, S. (Physics Department, New York University, New York, New York 10003 (USA)); Rubin, M.A. (Physics Department, Rockefeller University, New York, New York 10021 (USA))

1991-05-15

155

A multi-year methane inversion using SCIAMACHY, accounting for systematic errors using TCCON measurements  

NASA Astrophysics Data System (ADS)

This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large-scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground-based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between 2-year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors in the SCIAMACHY measurements are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.

Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.

2014-04-01

156

Stability and error estimation for Component Adaptive Grid methods  

NASA Technical Reports Server (NTRS)

Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. By applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.

Oliger, Joseph; Zhu, Xiaolei

1994-01-01

157

PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG  

SciTech Connect

The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places, but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
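The fitted period-error model quoted in this abstract is simple enough to sketch directly. The abstract notes that PEC itself is a few lines of C; the following is a hypothetical Python transcription of the quoted model only (the function name and the handling of the 62.5-day cutoff are assumptions, not the PEC code):

```python
import math

def kebc_period_error(period_days):
    """Period error estimate (days) for a KEBC eclipsing binary,
    using the model quoted in the abstract:
      log10(sigma_P) ~ -5.8908 + 1.4425 * (1 + log10(P))  for P < 62.5 d
      sigma_P        ~ 0.0144 d                            for P >= 62.5 d
    """
    if period_days < 62.5:
        return 10.0 ** (-5.8908 + 1.4425 * (1.0 + math.log10(period_days)))
    return 0.0144
```

For a 1-day binary this gives sigma_P of a few times 1e-5 days; note the two branches nearly agree at the 62.5-day cutoff, consistent with the constant ~0.0144-day error quoted for long-period systems.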

Mighell, Kenneth J. [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Plavchan, Peter [NASA Exoplanet Science Institute, California Institute of Technology, Pasadena, CA 91125 (United States)

2013-06-15

158

Assessment of systematic errors in the surface gravity anomalies over North America using the GRACE gravity model  

NASA Astrophysics Data System (ADS)

The surface gravity data collected via traditional techniques such as ground-based, shipboard and airborne gravimetry describe precisely the local gravity field, but they are often biased by systematic errors. On the other hand, the spherical harmonic gravity models determined from satellite missions, in particular, recent models from CHAMP and GRACE, homogeneously and accurately describe the low-degree components of the Earth's gravity field. However, they are subject to large omission errors. The surface and satellite gravity data are therefore complementary in terms of spectral composition. In this paper, we aim to assess the systematic errors of low spherical harmonic degrees in the surface gravity anomalies over North America using a GRACE gravity model. A prerequisite is the extraction of the low-degree components from the surface data to make them compatible with GRACE data. Three types of methods are tested using synthetic data: low-pass filtering, the inverse Stokes integral, and spherical harmonic analysis. The results demonstrate that the spherical harmonic analysis works best. Eighty-five per cent of the difference between the synthetic gravity anomalies generated from EGM96 and GGM02S from degrees 2 to 90 can be modelled for a region covering North America and neighbouring areas. Assuming EGM96 is developed solely from the surface gravity data with the same accuracy and GGM02S is errorless, one way to understand the 85 per cent difference is that it represents the systematic error from the region of study, while the remaining 15 per cent originates from the data outside of the region. To estimate systematic errors in the surface gravity data, Helmert gravity anomalies are generated from both surface and GRACE data on the geoid. Their differences are expanded into surface spherical harmonics. The results show that the systematic errors for degrees 2 to 90 range from about -6 to 13 mGal with a RMS value of 1.4 mGal over North America. 
A few significant data gaps can be identified from the resulting error map. The errors over oceans appear to be related to the sea surface topography. These systematic errors must be taken into consideration when the surface gravity data are used to validate future satellite gravity missions.

Huang, J.; Véronneau, M.; Mainville, A.

2008-10-01

159

An Empirical State Error Covariance Matrix for Batch State Estimation  

NASA Technical Reports Server (NTRS)

State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. 
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
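As a rough illustration of the idea described above (not Frisbee's exact formulation), the sketch below rescales the traditional weighted-least-squares covariance by the average weighted residual variance, so that the actual measurement residuals, and hence all error sources, enter the covariance. The function name and the specific scaling are assumptions:

```python
import numpy as np

def wls_covariances(A, W, y):
    """Weighted least squares fit plus two covariance matrices:
    the traditional (A^T W A)^-1, which maps only the assumed
    observation-error weights into state space, and an 'empirical'
    version rescaled by the average weighted residual variance,
    so that the actual residuals drive the uncertainty estimate."""
    AtW = A.T @ W
    P_theory = np.linalg.inv(AtW @ A)   # traditional covariance
    x_hat = P_theory @ (AtW @ y)        # WLS state estimate
    r = y - A @ x_hat                   # measurement residuals
    m, n = A.shape
    s2 = (r @ W @ r) / (m - n)          # average weighted residual variance
    P_emp = s2 * P_theory               # empirical covariance
    return x_hat, P_theory, P_emp
```

When the assumed weights match the true observation errors, s2 is near 1 and the two matrices agree; mismodeled errors inflate (or deflate) the empirical matrix accordingly.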

Frisbee, Joseph H., Jr.

2011-01-01

160

COBE Differential Microwave Radiometers - Preliminary systematic error analysis  

NASA Technical Reports Server (NTRS)

The techniques available for the identification and subtraction of sources of dynamic uncertainty from data of the Differential Microwave Radiometer (DMR) instrument aboard COBE are discussed. Preliminary limits on the magnitude of these uncertainties in the DMR 1 yr maps are presented. Residual uncertainties in the best DMR sky maps, after correcting the raw data for systematic effects, are less than 6 micro-K for the pixel rms variation, less than 3 micro-K for the rms quadrupole amplitude of a spherical harmonic expansion, and less than 30 micro-(K-squared) for the correlation function.

Kogut, A.; Smoot, G. F.; Bennett, C. L.; Wright, E. L.; Aymon, J.; De Amici, G.; Hinshaw, G.; Jackson, P. D.; Kaita, E.; Keegstra, P.

1992-01-01

161

CMB beam systematics: Impact on lensing parameter estimation  

NASA Astrophysics Data System (ADS)

The cosmic microwave background (CMB) is a rich source of cosmological information. Thanks to the simplicity and linearity of the theory of cosmological perturbations, observations of the CMB’s polarization and temperature anisotropy can reveal the parameters that describe the contents, structure, and evolution of the cosmos. Temperature anisotropy is necessary but not sufficient to fully mine the CMB of its cosmological information as it is plagued with various parameter degeneracies. Fortunately, CMB polarization breaks many of these degeneracies and adds new information and increased precision. Of particular interest is the CMB’s B-mode polarization, which provides a handle on several cosmological parameters most notably the tensor-to-scalar ratio r and is sensitive to parameters that govern the growth of large-scale structure and evolution of the gravitational potential. These imprint CMB temperature anisotropy and cause E-to-B-mode polarization conversion via gravitational lensing. However, both primordial gravitational-wave- and secondary lensing-induced B-mode signals are very weak and therefore prone to various foregrounds and systematics. In this work we use Fisher-matrix-based estimations and apply, for the first time, Monte Carlo Markov Chain simulations to determine the effect of beam systematics on the inferred cosmological parameters from five upcoming experiments: PLANCK, POLARBEAR, SPIDER, QUIET+CLOVER, and CMBPOL. We consider beam systematics that couple the beam substructure to the gradient of temperature anisotropy and polarization (differential beamwidth, pointing offsets and ellipticity) and beam systematics due to differential beam normalization (differential gain) and orientation (beam rotation) of the polarization-sensitive axes (the latter two effects are insensitive to the beam substructure). 
We determine allowable levels of beam systematics for given tolerances on the induced parameter errors and check for possible biases in the inferred parameters concomitant with potential increases in the statistical uncertainty. All our results are scaled to the “worst case scenario.” In this case, and for our tolerance levels the beam rotation should not exceed the few-degree to subdegree level, typical ellipticity is required to be 1%, the differential gain allowed level is a few parts in 103 to 104, differential beam width upper limits are of the subpercent level, and differential pointing should not exceed the few- to sub-arc sec level.

Miller, N. J.; Shimon, M.; Keating, B. G.

2009-03-01

162

A posteriori error estimator for finite volume methods  

Microsoft Academic Search

In this paper, we first discuss a technique to compare the finite volume method with some well-known finite element methods, namely the dual mixed methods and nonconforming primal methods, for elliptic equations. Both equivalences are exploited to give an a posteriori error estimator for finite volume methods. This estimator is explicitly given, easy to compute, and asymptotically exact without any

Abdellatif Agouzal; Fabienne Oudin

2000-01-01

163

Digital-PLL Assisted Frequency Estimation with Improved Error Variance  

Microsoft Academic Search

In this paper we present a novel frequency estimation technique, assisted by an imperfect second-order arctan-based Digital Phase-Locked Loop (D-PLL), for complex single-sinusoid signals in additive white Gaussian noise. The imperfect loop contains the frequency information in its phase error process at steady state, which is then used to estimate the frequency after the signal has been

Kandeepan Sithamparanathan

2008-01-01

164

Estimation of regression coefficients in case of differentiable error processes  

Microsoft Academic Search

Consider the problem of estimating the coefficient ? in the regression model where the regression function f is similar to the covariance kernel R of the error process N, i.e., f is an element of the reproducing kernel Hilbert space associated with R. Conventional approaches discuss asymptotically optimal estimators if the kernel satisfies certain regularity conditions and if f is

Michael Weba

2006-01-01

165

Some A Posteriori Error Estimators for Elliptic Partial Differential Equations  

Microsoft Academic Search

We present three new a posteriori error estimators in the energy norm for finite element solutions to elliptic partial differential equations. The estimators are based on solving local Neumann problems in each element. The estimators differ in how they enforce consistency of the Neumann problems. We prove that as the mesh size decreases, under suitable assumptions, two of the estimators approach upper bounds on

Randolph E. Bank; Alan Weiser

1985-01-01

166

Adaptive Error Estimation in Linearized Ocean General Circulation Models  

NASA Technical Reports Server (NTRS)

Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation, with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. 
This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

Chechelnitsky, Michael Y.

1999-01-01

167

Analysis of the Systematic Errors Found in the Kipp & Zonen Large-Aperture Scintillometer  

NASA Astrophysics Data System (ADS)

Studies have shown a systematic error in the Kipp & Zonen large-aperture scintillometer (K&ZLAS) measurements of the sensible heat flux, H. We improved on these studies and compared four K&ZLASs with a Wageningen large-aperture scintillometer at the Chilbolton Observatory. The scintillometers were installed such that their footprints were the same, and independent flux measurements were made along the measurement path. This allowed us to compare H and the direct scintillometer output, the refractive index structure parameter, Cn2. Furthermore, spectral analysis was performed on the raw scintillometer signal to investigate the characteristics of the error. Firstly, correlation coefficients ≥ 0.99 confirm the robustness of the scintillometer method, and secondly we discovered two systematic errors: the low-Cn2 error and the high-Cn2 error. The low-Cn2 error is a non-linear error that is caused by high-frequency noise, and we suspect the error to be caused by the calibration circuit in the receiver. It varies between each K&ZLAS, is significant for H ≤ 50 W m-2, and we propose a solution to remove this error using the demodulated signal. The high-Cn2 error identified by us is the systematic error found in previous studies. We suspect this error to be caused by poor focal alignment of the receiver detector and the transmitter light-emitting diode, which causes ineffective use of the Fresnel lens in the current Kipp & Zonen design. It varies between each K&ZLAS (35% up to 240%) and can only be removed by comparing with a reference scintillometer in the field.

van Kesteren, B.; Hartogensis, O. K.

2011-03-01

168

Analysis of systematic error in “bead method” measurements of meteorite bulk volume and density  

NASA Astrophysics Data System (ADS)

The Archimedean glass bead method for determining meteorite bulk density has become widely applied. We used well characterized, zero-porosity quartz and topaz samples to determine the systematic error in the glass bead method to support bulk density measurements of meteorites for our ongoing meteorite survey. Systematic error varies according to bead size, container size and settling method, but in all cases is less than 3%, and generally less than 2%. While measurements using larger containers (above 150 cm3) exhibit no discernible systematic error but much reduced precision, higher-precision measurements with smaller containers do exhibit systematic error. For a 77 cm3 container using 40-80 µm diameter beads, the systematic error is effectively eliminated within measurement uncertainties when a "secured shake" settling method is employed, in which the container is held securely to the shake platform during a 5 s period of vigorous shaking. For larger 700-800 µm diameter beads using the same method, bulk volumes are uniformly overestimated by 2%. Other settling methods exhibit sample-volume-dependent biases. For all methods, reliability of measurement is severely reduced for samples below ~5 cm3 (10-15 g for typical meteorites), providing a lower-limit selection criterion for measurement of meteoritical samples.

Macke S. J., Robert J.; Britt, Daniel T.; Consolmagno S. J., Guy J.

2010-02-01

169

Bounded error parameter estimation: a sequential analytic center approach  

Microsoft Academic Search

In this paper, a sequential analytic-center approach for bounded error parameter estimation is proposed. The analytic center minimizes the “average” output error and allows an easy-to-compute sequential algorithm. With little computational effort two ellipsoids centered at the analytic center can be obtained as well: One inscribes and the other outscribes the so-called membership set. Finally, a sequential algorithm is presented

Er-Wei Bai; Yinyu Ye; Roberto Tempo

1997-01-01

170

Estimation of rod scale errors in geodetic leveling  

USGS Publications Warehouse

Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

1995-01-01

171

SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.  

SciTech Connect

Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) a penta-prism scanning mode without a reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

2007-08-25

172

Verification of unfold error estimates in the unfold operator code  

SciTech Connect

Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.
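The Monte Carlo check described here is easy to reproduce on a toy problem. The sketch below substitutes a hypothetical 8-channel, 4-bin weighted least-squares unfold for the UFO code (the response matrix and spectrum are invented); the 5% imprecision and 100 random data sets follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear unfold: weighted least squares with response matrix R
R = rng.uniform(0.1, 1.0, size=(8, 4))     # 8 channels, 4 spectral bins
s_true = np.array([3.0, 5.0, 2.0, 4.0])    # "true" spectrum
d0 = R @ s_true                            # noise-free data
sigma = 0.05 * d0                          # 5% imprecision (standard deviation)

# Built-in estimate: diagonal of the least-squares error matrix
W = np.diag(1.0 / sigma**2)
cov_builtin = np.linalg.inv(R.T @ W @ R)
err_builtin = np.sqrt(np.diag(cov_builtin))

# Monte Carlo estimate: unfold 100 Gaussian-perturbed data sets
unfolds = []
for _ in range(100):
    d = d0 + rng.normal(0.0, sigma)
    unfolds.append(np.linalg.solve(R.T @ W @ R, R.T @ W @ d))
err_mc = np.std(unfolds, axis=0, ddof=1)

# With only 100 samples the two agree to within the MC statistical resolution
print(err_builtin)
print(err_mc)
```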

Fehl, D.L.; Biggs, F. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)]

1997-01-01

173

Query Size Estimation for Joins Using Systematic Sampling  

Microsoft Academic Search

We propose a new approach to the estimation of query result sizes for join queries. The technique, which we have called “systematic sampling—SYSSMP”, is a novel variant of the sampling-based approach. A key novelty of the systematic sampling is that it exploits the sortedness of data; the result of this is that the sample relation obtained well represents the underlying
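The abstract is truncated, so the following is only a generic sketch of sampling-based join-size estimation with a systematic (every step-th tuple) sample over a sorted join column; the function names and the scale-up rule are assumptions, not SYSSMP itself:

```python
import random
from collections import Counter

def systematic_sample(relation, step, rng):
    """Every step-th tuple starting at a random offset."""
    start = rng.randrange(step)
    return relation[start::step]

def estimate_join_size(r, s, step, rng):
    """Estimate |r JOIN s| on a single join attribute from a systematic
    sample of r; each sampled tuple stands in for `step` tuples of r."""
    s_freq = Counter(s)
    sample = systematic_sample(r, step, rng)
    return step * sum(s_freq[v] for v in sample)

# Sorted join column: systematic sampling spreads the sample evenly
r = sorted(v for v in range(100) for _ in range(10))   # 1000 tuples
s = [v for v in range(100) for _ in range(5)]          #  500 tuples
est = estimate_join_size(r, s, step=10, rng=random.Random(0))
true_size = sum(Counter(r)[v] * c for v, c in Counter(s).items())
print(est, true_size)   # exact here: every block of 10 equal values is hit once
```

Because the column is sorted and each value occupies a block the size of the sampling step, the systematic sample contains exactly one representative per value, which is the sortedness advantage the abstract alludes to.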

Anne H. H. Ngu; Banchong Harangsri; John Shepherd

2004-01-01

174

Systematic diffuse optical image errors resulting from uncertainty in the background optical properties  

NASA Astrophysics Data System (ADS)

We investigated the diffuse optical image errors resulting from systematic errors in the background scattering and absorption coefficients, Gaussian noise in the measurements, and the depth at which the image is reconstructed when using a 2D linear reconstruction algorithm for a 3D object. The fourth Born perturbation approach was used to generate reflectance measurements and k-space tomography was used for the reconstruction. Our simulations using both single and dual wavelengths show large systematic errors in the absolute reconstructed absorption coefficients and corresponding hemoglobin concentrations, while the errors in the relative oxy- and deoxy-hemoglobin concentrations are acceptable. The greatest difference arises from a systematic error in the depth at which an image is reconstructed. While an absolute reconstruction of the hemoglobin concentrations can deviate by 100% for a depth error of ±1 mm, the error in the relative concentrations is less than 5%. These results demonstrate that while quantitative diffuse optical tomography is difficult, images of the relative concentrations of oxy- and deoxy-hemoglobin are accurate and robust. Other results, not presented, confirm that these findings hold for other linear reconstruction techniques (i.e. SVD and SIRT) as well as for transmission through slab geometries.

Cheng, Xuefeng; Boas, David A.

1999-04-01

175

Error estimates for CCMP ocean surface wind data sets  

NASA Astrophysics Data System (ADS)

The cross-calibrated, multi-platform (CCMP) ocean surface wind data sets are now available at the Physical Oceanography Distributed Active Archive Center from July 1987 through December 2010. These data support wide-ranging air-sea research and applications. The main Level 3.0 data set has global ocean coverage (within 78S-78N) with 25-kilometer resolution every 6 hours. An enhanced variational analysis method (VAM) quality controls and optimally combines multiple input data sources to create the Level 3.0 data set. Data included are all available RSS DISCOVER wind observations, in situ buoys and ships, and ECMWF analyses. The VAM is set up to use the ECMWF analyses to fill in areas of no data and to provide an initial estimate of wind direction. As described in an article in the Feb. 2011 BAMS, when compared to conventional analyses and reanalyses, the CCMP winds are significantly different in some synoptic cases, result in different storm statistics, and provide enhanced high-spatial resolution time averages of ocean surface wind. We plan enhancements to produce estimated uncertainties for the CCMP data. We will apply the method of Desroziers et al. for the diagnosis of error statistics in observation space to the VAM O-B, O-A, and B-A increments. To isolate particular error statistics we will stratify the results by which individual instruments were used to create the increments. Then we will use cross-validation studies to estimate other error statistics. For example, comparisons in regions of overlap for VAM analyses based on SSMI and QuikSCAT separately and together will enable estimating the VAM directional error when using SSMI alone. Level 3.0 error estimates will enable construction of error estimates for the time averaged data sets.
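The Desroziers et al. diagnostics mentioned above can be demonstrated in a scalar toy assimilation (a sketch with invented error statistics, not the VAM): products of the O-B, O-A, and A-B increments recover the observation- and background-error variances:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
sigma_b, sigma_o = 2.0, 1.0                 # assumed background / observation errors
truth = rng.normal(0.0, 5.0, n)
xb = truth + rng.normal(0.0, sigma_b, n)    # background
y = truth + rng.normal(0.0, sigma_o, n)     # observations

# Optimal scalar analysis with consistent error statistics
K = sigma_b**2 / (sigma_b**2 + sigma_o**2)
xa = xb + K * (y - xb)

d_ob = y - xb    # O-B increments
d_oa = y - xa    # O-A increments
d_ab = xa - xb   # A-B increments

# Desroziers consistency diagnostics in observation space:
var_o_hat = np.mean(d_oa * d_ob)   # estimates sigma_o^2 = 1
var_b_hat = np.mean(d_ab * d_ob)   # estimates sigma_b^2 = 4
print(var_o_hat, var_b_hat)
```

In the CCMP application the same products, stratified by instrument, would be used to isolate per-instrument error statistics.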

Atlas, R. M.; Hoffman, R. N.; Ardizzone, J.; Leidner, S.; Jusem, J.; Smith, D. K.; Gombos, D.

2011-12-01

176

Error estimation and adaptivity in Navier-Stokes incompressible flows  

NASA Astrophysics Data System (ADS)

An adaptive remeshing procedure for solving Navier-Stokes incompressible fluid flow problems is presented in this paper. This procedure has been implemented using the error estimator developed by Zienkiewicz and Zhu (1987, 1989) and a semi-implicit time-marching scheme for Navier-Stokes flow problems (Zienkiewicz et al. 1990). Numerical examples are presented, showing that the error estimation and adaptive procedure are capable of monitoring the flow field, updating the mesh when necessary, and providing nearly optimal meshes throughout the calculation, thus making the solution reliable and the computation economical and efficient.

Wu, J.; Zhu, J. Z.; Szmelter, J.; Zienkiewicz, O. C.

1990-07-01

177

Geodesy by radio interferometry - Effects of atmospheric modeling errors on estimates of baseline length  

NASA Technical Reports Server (NTRS)

Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm, at all elevations down to 5 deg elevation, and introduces errors into the estimates of baseline length of less than about 1 cm, for the multistation intercontinental experiment analyzed here.
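A common shape for such an elevation-dependent 'mapping' function is a continued fraction normalized to unity at zenith. The sketch below is in the Herring/Niell style; the coefficients are purely illustrative, not the fitted values of this paper:

```python
import math

def mapping_function(elev_deg, a, b, c):
    """Continued-fraction mapping function: ratio of slant atmospheric
    delay to zenith delay, normalized so m(90 deg) = 1."""
    s = math.sin(math.radians(elev_deg))
    top = 1.0 + a / (1.0 + b / (1.0 + c))
    bot = s + a / (s + b / (s + c))
    return top / bot

# Illustrative coefficients (hypothetical, not from Davis et al.)
a, b, c = 1.2e-3, 3.1e-3, 6.0e-2
print(mapping_function(90.0, a, b, c))   # 1.0 at zenith
print(mapping_function(5.0, a, b, c))    # roughly 1/sin(5 deg), i.e. about 10
```

The continued-fraction terms are what keep the function close to ray-trace results at low elevations, where the naive 1/sin(e) form breaks down.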

Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.

1985-01-01

178

Systematic errors of EIT systems determined by easily-scalable resistive phantoms.  

PubMed

We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design. PMID:18544805

Hahn, G; Just, A; Dittmar, J; Hellige, G

2008-06-01

179

Iraq War mortality estimates: A systematic review  

Microsoft Academic Search

BACKGROUND: In March 2003, the United States invaded Iraq. The subsequent number, rates, and causes of mortality in Iraq resulting from the war remain unclear, despite intense international attention. Understanding mortality estimates from modern warfare, where the majority of casualties are civilian, is of critical importance for public health and protection afforded under international humanitarian law. We aimed to review

Christine Tapp; Frederick M Burkle Jr; Kumanan Wilson; Tim Takaro; Gordon H Guyatt; Hani Amad; Edward J Mills

2008-01-01

180

A Precise Error Bound for Quantum Phase Estimation  

PubMed Central

Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the number of extra qubits added to improve reliability goes to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required in order to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers.
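For context, the textbook outcome distribution of phase estimation can be computed directly (a numerical sketch, not the paper's exact error formula); it exhibits the well-known lower bound of 4/π² on the probability of the best t-bit estimate:

```python
import numpy as np

def qpe_distribution(phi, t):
    """Outcome probabilities of textbook quantum phase estimation
    with t counting qubits and true phase phi in [0, 1)."""
    N = 2 ** t
    k = np.arange(N)
    m = np.arange(N)
    # Amplitude of outcome m: (1/N) * sum_k exp(2*pi*i*k*(phi - m/N))
    amp = np.exp(2j * np.pi * np.outer(k, phi - m / N)).sum(axis=0) / N
    return np.abs(amp) ** 2

t, phi = 6, 0.3
p = qpe_distribution(phi, t)
best = int(np.argmin(np.abs(np.arange(2 ** t) / 2 ** t - phi)))
print(p.sum())    # probabilities sum to 1
print(p[best])    # at least 4/pi^2, approximately 0.405
```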

Chappell, James M.; Lohe, Max A.; von Smekal, Lorenz; Iqbal, Azhar; Abbott, Derek

2011-01-01

181

Condition and Error Estimates in Numerical Matrix Computations  

SciTech Connect

This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating--point machine arithmetics are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

Konstantinov, M. M. [University of Architecture, Civil Engineering and Geodesy, 1046 Sofia (Bulgaria); Petkov, P. H. [Technical University of Sofia, 1000 Sofia (Bulgaria)

2008-10-30

182

A sharp error estimate for the fast Gauss transform  

NASA Astrophysics Data System (ADS)

We report an error estimate of the multi-dimensional fast Gauss transform (FGT), which is much sharper than that previously reported in the literature. An application to the Karhunen-Loeve decomposition in the three-dimensional physical space is also presented that shows savings of three orders of magnitude in time and memory compared to a direct solver.

Wan, Xiaoliang; Karniadakis, George Em

2006-11-01

183

Probabilistic estimation of the concentration error of solar radiation  

Microsoft Academic Search

Mathematical relations are established for the estimation of the radiation concentration errors, based on the specificity of the orientation system and the geometric probability theory. The stopping point of the track and the final correction are correlated and a probability density function is obtained. The assymetric character of the dispersion of the reflected rays results from the analysis of the

V. Badescu

1981-01-01

184

Soft decision metric generation for QAM with channel estimation error  

Microsoft Academic Search

The channel code bit log likelihood ratio (LLR) for soft decision decoding is derived for quadrature amplitude modulated signals (QAM). The effect of imperfect channel knowledge on soft decision decoding performance is studied. Our results indicate this effect increases with channel estimation error and\\/or QAM modulation level. A metric based on generalized log likelihood ratio (GLLR) is derived for soft

Michael Mao Wang; Weimin Xiao; Tyler Brown

2002-01-01

185

Bit error rate estimation techniques for digital land mobile radios  

Microsoft Academic Search

Two methods are shown to provide reliable estimates of the raw channel BER (bit error rate) over channels which are characterized as having Rayleigh fading. In the first method, it is shown that pseudorandom noise sequences when interleaved within the transmitted data can be used to compute an autocorrelation parameter in the receiver from which the channel BER may be
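The first method can be illustrated in its simplest form: correlating received ±1 symbols against the local replica of the interleaved PN sequence gives a normalized correlation ρ = 1 - 2·BER. This sketch uses an idealized memoryless bit-flip channel rather than the Rayleigh-fading channels of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
true_ber = 0.05

# Known PN sequence interleaved within the transmitted data, mapped to +/-1
pn = rng.integers(0, 2, n)
tx = 1 - 2 * pn
flips = rng.random(n) < true_ber        # hard bit errors on the channel
rx = np.where(flips, -tx, tx)

# Correlate received symbols against the local PN replica;
# for +/-1 symbols the normalized correlation is rho = 1 - 2*BER
rho = np.mean(rx * tx)
ber_hat = (1.0 - rho) / 2.0
print(ber_hat)
```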

Keith G. Cornett; Stephen B. Wicker

1991-01-01

186

Error Estimates for the Approximation of the Effective Hamiltonian  

SciTech Connect

We study approximation schemes for the cell problem arising in homogenization of Hamilton-Jacobi equations. We prove several error estimates concerning the rate of convergence of the approximation scheme to the effective Hamiltonian, both in the optimal control setting and in the calculus of variations setting.

Camilli, Fabio [Univ. dell'Aquila, Dip. di Matematica Pura e Applicata (Italy)], E-mail: camilli@ing.univaq.it; Capuzzo Dolcetta, Italo [Univ. di Roma 'La Sapienza', Dip. di Matematica (Italy)], E-mail: capuzzo@mat.uniroma1.it; Gomes, Diogo A. [Instituto Superior Tecnico, Departamento de Matematica (Portugal)], E-mail: dgomes@math.ist.utl.pt

2008-02-15

187

Concise Formulas for the Standard Errors of Component Loading Estimates.  

ERIC Educational Resources Information Center

Derived formulas for the asymptotic standard errors of component loading estimates to cover the cases of principal component analysis for unstandardized and standardized variables with orthogonal and oblique rotations. Used the formulas with a real correlation matrix of 355 subjects who took 12 psychological tests. (SLD)

Ogasawara, Haruhiko

2002-01-01

188

Multiframe selective information fusion from robust error estimation theory  

Microsoft Academic Search

A dynamic procedure for selective information fusion from multiple image frames is derived from robust error estimation theory. The fusion rate is driven by the anisotropic gain function, defined to be the difference between the Gaussian smoothed-edge maps of a given input frame and of an evolving synthetic output frame. The gain function achieves both selection and rapid fusion of

Sarah John; Mikhail A. Vorontsov

2005-01-01

189

A Relative Motion Estimation Using a Bounded Error Method  

Microsoft Academic Search

A bounded-error methodology applied to estimate relative displacements of an indoor mobile robot equipped with a laser range-finder that periodically delivers a range scan of the environment is presented. The method is based on the fusion between the range readings delivered by the laser and the feedback control input used to constraint the robot to move in its free workspace.

Alessandro Correa Victorino; Patrick Rives; Jean-Jacques Borrelly

2002-01-01

190

Prediction Error and Its Estimation for Subset-Selected Models  

Microsoft Academic Search

Strategies are compared for development of a linear regression model and the subsequent assessment of its predictive ability. Simulations were performed as a designed experiment over a range of data structures. Approaches using a forward selection of variables resulted in slightly smaller prediction errors and less biased estimators of predictive accuracy than all possible subsets selection but often did not

Ellen B. Roecker

1991-01-01

191

Instrumental Variable Estimation of Nonlinear Errors-in-Variables Models  

Microsoft Academic Search

In linear specifications, the bias due to the presence of measurement error in a regressor can be entirely avoided when either repeated measurements or instruments are available for the mismeasured regressor. The situation is more complex in nonlinear settings. While identification and root n consistent estimation of general nonlinear specifications have recently been proven in the presence of repeated measurements,

Susanne M. Schennach

2004-01-01

192

Bootstrap Standard Error Estimates in Dynamic Factor Analysis  

ERIC Educational Resources Information Center

Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

Zhang, Guangjian; Browne, Michael W.

2010-01-01

193

Estimating Filtering Errors Using the Peano Kernel Theorem  

SciTech Connect

The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
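For reference, the theorem underlying these estimates, stated in its standard form (the report's frequency-domain derivation is not reproduced here): if the error functional E annihilates all polynomials of degree at most n-1, then

```latex
E(f) = \int_a^b K_n(t)\, f^{(n)}(t)\, dt,
\qquad
K_n(t) = \frac{1}{(n-1)!}\, E_x\!\left[ (x - t)_+^{\,n-1} \right],
\qquad
|E(f)| \le \bigl\| f^{(n)} \bigr\|_\infty \int_a^b |K_n(t)|\, dt .
```

Taking E to be the difference between a signal and its filtered version yields the simple, accurate error formulas the abstract refers to, once the kernel K_n is computed for the given filter.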

Jerome Blair

2008-03-01

194

TECHNICAL DESIGN NOTE: Elimination of systematic errors in two-mode laser telemetry  

NASA Astrophysics Data System (ADS)

We present a simple two-mode telemetry procedure which eliminates cyclic errors, to allow accurate absolute distance measurements. We show that phase drifts and cyclic errors are suppressed using a fast polarization switch that exchanges the roles of the reference and measurement paths. Preliminary measurements obtained using this novel design show a measurement stability better than 1 µm. Sources of residual noise and systematic errors are identified, and we expect that an improved but still simple version of the apparatus will allow accuracies in the nanometre range for absolute measurements of kilometre-scale distances.

Courde, C.; Lintz, M.; Brillet, A.

2009-12-01

195

Test models for improving filtering with model errors through stochastic parameter estimation  

SciTech Connect

The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

Gershgorin, B. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States); Harlim, J. [Department of Mathematics, North Carolina State University, NC 27695 (United States)], E-mail: jharlim@ncsu.edu; Majda, A.J. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States)

2010-01-01

196

Influence of systematic error on least squares retrieval of upper atmospheric parameters from the ultraviolet airglow  

Microsoft Academic Search

This paper investigates the effect of simple systematic error, or bias (i.e., in the magnitude of data or an associated model), on physical parameters retrieved by least squares algorithms from observations that are indexed by an independent variable. This factor is now of critical interest with the advent of global, space-based ultraviolet remote sensing of thermospheric and ionospheric composition by

J. M. Picone

2008-01-01

197

Uncertainty in GPS Networks due to Remaining Systematic Errors: The Interval Approach  

Microsoft Academic Search

A realistic assessment of the total uncertainty budget of Global Positioning System (GPS) observations and its adequate mathematical treatment is a basic requirement for all analysis and interpretation of GPS-derived point positions, in particular GPS heights, and their respective changes. This implies not only the random variability but also the remaining systematic errors. At present in geodesy, the main focus

S. Schön; H. Kutterer

2006-01-01

198

A systematic literature review to identify and classify software requirement errors  

Microsoft Academic Search

Most software quality research has focused on identifying faults (i.e., information is incorrectly recorded in an artifact). Because software still exhibits incorrect behavior, a different approach is needed. This paper presents a systematic literature review to develop taxonomy of errors (i.e., the sources of faults) that may occur during the requirements phase of software lifecycle. This taxonomy is designed to

Gursimran Singh Walia; Jeffrey C. Carver

2009-01-01

199

Moderate alcohol use and reduced mortality risk: Systematic error in prospective studies  

Microsoft Academic Search

The majority of prospective studies on alcohol use and mortality risk indicates that abstainers are at increased risk of mortality from both all causes and coronary heart disease (CHD). This meta-analysis of 54 published studies tested the extent to which a systematic misclassification error was committed by including as 'abstainers' many people who had reduced or stopped drinking, a phenomenon

Kaye Middleton Fillmore; William C. Kerr; Tim Stockwell; Tanya Chikritzhs; Alan Bostrom

2006-01-01

200

Analysis of systematic errors of the ASM/RXTE monitor and GT-48 γ-ray telescope  

NASA Astrophysics Data System (ADS)

The observational data concerning variations of the light curves of supernova remnants (the Crab Nebula, Cassiopeia A, Tycho Brahe) and the pulsar Vela over a 14-day timescale that may be attributed to systematic errors of the ASM/RXTE monitor are presented. The experimental systematic errors of the GT-48 γ-ray telescope in the mono mode of operation were also determined. For this, the observational data of TeV J2032 + 4130 (Cyg γ-2, according to the Crimean version) were used; the stationary nature of its γ-ray emission was confirmed by long-term observations performed with HEGRA and MAGIC. The results of this research allow us to draw the following conclusions: (1) light curves of supernova remnants averaged over long observing periods contain false statistically significant flux variations, (2) the level of systematic errors is proportional to the registered flux and decreases with increasing temporal scale of averaging, (3) the light curves of sources may be modulated with an annual period, and (4) the systematic errors of the GT-48 γ-ray telescope caused by observations in the mono mode and data processing with the stereo algorithm amount to 0.12 min⁻¹.

Fidelis, V. V.

2011-06-01

201

Temporal correlations of atmospheric mapping function errors in GPS estimation  

NASA Astrophysics Data System (ADS)

The developments in global satellite navigation using GPS, GLONASS, and Galileo will yield more observations at various elevation angles. The inclusion of data acquired at low elevation angles allows for geometrically stronger solutions. The vertical coordinate estimate of a GPS site is one of the parameters affected by the elevation-dependent error sources, especially the atmospheric corrections, whose proper description becomes necessary. In this work, we derive time-series of normalized propagation delays in the neutral atmosphere using ray tracing of radiosonde data, and compare these to the widely used new mapping functions (NMF) and improved mapping functions (IMF). Performance analysis of mapping functions is carried out in terms of bias and uncertainty introduced in the vertical coordinate. Simulation runs show that time-correlated mapping errors introduce vertical coordinate RMS errors as large as 4 mm for an elevation cut-off angle of 5°. When simulation results are compared with a geodetic GPS solution, the variations in the vertical coordinate due to mapping errors for an elevation cut-off of 5° are similar in magnitude to those caused by all error sources combined at 15° cut-off. This is significant for the calculation of the error budget in geodetic GPS applications. The results presented here are valid for a limited area in North Europe, but the technique is applicable to any region provided that radiosonde data are available.

Stoew, Borys; Nilsson, Tobias; Elgered, Gunnar; Jarlemark, Per O. J.

2007-05-01

202

Error estimates and specification parameters for functional renormalization  

SciTech Connect

We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions by means of truncations depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.

Schnoerr, David [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Boettcher, Igor, E-mail: I.Boettcher@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]; Pawlowski, Jan M. [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt (Germany)]; Wetterich, Christof [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)]

2013-07-15

203

Assessing systematic errors in GOSAT CO2 retrievals by comparing assimilated fields to independent CO2 data  

NASA Astrophysics Data System (ADS)

Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (version 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. 
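The described bias correction, regressing retrieval-minus-model differences on physical variables and then subtracting the fit, can be sketched as follows; the predictors stand in for the variables named in the abstract (aerosol amount, surface albedo, column-mass correction), and all coefficients and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Hypothetical predictors standing in for the retrieval variables
aerosol = rng.uniform(0.0, 0.3, n)
albedo = rng.uniform(0.05, 0.6, n)
dpsurf = rng.normal(0.0, 2.0, n)

# Simulated retrieval-minus-model differences: predictor-driven bias + noise
diff = (0.4 + 3.0 * aerosol - 1.5 * albedo + 0.2 * dpsurf
        + rng.normal(0.0, 0.5, n))

# Fit the bias model by ordinary least squares ...
X = np.column_stack([np.ones(n), aerosol, albedo, dpsurf])
coef, *_ = np.linalg.lstsq(X, diff, rcond=None)

# ... and subtract the fitted bias from the retrievals
corrected = diff - X @ coef
print(coef)              # recovers roughly [0.4, 3.0, -1.5, 0.2]
print(corrected.std())   # close to the 0.5 noise level
```

The key property exploited in the paper is that the bias correlates with physical variables independent of CO2 concentration, so subtracting the fit removes systematic error without removing signal.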
With this assurance, we compare the flux estimates given by assimilating the ACOS GOSAT retrievals to similar ones given by NIES GOSAT column retrievals, bias-corrected in a similar manner. Finally, we have found systematic differences on the order of a half ppm between column CO2 integrals from 18 TCCON sites and those given by assimilating NOAA in situ data (both surface and aircraft profile) in this approach. We assess how these differences change in switching to a newer version of the TCCON retrieval software.

Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.

2012-12-01

204

Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics  

NASA Technical Reports Server (NTRS)

Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

2002-01-01

205

Discretization error estimation and exact solution generation using the method of nearby problems.  

SciTech Connect

The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
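For comparison, the Richardson extrapolation estimator that MNP is benchmarked against admits a very compact sketch. The function below is the generic textbook form, with the formal order `p` and refinement ratio `r` supplied as assumptions by the user:

```python
def richardson_error_estimate(f_fine, f_coarse, p=2, r=2.0):
    """Richardson-extrapolation estimate of the discretization error
    remaining in the fine-grid solution f_fine, given a solution
    f_coarse on a grid coarsened by ratio r, for a scheme of formal
    order p.  Returns an estimate of (exact - f_fine)."""
    return (f_fine - f_coarse) / (r ** p - 1.0)
```

Note the requirement the abstract points out: this needs two systematically refined grids, whereas MNP needs only one additional solve on the same grid.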

Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)

2011-10-01

206

Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation  

NASA Technical Reports Server (NTRS)

Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.

2013-01-01

207

Stress Recovery and Error Estimation for Shell Structures  

NASA Technical Reports Server (NTRS)

The Penalized Discrete Least-Squares (PDLS) stress recovery (smoothing) technique developed for two dimensional linear elliptic problems is adapted here to three-dimensional shell structures. The surfaces are restricted to those which have a 2-D parametric representation, or which can be built-up of such surfaces. The proposed strategy involves mapping the finite element results to the 2-D parametric space which describes the geometry, and smoothing is carried out in the parametric space using the PDLS-based Smoothing Element Analysis (SEA). Numerical results for two well-known shell problems are presented to illustrate the performance of SEA/PDLS for these problems. The recovered stresses are used in the Zienkiewicz-Zhu a posteriori error estimator. The estimated errors are used to demonstrate the performance of SEA-recovered stresses in automated adaptive mesh refinement of shell structures. The numerical results are encouraging. Further testing involving more complex, practical structures is necessary.

Yazdani, A. A.; Riggs, H. R.; Tessler, A.

2000-01-01

208

Divergent estimation error in portfolio optimization and in linear regression  

NASA Astrophysics Data System (ADS)

The problem of estimation error in portfolio optimization is discussed in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges at a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition; it is accompanied by a number of critical phenomena and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
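The divergence described above can be reproduced in a few lines of Monte Carlo. The sketch below (a hypothetical helper with an identity true covariance) estimates the global minimum-variance portfolio from a sample covariance and measures its out-of-sample risk relative to the true optimum; for this setup theory predicts an inflation factor of roughly 1/(1 - N/T), which blows up as N/T approaches 1:

```python
import numpy as np

def min_var_risk_inflation(N, T, seed=0):
    """Out-of-sample variance of the estimated global minimum-variance
    portfolio relative to the true optimum (1/N), for i.i.d. assets
    with identity true covariance.  Requires T > N so the sample
    covariance is invertible."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((T, N))        # T return observations
    S = X.T @ X / T                        # sample covariance
    w = np.linalg.solve(S, np.ones(N))     # w proportional to S^{-1} 1
    w /= w.sum()                           # budget constraint
    return (w @ w) / (1.0 / N)             # true covariance is I
```

Running this at N/T = 0.2 versus N/T near 0.8 shows the sharp growth of the estimation error as the critical ratio is approached.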

Kondor, I.; Varga-Haszonits, I.

2008-08-01

209

Sensitivity to Estimation Errors in Mean-variance Models  

Microsoft Academic Search

In order to give a comprehensive and accurate description of the sensitivity of efficient portfolios to changes in assets' expected returns, variances and covariances, the joint effect of estimation errors in means, variances and covariances on the efficient portfolio's weights is investigated in this paper. It is proved that the efficient portfolio's composition is a Lipschitz continuous, differentiable mapping

Zhi-ping Chen; Cai-e Zhao

2003-01-01

210

Temporal correlations of atmospheric mapping function errors in GPS estimation  

Microsoft Academic Search

The developments in global satellite navigation using GPS, GLONASS, and Galileo will yield more observations at various elevation angles. The inclusion of data acquired at low elevation angles allows for geometrically stronger solutions. The vertical coordinate estimate of a GPS site is one of the parameters affected by the elevation-dependent error sources, especially the atmospheric corrections, whose proper description becomes

Borys Stoew; Tobias Nilsson; Gunnar Elgered; Per O. J. Jarlemark

2007-01-01

211

Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons  

PubMed Central

In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
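For reference, the mean value coordinates discussed above have a short closed form (Floater's tangent formula). The sketch below is a straightforward implementation for points strictly inside a convex polygon, not the authors' analysis code:

```python
import numpy as np

def mean_value_coords(poly, x):
    """Mean value coordinates of a point x strictly inside a convex
    polygon (vertices in counter-clockwise order), via Floater's
    formula  w_i = (tan(a_{i-1}/2) + tan(a_i/2)) / |v_i - x|,
    where a_i is the angle subtended at x by edge (v_i, v_{i+1})."""
    v = np.asarray(poly, float) - np.asarray(x, float)
    r = np.linalg.norm(v, axis=1)
    n = len(v)
    alpha = np.empty(n)
    for i in range(n):
        j = (i + 1) % n
        cross = v[i, 0] * v[j, 1] - v[i, 1] * v[j, 0]
        alpha[i] = np.arctan2(cross, v[i] @ v[j])
    t = np.tan(alpha / 2.0)
    w = (np.roll(t, 1) + t) / r        # tan(a_{i-1}/2) + tan(a_i/2)
    return w / w.sum()
```

The resulting coordinates are positive inside a convex polygon, sum to one, and reproduce linear functions, which is exactly what the interpolation error analysis above relies on.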

Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

2012-01-01

212

A possible solution for the problem of estimating the error structure of global soil moisture data sets  

NASA Astrophysics Data System (ADS)

In the last few years, research has made significant progress towards operational soil moisture remote sensing, which has led to the availability of several global data sets. For an optimal use of these data, an accurate estimation of the error structure is an important condition. To address the validation problem we introduce the triple collocation error estimation technique. The triple collocation technique is a powerful tool to estimate the root mean square error while simultaneously solving for systematic differences in the climatologies of a set of three independent data sources. We evaluate the method by applying it to a passive microwave (TRMM radiometer) derived, an active microwave (ERS-2 scatterometer) derived and a modeled (ERA-Interim reanalysis) soil moisture data set. The results suggest that the method provides realistic error estimates.
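The covariance-notation form of triple collocation is compact enough to sketch. The version below assumes the three data sets have already been rescaled to a common climatology and that their errors are mutually independent and independent of the true signal; function and variable names are illustrative:

```python
import numpy as np

def triple_collocation(x, y, z):
    """Error variances of three collocated data sets with mutually
    independent errors, from pairwise covariances:
    var_x = C_xx - C_xy * C_xz / C_yz, and cyclic permutations."""
    C = np.cov(np.vstack([x, y, z]))
    var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    return var_x, var_y, var_z
```

Because the shared signal cancels in the cross-covariances, each error variance is recovered without any data set being treated as truth, which is the appeal of the method for global soil moisture products.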

Scipal, K.; Holmes, T.; de Jeu, R.; Naeimi, V.; Wagner, W.

2008-12-01

213

Analysis and reduction of tropical systematic errors through a unified modelling strategy  

NASA Astrophysics Data System (ADS)

Systematic errors in climate models are usually addressed in a number of ways, but current methods often make use of model climatological fields as a starting point for model modification. This approach has limitations due to non-linear feedback mechanisms which occur over longer timescales and make the source of the errors difficult to identify. In a unified modelling environment, short-range (1-5 day) weather forecasts are readily available from NWP models with very similar dynamical and physical formulations to the climate models, but often increased horizontal (and vertical) resolution. Where such forecasts exhibit similar systematic errors to their climate model counterparts, there is much to be gained from combined analysis and sensitivity testing. For example, the Met Office Hadley Centre climate model HadGEM1 (Johns et al 2007) exhibits precipitation errors in the Asian summer monsoon, with too little rainfall over the Indian peninsula and too much over the equatorial Indian Ocean to the southwest of the peninsula (Martin et al., 2004). Examination of the development of precipitation errors in the Asian summer monsoon region in Met Office NWP forecasts shows that different parts of the error pattern evolve on different timescales. Excessive rainfall over the equatorial Indian Ocean to the southwest of the Indian peninsula develops rapidly, over the first day or two of the forecast, while a dry bias over the Indian land area takes ~10 days to develop. Such information is invaluable for understanding the processes involved and how to tackle them. Other examples of the use of this approach will be discussed, including analysis of the sensitivity of the representation of the Madden-Julian Oscillation (MJO) to the convective parametrisation, and the reduction of systematic tropical temperature and moisture biases in both climate and NWP models through improved representation of convective detrainment.

Copsey, D.; Marshall, A.; Martin, G.; Milton, S.; Senior, C.; Sellar, A.; Shelly, A.

2009-04-01

214

Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors.  

PubMed

Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for sufficiently large ratios of the correlation distance to the Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
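A minimal sketch of the LMMSE estimator under the exponential (first-order Gauss-Markov) correlation model described above, with illustrative parameter names rather than the paper's actual configuration:

```python
import numpy as np

def lmmse_estimate(meas, pos, sigma_dc, d_corr, sigma_noise):
    """LMMSE (Wiener) estimate of zero-mean differential corrections
    from noisy measurements at reference-station positions `pos`,
    assuming exponential spatial correlation of the true DCs with
    correlation distance d_corr:  dc_hat = C (C + Cn)^{-1} meas."""
    pos = np.asarray(pos, float)
    D = np.abs(pos[:, None] - pos[None, :])        # pairwise distances
    C_dc = sigma_dc ** 2 * np.exp(-D / d_corr)     # signal covariance
    C_n = sigma_noise ** 2 * np.eye(len(pos))      # noise covariance
    gain = C_dc @ np.linalg.inv(C_dc + C_n)
    return gain @ np.asarray(meas, float)
```

When the assumed `d_corr` matches the truth, the estimate strictly reduces the mean square error relative to the raw measurements; the paper's point is that a badly mismatched correlation model can erase or reverse this gain.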

Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

2014-01-01

215

DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets  

SciTech Connect

Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
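The fit-and-subtract idea behind DtaRefinery can be illustrated with a much simplified stand-in: fit a low-order polynomial to the ppm mass error of confident identifications as a function of observed m/z and remove the trend. The real tool models error against several variables; this sketch uses only one, and the function name is hypothetical:

```python
import numpy as np

def correct_parent_masses(obs_mz, theo_mz):
    """Fit a quadratic to the ppm mass error of confidently identified
    peptides versus observed m/z and return the observed masses with
    that systematic trend removed (simplified stand-in for the
    multi-variable model used by DtaRefinery)."""
    obs = np.asarray(obs_mz, float)
    theo = np.asarray(theo_mz, float)
    ppm_err = (obs - theo) / theo * 1e6
    coef = np.polyfit(obs, ppm_err, 2)
    trend_ppm = np.polyval(coef, obs)
    return obs / (1.0 + trend_ppm * 1e-6)
```

After correction, the residual mass errors are centered on zero and only the random component remains, which is what allows the narrow parent-mass windows mentioned above.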

Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

2009-12-16

216

Gradient recovery type a posteriori error estimate for finite element approximation on non-uniform meshes  

Microsoft Academic Search

In this paper, we derive a gradient recovery type a posteriori error estimate for the finite element approximation of elliptic equations. We show that the a posteriori error estimate provides both upper and lower bounds for the discretization error on non-uniform meshes. Moreover, it is proved that the a posteriori error estimate is also asymptotically exact on uniform meshes if the

Liu Du; Ningning Yan

2001-01-01

217

Multiscale a posteriori error estimation and mesh adaptivity for reliable finite element analysis  

Microsoft Academic Search

The focus of this thesis is on reliable finite element simulations using mesh adaptivity based on a posteriori error estimation. The accuracy of the error estimator is a key step in controlling both the computational error and simulation time. The estimated errors guide the mesh adaptivity algorithm toward a quasi-optimal mesh that conforms with the solution specific features. The simulation

Ahmed H ElSheikh

2007-01-01

218

Systematic error sources in a measurement of G using a cryogenic torsion pendulum  

NASA Astrophysics Data System (ADS)

This dissertation attempts to explore and quantify systematic errors that arise in a measurement of G (the gravitational constant from Newton's Law of Gravitation) using a cryogenic torsion pendulum. It begins by exploring the techniques frequently used to measure G with a torsion pendulum, features of the particular method used at UC Irvine, and the motivations behind those features. It proceeds to describe the particular apparatus used in the UCI G measurement, and the formalism involved in a gravitational torsion pendulum experiment. It then describes and quantifies the systematic errors that have arisen, particularly those that arise from the torsion fiber and from the influence of ambient background gravitational, electrostatic, and magnetic fields. The dissertation concludes by presenting the value of G that the lab has reported.

Cross, William Daniel

219

Random and systematic error analysis in the complex permittivity measurements of high dielectric strength thermoplastics  

Microsoft Academic Search

This paper presents the complex dielectric permittivity and loss tangent measurements for a selection of advanced polymer-based thermoplastics in the Q-band, V-band and W-band frequencies and discusses in detail the random and systematic errors that arise in the experimental setup. These plastics are reported to have exceptional mechanical, thermal and electrical properties and are extensively used as electrical insulating materials,

Nahid Rahman; Ana I. Medina Ayala; Konstantin A. Korolev; Mohammed N. Afsar; Rudy Cheung; Maurice Aghion

2009-01-01

220

Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model  

NASA Technical Reports Server (NTRS)

This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
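The inverse-MSE weighting that makes an ensemble of independent, unbiased forecasts optimal is simple to state in code. This is the generic statistical result, not the memorandum's full spectral machinery:

```python
import numpy as np

def inverse_mse_weights(mse):
    """Optimal combination weights for independent, unbiased forecast
    members: weight each member by the inverse of its mean square
    error, normalized so the weights sum to one."""
    w = 1.0 / np.asarray(mse, float)
    return w / w.sum()
```

For member MSEs of 1 and 4 the weights are 0.8 and 0.2, and the combined error variance (the sum of w_i**2 * MSE_i for independent errors) drops to 0.8, below the best single member; this is why the ensemble weights "crucially depend" on each member's estimated MSE.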

Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

2001-01-01

221

Improved Soundings and Error Estimates using AIRS/AMSU Data  

NASA Technical Reports Server (NTRS)

AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

Susskind, Joel

2006-01-01

222

Systematic and random errors in ion affinities and activation entropies from the extended kinetic method.  

PubMed

An evaluation of the extended kinetic method with full entropy analysis was conducted using RRKM theory to simulate data for collision-induced dissociation under single-collision conditions. A rigorous method for analyzing kinetic method data, orthogonal distance regression, is introduced and compared with previous methods in the literature. The results demonstrate that the use of the extended kinetic method is definitely superior to the standard kinetic method, but final ion affinities and activation entropies differ intrinsically from the correct values. Considering the effects of both systematic and random error in Monte Carlo simulations of the full entropy analysis, error distributions of ±4 to ±12 kJ mol^-1 for ion affinities and of ±9 to ±30 J mol^-1 K^-1 for activation entropy differences are found (±2 standard deviations of the sample populations). The systematic errors in ion affinities are larger for systems with large activation entropy differences. These uncertainties do not include any error in the absolute calibration of the reference ion affinity scale. We argue that application of an empirical correction factor is inadvisable. PMID:15386748

Ervin, Kent M; Armentrout, P B

2004-09-01

223

Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors  

NASA Technical Reports Server (NTRS)

Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. 
Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections indicate that the random variations in the albedo follow a log-normal distribution quite well. In addition, this distribution appears to be independent of object size over a considerable range in size. Note that this relation appears to hold for debris only, where the shapes and other properties are not primarily the result of human manufacture, but of random processes. With this information in hand, it now becomes possible to estimate the actual size distribution we are sampling from. We have identified two characteristics of the space debris population that make this process tractable and by extension have developed a methodology for performing the transformation.
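The statistical inversion described above can be illustrated with a toy Monte Carlo: if the geometric albedo is log-normal, an observer who assumes the median albedo obtains individually scattered size estimates whose ensemble median still recovers the true size. All numbers below are illustrative, not the LMT calibration:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100000
true_diameter = 1.0                       # m, same for every object
p_median = 0.1                            # assumed median geometric albedo
albedo = p_median * rng.lognormal(0.0, 0.4, n)

# Reflected flux scales as albedo * d**2, so an observer assuming the
# median albedo infers  d_hat = true_diameter * sqrt(albedo / p_median)
d_hat = true_diameter * np.sqrt(albedo / p_median)
```

Any single `d_hat` may be off by tens of percent, yet the median of the ensemble equals `true_diameter`, which is the sense in which the size distribution of a debris population can be estimated statistically even when individual sizes cannot.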

Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

2007-01-01

224

Verification of unfold error estimates in the UFO code  

SciTech Connect

Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
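The Monte Carlo procedure described above generalizes readily. The sketch below uses a least-squares solve as a stand-in for the actual unfold algorithm; perturbing the data with Gaussian deviates of prescribed relative size and re-unfolding yields an empirical uncertainty on each unfolded component:

```python
import numpy as np

def monte_carlo_unfold_error(R, data, rel_err, n_trials=200, seed=0):
    """Monte Carlo unfold-uncertainty estimate in the style of UFO:
    perturb the data with Gaussian deviates of relative size rel_err,
    re-unfold each perturbed set (a least-squares solve stands in for
    the real unfold algorithm), and return the mean and spread of the
    unfolded solutions.  R is the (m, k) response matrix."""
    rng = np.random.default_rng(seed)
    sols = []
    for _ in range(n_trials):
        d = data * (1.0 + rel_err * rng.standard_normal(len(data)))
        s, *_ = np.linalg.lstsq(R, d, rcond=None)
        sols.append(s)
    sols = np.array(sols)
    return sols.mean(axis=0), sols.std(axis=0)
```

Because only forward solves are repeated, the same loop works for underdetermined unfolds where an analytic error matrix is unavailable, which is the advantage the abstract notes.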

Fehl, D.L.; Biggs, F.

1996-07-01

225

Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models  

NASA Astrophysics Data System (ADS)

Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentations of physical processes, which can be amplified by feedbacks among climate components especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention on the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and location of strong SST gradients. Next the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. 
The results suggest that improving the simulation of regional processes may not suffice for a more successful CGCM performance, as the effects of remote biases may override them. Therefore, efforts to reduce CGCM errors cannot be narrowly focused on particular regions.

Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin

2014-05-01

226

Erasing Errors due to Alignment Ambiguity When Estimating Positive Selection.  

PubMed

Current estimates of diversifying positive selection rely on first having an accurate multiple sequence alignment. Simulation studies have shown that under biologically plausible conditions, relying on a single estimate of the alignment from commonly used alignment software can lead to unacceptably high false-positive rates in detecting diversifying positive selection. We present a novel statistical method that eliminates excess false positives resulting from alignment error by jointly estimating the degree of positive selection and the alignment under an evolutionary model. Our model treats both substitutions and insertions/deletions as sequence changes on a tree and allows site heterogeneity in the substitution process. We conduct inference starting from unaligned sequence data by integrating over all alignments. This approach naturally accounts for ambiguous alignments without requiring ambiguously aligned sites to be identified and removed prior to analysis. We take a Bayesian approach and conduct inference using Markov chain Monte Carlo to integrate over all alignments on a fixed evolutionary tree topology. We introduce a Bayesian version of the branch-site test and assess the evidence for positive selection using Bayes factors. We compare two models of differing dimensionality using a simple alternative to reversible-jump methods. We also describe a more accurate method of estimating the Bayes factor using Rao-Blackwellization. We then show using simulated data that jointly estimating the alignment and the presence of positive selection solves the problem with excessive false positives from erroneous alignments and has nearly the same power to detect positive selection as when the true alignment is known. We also show that samples taken from the posterior alignment distribution using the software BAli-Phy have substantially lower alignment error compared with MUSCLE, MAFFT, PRANK, and FSA alignments. PMID:24866534

Redelings, Benjamin

2014-08-01

227

Charge equalizing and error estimation in position sensitive neutron detectors  

NASA Astrophysics Data System (ADS)

The conventional design of detector electronics for resistive-wire position sensitive neutron detectors (PSD), using off-the-shelf charge sensitive preamplifiers and Gaussian pulse shaping devices, causes uncertainty in spatial position and resolution due to charge equalization between both ends of the resistive PSD. This paper describes in detail charge equalization effects and their drawbacks for a PSD system design. Error estimates for a conventional design are given. The attained results should help to better understand PSD electronics design and provide data for system optimization.

Bönisch, Sven P.; Namaschk, Bernhard; Wulf, Friedrich

2007-01-01

228

Local error estimates for adaptive simulation of the reaction–diffusion master equation via operator splitting  

NASA Astrophysics Data System (ADS)

The efficiency of exact simulation methods for the reaction–diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the diffusive finite-state projection (DFSP) method, to incorporate temporal adaptivity.
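The step-doubling idea behind such local error estimates can be sketched on a linear toy problem u' = (A + B)u, with A and B standing in for the reaction and diffusion operators. Comparing one full first-order Lie-splitting step against two half steps gives an O(dt^2) local error indicator that an adaptive controller could use to select the timestep; the matrix exponential below assumes small, diagonalizable operators:

```python
import numpy as np

def expm(M):
    """Matrix exponential via eigendecomposition (adequate for the
    small, diagonalizable toy operators used here)."""
    lam, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)).real

def lie_step(u, A, B, dt):
    """One first-order Lie (sequential) splitting step for u' = (A+B)u:
    advance with A over dt, then with B over dt."""
    return expm(dt * B) @ (expm(dt * A) @ u)

def local_error_estimate(u, A, B, dt):
    """Step-doubling estimate of the local splitting error: compare
    one dt step against two dt/2 steps."""
    one = lie_step(u, A, B, dt)
    two = lie_step(lie_step(u, A, B, dt / 2), A, B, dt / 2)
    return np.linalg.norm(one - two)
```

Halving dt should shrink the estimate by roughly a factor of four, the signature of a second-order local error, and an adaptive scheme would grow or shrink dt to hold this estimate near a user tolerance.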

Hellander, Andreas; Lawson, Michael J.; Drawert, Brian; Petzold, Linda

2014-06-01

229

Estimating the error in simulation prediction over the design space  

SciTech Connect

This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of a validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation, and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.

Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)

2003-01-01

230

Are Low-order Covariance Estimates Useful in Error Analyses?  

NASA Astrophysics Data System (ADS)

Atmospheric trace gas inversions, using modeled atmospheric transport to infer surface sources and sinks from measured concentrations, are most commonly done using least-squares techniques that return not only an estimate of the state (the surface fluxes) but also the covariance matrix describing the uncertainty in that estimate. Besides allowing one to place error bars around the estimate, the covariance matrix may be used in simulation studies to learn what uncertainties would be expected from various hypothetical observing strategies. This error analysis capability is routinely used in designing instrumentation, measurement campaigns, and satellite observing strategies. For example, Rayner, et al (2002) examined the ability of satellite-based column-integrated CO2 measurements to constrain monthly-average CO2 fluxes for about 100 emission regions using this approach. Exact solutions for both state vector and covariance matrix become computationally infeasible, however, when the surface fluxes are solved at finer resolution (e.g., daily in time, under 500 km in space). It is precisely at these finer scales, however, that one would hope to be able to estimate fluxes using high-density satellite measurements. Non-exact estimation methods such as variational data assimilation or the ensemble Kalman filter could be used, but they achieve their computational savings by obtaining an only approximate state estimate and a low-order approximation of the true covariance. One would like to be able to use this covariance matrix to do the same sort of error analyses as are done with the full-rank covariance, but is it correct to do so? 
Here we compare uncertainties and `information content' derived from full-rank covariance matrices obtained from a direct, batch least squares inversion to those from the incomplete-rank covariance matrices given by a variational data assimilation approach solved with a variable metric minimization technique (the Broyden-Fletcher- Goldfarb-Shanno algorithm). Two cases are examined: a toy problem in which CO2 fluxes for 3 latitude bands are estimated for only 2 time steps per year, and for the monthly fluxes for 22 regions across 1988-2003 solved for in the TransCom3 interannual flux inversion of Baker, et al (2005). The usefulness of the uncertainty estimates will be assessed as a function of the number of minimization steps used in the variational approach; this will help determine whether they will also be useful in the high-resolution cases that we would most like to apply the non-exact methods to. Baker, D.F., et al., TransCom3 inversion intercomparison: Impact of transport model errors on the interannual variability of regional CO2 fluxes, 1988-2003, Glob. Biogeochem. Cycles, doi:10.1029/2004GB002439, 2005, in press. Rayner, P.J., R.M. Law, D.M. O'Brien, T.M. Butler, and A.C. Dilley, Global observations of the carbon budget, 3, Initial assessment of the impact of satellite orbit, scan geometry, and cloud on measuring CO2 from space, J. Geophys. Res., 107(D21), 4557, doi:10.1029/2001JD000618, 2002.
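The full-rank covariance from a direct, batch least-squares inversion can be illustrated with a toy linear-Gaussian problem; the transport operator, prior, and noise level below are invented for illustration and are not the TransCom3 setup described above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inversion y = H x + noise, with prior flux covariance P0 and
# observation error covariance R (sizes and values are illustrative).
n, m = 3, 10                       # 3 flux regions, 10 observations
H = rng.standard_normal((m, n))    # "transport" operator
P0 = np.eye(n)                     # prior flux covariance
R = 0.1 * np.eye(m)                # measurement error covariance

# Full-rank posterior covariance of the batch least-squares solution.
A = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(P0))

# Error bars are square roots of the diagonal; one summary of the
# "information content" is the entropy reduction 0.5 * log(det P0 / det A).
sigma = np.sqrt(np.diag(A))
info = 0.5 * np.log(np.linalg.det(P0) / np.linalg.det(A))
```

It is exactly this matrix A that becomes infeasible to form at fine resolution, motivating the low-rank approximations compared in the abstract.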

Baker, D. F.; Schimel, D.

2005-12-01

231

Error estimation for mixed summation-integral type operators  

NASA Astrophysics Data System (ADS)

Srivastava and Gupta [H.M. Srivastava, V. Gupta, A certain family of summation integral type operators, Math. Comput. Modelling 37 (2003) 1307-1315] proposed a certain family of summation integral type operators and estimated the rate of convergence for bounded variation functions. Very recently Gupta et al. [V. Gupta, R.N. Mohapatra, Z. Finta, On certain family of mixed summation-integral type operators, Math. Comput. Modelling, in press] defined the mixed summation integral type operators and obtained some direct results in simultaneous approximation. In the present paper, we consider the linear combinations of the mixed summation integral type operators and establish the error estimation for simultaneous approximation in terms of higher order modulus of continuity. To prove the main result we use the technique of linear approximating method viz. Steklov mean.

Gupta, Vijay

2006-01-01

232

Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation  

NASA Technical Reports Server (NTRS)

Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
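The equation-error idea, regressing a measured state derivative on states and controls by linear least squares, can be sketched with a scalar toy model; the dynamics and parameter values are illustrative and are not taken from the F-16 simulation used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# True model: xdot = a*x + b*u with a = -2, b = 0.5 (illustrative values).
t = np.linspace(0, 10, 500)
u = np.sin(t)
x = np.cos(t)                      # pretend measured state history
xdot = -2.0 * x + 0.5 * u
xdot_meas = xdot + 0.01 * rng.standard_normal(t.size)

# Equation error: linear least squares on the regressor matrix [x, u].
# Note that only the derivative is noisy here; noise on the regressors
# themselves is what biases equation-error estimates in practice.
X = np.column_stack([x, u])
theta, *_ = np.linalg.lstsq(X, xdot_meas, rcond=None)
a_hat, b_hat = theta
```

In real flight data the derivative itself must be obtained by differentiating noisy time series, which is one of the practical issues the paper examines.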

Morelli, Eugene A.

2006-01-01

233

Random and systematic measurement errors in acoustic impedance as determined by the transmission line method  

NASA Technical Reports Server (NTRS)

The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.
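A simple Monte Carlo stand-in for the error propagation described above: random error in the reflection coefficient is propagated into the normalized impedance Z/Z0 = (1 + G)/(1 - G). The 0.001 standard deviation is the value reported in the abstract; the nominal (real-valued) reflection coefficient is illustrative, since the full measurement involves a complex coefficient and minima positions:

```python
import numpy as np

rng = np.random.default_rng(4)

g_nom, sigma_g = 0.5, 0.001
g = rng.normal(g_nom, sigma_g, size=100_000)   # perturbed reflection coefficient
z = (1 + g) / (1 - g)                          # normalized impedance samples

z_nom = (1 + g_nom) / (1 - g_nom)
frac_std = z.std(ddof=1) / z_nom               # fractional standard deviation

# Linearized check: dZ/dG = 2 / (1 - G)^2, so
# frac_std ~= 2 * sigma_g / ((1 - g_nom)**2 * z_nom).
frac_std_lin = 2 * sigma_g / ((1 - g_nom) ** 2 * z_nom)
```

The close agreement between the sampled and linearized values mirrors the paper's finding that a simplified propagation technique reproduces the full random-process result at a fraction of the cost.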

Parrott, T. L.; Smith, C. D.

1977-01-01

234

The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors  

SciTech Connect

Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
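The gamma criterion, which combines absolute dose difference and distance to agreement, can be sketched in one dimension as follows. This is a simplified global form of the standard Low et al. formulation, not the vendor implementations used in the study:

```python
import numpy as np

def gamma_index(dose_eval, dose_ref, x, dose_tol, dist_tol):
    """1-D gamma index: for each reference point, the minimum over all
    evaluated points of the combined dose/distance metric.  A reference
    point passes the criterion when gamma <= 1."""
    g = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - di) / dose_tol   # dose-difference term
        dr = (x - xi) / dist_tol           # distance-to-agreement term
        g[i] = np.sqrt(dd**2 + dr**2).min()
    return g
```

For example, with a 3%/3 mm criterion, a dose profile shifted by 1 mm (a stand-in for a leaf bank offset) still passes everywhere, which illustrates why small systematic offsets are hard to catch with these criteria.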

Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada and Department of Physics and Astronomy, University of Calgary, 2500 University Drive North West, Calgary, Alberta T2N 1N4 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada) and Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)

2010-07-15

235

Distance-Dependent Filtering of Background Error Covariance Estimates in an Ensemble Kalman Filter  

Microsoft Academic Search

The usefulness of a distance-dependent reduction of background error covariance estimates in an ensemble Kalman filter is demonstrated. Covariances are reduced by performing an elementwise multiplication of the background error covariance matrix with a correlation function with local support. This reduces noisiness and results in an improved background error covariance estimate, which generates a reduced-error ensemble of model initial conditions.
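The elementwise (Schur) product localization can be sketched as follows; the grid size, ensemble size, and triangular-squared taper are illustrative stand-ins for the compactly supported correlation function (e.g. a Gaspari-Cohn polynomial) used in practice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Ensemble estimate of a background error covariance on a 1-D periodic
# grid, tapered by elementwise multiplication with a compactly supported
# correlation function.
n, n_ens = 40, 10
grid = np.arange(n)
ensemble = rng.standard_normal((n_ens, n))
P_raw = np.cov(ensemble, rowvar=False)           # noisy sample covariance

dist = np.abs(grid[:, None] - grid[None, :])
dist = np.minimum(dist, n - dist)                # periodic distance
L = 8.0
taper = np.clip(1.0 - dist / L, 0.0, None) ** 2  # zero beyond distance L
P_loc = P_raw * taper                            # Schur-product localization
```

The taper leaves the diagonal (the variances) untouched while zeroing the noisy long-range covariance estimates, which is the noise-reduction effect the abstract describes.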

Thomas M. Hamill; Jeffrey S. Whitaker; Chris Snyder

2001-01-01

236

A function space approach to smoothing with applications to model error estimation for flexible spacecraft control  

NASA Technical Reports Server (NTRS)

A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

Rodriguez, G.

1981-01-01

237

Models and error analyses in urban air quality estimation  

NASA Technical Reports Server (NTRS)

Estimation theory has been applied to a wide range of aerospace problems. Application of this expertise outside the aerospace field has been extremely limited, however. This paper describes the use of covariance error analysis techniques in evaluating the accuracy of pollution estimates obtained from a variety of concentration measuring devices. It is shown how existing software developed for aerospace applications can be applied to the estimation of pollution through the processing of measurement types involving a range of spatial and temporal responses. The modeling of pollutant concentration by meandering Gaussian plumes is described in some detail. Time averaged measurements are associated with a model of the average plume, using some of the same state parameters and thus avoiding the problem of state memory. The covariance analysis has been implemented using existing batch estimation software. This usually involves problems in handling dynamic noise; however, the white dynamic noise has been replaced by a band-limited process which can be easily accommodated by the software.

Englar, T., Jr.; Diamante, J. M.; Jazwinski, A. H.

1976-01-01

238

Local a Posteriori Error Estimation on Thin Domains of Linear Elasticity: Theory and an Example.  

National Technical Information Service (NTIS)

The authors discuss the problem of a posteriori error estimation in the FE-approximation of parameter dependent 'thin' problems of linear elasticity. They discuss a posteriori error estimation based directly on the energy and complementary energy (dual) p...

O. Ovaskainen

1997-01-01

239

Systematic Errors Associated with the CPMG Pulse Sequence and Their Effect on Motional Analysis of Biomolecules  

NASA Astrophysics Data System (ADS)

A theoretical approach to calculate the time evolution of magnetization during a CPMG pulse sequence of arbitrary parameter settings is developed and verified by experiment. The analysis reveals that off-resonance effects can cause systematic reductions in measured peak amplitudes that commonly lie in the range 5-25%, reaching 50% in unfavorable circumstances. These errors, which are finely dependent upon frequency offset and CPMG parameter settings, are subsequently transferred into erroneous T2 values obtained by curve fitting, where they are reduced or amplified depending upon the magnitude of the relaxation time. Subsequent transfer to Lipari-Szabo model analysis can produce significant errors in derived motional parameters, with τe internal correlation times being affected somewhat more than S2 order parameters. A hazard of this off-resonance phenomenon is its oscillatory nature, so that strongly affected and unaffected signals can be found at various frequencies within a CPMG spectrum. Methods for the reduction of the systematic error are discussed. Relaxation studies on biomolecules, especially at high field strengths, should take account of potential off-resonance contributions.

Ross, A.; Czisch, M.; King, G. C.

1997-02-01

240

Surface air temperature simulations by AMIP general circulation models: Volcanic and ENSO signals and systematic errors  

SciTech Connect

Thirty surface air temperature simulations for 1979–88 by 29 atmospheric general circulation models are analyzed and compared with the observations over land. These models were run as part of the Atmospheric Model Intercomparison Project (AMIP). Several simulations showed serious systematic errors, up to 4–5 °C, in globally averaged land air temperature. The 16 best simulations gave rather realistic reproductions of the mean climate and seasonal cycle of global land air temperature, with an average error of −0.9 °C for the 10-yr period. The general coldness of the model simulations is consistent with previous intercomparison studies. The regional systematic errors showed very large cold biases in areas with topography and permanent ice, which implies a common deficiency in the representation of snow-ice albedo in the diverse models. The SST and sea ice specification of climatology rather than observations at high latitudes for the first three years (1979–81) caused a noticeable drift in the neighboring land air temperature simulations, compared to the rest of the years (1982–88). Unsuccessful simulation of the extreme warm (1981) and cold (1984–85) periods implies that some variations are chaotic or unpredictable, produced by internal atmospheric dynamics and not forced by global SST patterns.

Mao, J.; Robock, A. [Univ. of Maryland, College Park, MD (United States). Dept. of Meteorology]

1998-07-01

241

An examination of the southern California field test for the systematic accumulation of the optical refraction error in geodetic leveling.  

USGS Publications Warehouse

Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicate any analysis of the test results. If the fewer than one-third of the sections that met less than second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction contaminated long-sight-length survey and the less contaminated short-sight-length survey. -Authors

Castle, R. O.; Brown, Jr, B. W.; Gilmore, T. D.; Mark, R. K.; Wilson, R. C.

1983-01-01

242

Estimating the coverage of mental health programmes: a systematic review  

PubMed Central

Background The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results We identified 15 735 unique records of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys.

De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram

2014-01-01

243

Pencil kernel correction and residual error estimation for quality-index-based dose calculations  

Microsoft Academic Search

Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent

Tufve Nyholm; Jörgen Olofsson; Anders Ahnesjö; Dietmar Georg; Mikael Karlsson

2006-01-01

244

Gradient recovery type a posteriori error estimates for finite element approximations on irregular meshes  

Microsoft Academic Search

In this paper, the gradient recovery type a posteriori error estimators for finite element approximations are proposed for irregular meshes. Both the global and the local a posteriori error estimates are derived. Moreover, it is shown that the a posteriori error estimates is asymptotically exact on where the mesh is regular enough and the exact solution is smooth.

Ningning Yan; Aihui Zhou

2001-01-01

245

Avoiding Bias in Estimates of the Effect of Data Error on Allocations of Public Funds  

Microsoft Academic Search

Commonly used techniques for estimating the effect of data error on fund allocations are biased because they fail to account for numerous sources of data error and for misspecification of the allocation formula itself. Alternative techniques for estimating the effect of data error are presented and applied to estimate the effect of census undercount on general revenue sharing allocations.

Bruce D. Spencer

1985-01-01

246

Study of systematic errors of bound-state parameters in SVZ sum rules  

SciTech Connect

We study systematic errors of the ground-state parameters obtained from Shifman-Vainshtein-Zakharov sum rules, making use of the harmonic-oscillator potential model as an example. In this case, one knows the exact solution for the polarization operator, which allows one to obtain both the OPE to any order and the parameters (masses and decay constants) of the bound states. We determine the parameters of the ground state making use of the standard procedures of the method of sum rules and compare the obtained results with the known exact values. We show that, in the situation when the continuum contribution to the polarization operator is not known and is modeled by an effective continuum, the method of sum rules does not allow one to control the systematic uncertainties of the extracted ground-state parameters.

Lucha, W.; Melikhov, D. I. [Austrian Academy of Sciences, Institute for High Energy Physics (Austria); Simula, S. [INFN (Italy)

2008-08-15

247

Systematic errors of bound-state parameters obtained with SVZ sum rules  

SciTech Connect

We study systematic errors of the ground-state parameters obtained by Shifman-Vainshtein-Zakharov (SVZ) sum rules, making use of the harmonic-oscillator potential model as an example. In this case, one knows the exact solution for the polarization operator, which allows one to obtain both the OPE to any order and the parameters (masses and decay constants) of the bound states. We determine the parameters of the ground state making use of the standard procedures of the method of sum rules, and compare the obtained results with the known exact values. We show that in the situation when the continuum contribution to the polarization operator is not known and is modelled by an effective continuum, the method of sum rules does not allow one to control the systematic uncertainties of the extracted ground-state parameters.

Lucha, Wolfgang [HEPHY, Austrian Academy of Sciences, Nikolsdorfergasse 18, A-1050 Vienna (Austria); Melikhov, Dmitri [HEPHY, Austrian Academy of Sciences, Nikolsdorfergasse 18, A-1050 Vienna (Austria); Institute of Nuclear Physics, Moscow State University, 119992, Moscow (Russian Federation); Simula, Silvano [INFN, Sezione di Roma 3, Via della Vasca Navale 84, I-00146, Rome (Italy)

2007-11-19

248

Determination of Sound Power Level and Systematic Errors. Swedish Council for Work Life Research 1997-1612.  

National Technical Information Service (NTIS)

Some of the basic international standards for determination of sound power levels contain known systematic errors which are not dealt with. In particular, the standards which base the determination of sound power level on sound pressure level measurements...

H. G. Jonasson

1998-01-01

249

Nonlocal treatment of systematic errors in the processing of sparse and incomplete sensor data  

SciTech Connect

A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse and incomplete sensor data. We present a detailed application of this methodology to the construction of navigation maps from wide-angle sonar sensor data acquired by the HERMIES IIB mobile robot. Our uncertainty approach is explicitly nonlocal. We use a binary labelling scheme and a simple logic for the rule of combination. We then correct erroneous interpretations of the data by analyzing pixel patterns of conflict and by imposing consistent labelling conditions. 9 refs., 6 figs.

Beckerman, M.; Oblow, E.M.

1988-03-01

250

Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1: Analysis of the Systematic Error Sources  

NASA Technical Reports Server (NTRS)

Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

1999-01-01

251

Close-range radar rainfall estimation and error analysis  

NASA Astrophysics Data System (ADS)

It is well-known that quantitative precipitation estimation (QPE) is affected by many sources of error. The most important of these are 1) radar calibration, 2) wet radome attenuation, 3) rain attenuation, 4) vertical profile of reflectivity, 5) variations in drop size distribution, and 6) sampling effects. The study presented here is an attempt to separate and quantify these sources of error. For this purpose, QPE is performed very close to the radar (~1-2 km) so that 3), 4), and 6) will only play a minor role. Error source 5) can be corrected for because of the availability of two disdrometers (instruments that measure the drop size distribution). A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm in De Bilt, The Netherlands is analyzed. Radar, rain gauge, and disdrometer data from De Bilt are used for this. It is clear from the analyses that without any corrections, the radar severely underestimates the total rain amount (only 25 mm). To investigate the effect of wet radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation up to ~4 dB. The calibration of the radar is checked by looking at received power from the sun. This turns out to cause another 1 dB of underestimation. The effect of variability of drop size distributions is shown to cause further underestimation. Correcting for all of these effects yields a good match between radar QPE and gauge measurements.
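How dB-scale corrections of the kind quantified above combine before conversion to rain rate can be sketched as follows. The correction magnitudes (roughly 4 dB for wet radome attenuation and 1 dB for calibration) are the values quoted in the abstract; the Marshall-Palmer Z-R coefficients (Z = 200 R^1.6) are a common default and not necessarily those used in the study:

```python
import numpy as np

def rain_rate(dbz, corrections_db=(4.0, 1.0), a=200.0, b=1.6):
    # dB-scale corrections simply add before conversion.
    dbz_corr = dbz + sum(corrections_db)
    z = 10.0 ** (dbz_corr / 10.0)        # dBZ -> linear reflectivity [mm^6/m^3]
    return (z / a) ** (1.0 / b)          # invert Z = a * R^b to get R [mm/h]
```

Because the relation is strongly nonlinear, a 5 dB total underestimation of reflectivity translates into a factor of about two in rain rate, consistent with the severe underestimation reported before corrections.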

van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

2012-04-01

252

Padé error estimates for the logarithm of a matrix  

Microsoft Academic Search

The error of Padé approximations to the logarithm of a matrix and related hypergeometric functions is analysed. By obtaining an exact error expansion with positive coefficients, it is shown that the error in the matrix approximation at X is always less than the scalar approximation error at x, when ‖X‖ < x. A more detailed analysis, involving the interlacing properties

CHARLES KENNEY; ALAN J. LAUB

1989-01-01

253

Synoptic scale forecast skill and systematic errors in the MASS 2.0 model. [Mesoscale Atmospheric Simulation System  

NASA Technical Reports Server (NTRS)

The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.

Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.

1985-01-01

254

Using image area to control CCD systematic errors in spaceborne photometric and astrometric time-series measurements  

NASA Technical Reports Server (NTRS)

The effect of some systematic errors for high-precision time-series spaceborne photometry and astrometry has been investigated with a CCD as the detector. The 'pixelization' of the images causes systematic error in astrometric measurements. It is shown that this pixelization noise scales as r^(-3/2) with image radius r. Subpixel response gradients, not correctable by the 'flat field', in conjunction with telescope pointing jitter, introduce further photometric and astrometric errors. Subpixel gradients are modeled using observed properties of real flat fields. These errors can be controlled by having an image span enough pixels. Large images are also favored by CCD dynamic range considerations. However, magnified stellar images can overlap, thus introducing another source of systematic error. An optimum image size is therefore a compromise between these competing factors.

Buffington, Andrew; Booth, Corwin H.; Hudson, Hugh S.

1991-01-01

255

A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation  

NASA Technical Reports Server (NTRS)

A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.

Simon, Donald L.; Garg, Sanjay

2009-01-01

256

CTER-rapid estimation of CTF parameters with error assessment.  

PubMed

In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance for both initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the necessity for manual screening of the micrographs by a user it is necessary to provide an assessment of the errors of fitted parameters values. In this work we introduce CTER, a CTF parameters estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. PMID:24562077
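The bootstrap step, resampling the data to attach a standard deviation to a fitted parameter, can be sketched as follows. Here the "parameter" is simply the mean of a noisy sample; the actual CTF fit performed by CTER is outside the scope of this sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "measurements" with a known spread (illustrative values).
data = rng.normal(loc=2.0, scale=0.5, size=200)

# Bootstrap: resample with replacement, refit, and take the spread of
# the refitted values as the standard error of the estimate.
boot = np.array([
    rng.choice(data, size=data.size, replace=True).mean()
    for _ in range(1000)
])
estimate, std_err = data.mean(), boot.std(ddof=1)
```

A micrograph whose bootstrap standard deviation is anomalously large can then be flagged automatically, which is the screening use the abstract describes.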

Penczek, Pawel A; Fang, Jia; Li, Xueming; Cheng, Yifan; Loerke, Justus; Spahn, Christian M T

2014-05-01

257

Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance  

Microsoft Academic Search

Calculation of the exact prediction error variance-covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively, Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if

John M Hickey; Roel F Veerkamp; Mario PL Calus; Han A Mulder; Robin Thompson

2009-01-01

258

Model Error Estimation for the CPTEC Eta Model  

NASA Technical Reports Server (NTRS)

Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
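As a rough numerical illustration of the Talagrand ratio idea (our own toy decomposition, assuming uncorrelated error sources rather than either of the paper's two formal definitions):

```python
import numpy as np

# tau = fraction of total forecast error variance attributable to model
# error rather than initial-condition error, here simulated directly.
rng = np.random.default_rng(0)
ic_error = rng.normal(0.0, 1.0, 10_000)     # initial-condition error component
model_error = rng.normal(0.0, 0.5, 10_000)  # model error component
forecast_error = ic_error + model_error     # total forecast error

tau = np.var(model_error) / np.var(forecast_error)
```

With these (made-up) variances tau comes out near 0.2; the paper's contribution is estimating such a ratio when only the total forecast error at two lead-times is observable.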

Tippett, Michael K.; daSilva, Arlindo

1999-01-01

259

Anisotropic discretization and model-error estimation in solid mechanics by local Neumann problems  

Microsoft Academic Search

First, a survey of existing residuum-based error-estimators and error-indicators is given. Generally, residual error estimators (which have at least upper bound in contrast to indicators) can be locally computed from residua of equilibrium and stress-jumps at element interfaces using Dirichlet or Neumann conditions for element patches or individual elements (REM). Another equivalent method for error estimation can be derived from

E. Stein; S. Ohnimus

1999-01-01

260

Interventions to reduce medication errors in adult intensive care: a systematic review  

PubMed Central

Critically ill patients need life-saving treatments and are often exposed to medications requiring careful titration. The aim of this paper was to review systematically the research literature on the efficacy of interventions in reducing medication errors in intensive care. A search was conducted of PubMed, CINAHL, EMBASE, Journals@Ovid, International Pharmaceutical Abstract Series via Ovid, ScienceDirect, Scopus, Web of Science, PsycInfo and The Cochrane Collaboration from inception to October 2011. Research studies involving delivery of an intervention in intensive care for adult patients with the aim of reducing medication errors were examined. Eight types of interventions were identified: computerized physician order entry (CPOE), changes in work schedules (CWS), intravenous systems (IS), modes of education (ME), medication reconciliation (MR), pharmacist involvement (PI), protocols and guidelines (PG) and support systems for clinical decision making (SSCD). Sixteen out of the 24 studies showed reduced medication error rates. Four intervention types demonstrated reduced medication errors post-intervention: CWS, ME, MR and PG. It is not possible to promote any interventions as positive models for reducing medication errors. Insufficient research was undertaken with any particular type of intervention, and there were concerns regarding the level of evidence and quality of research. Most studies involved single-arm, before-and-after designs without a comparative control group. Future researchers should address gaps identified in single-faceted interventions and gather data on multi-faceted interventions using high-quality research designs. The findings demonstrate implications for policy makers and clinicians in adopting resource-intensive processes and technologies, which offer little evidence to support their efficacy.

Manias, Elizabeth; Williams, Allison; Liew, Danny

2012-01-01

261

Kriging regression of PIV data using a local error estimate  

NASA Astrophysics Data System (ADS)

The objective of the method described in this work is to provide an improved reconstruction of an original flow field from experimental velocity data obtained with particle image velocimetry (PIV) technique, by incorporating the local accuracy of the PIV data. The postprocessing method we propose is Kriging regression using a local error estimate (Kriging LE). In Kriging LE, each velocity vector must be accompanied by an estimated measurement uncertainty. The performance of Kriging LE is first tested on synthetically generated PIV images of a two-dimensional flow of four counter-rotating vortices with various seeding and illumination conditions. Kriging LE is found to increase the accuracy of interpolation to a finer grid dramatically at severe reflection and low seeding conditions. We subsequently apply Kriging LE for spatial regression of stereo-PIV data to reconstruct the three-dimensional wake of a flapping-wing micro air vehicle. By qualitatively comparing the large-scale vortical structures, we show that Kriging LE performs better than cubic spline interpolation. By quantitatively comparing the interpolated vorticity to unused measurement data at intermediate planes, we show that Kriging LE outperforms conventional Kriging as well as cubic spline interpolation.
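The core of Kriging LE, weighting each vector by its local measurement uncertainty, can be sketched as Gaussian-process regression with a per-sample noise variance on the covariance diagonal. This is a minimal 1-D stand-in, not the authors' implementation; the kernel, length-scale, and the synthetic "flow" below are all assumptions.

```python
import numpy as np

def kriging_le(x_train, y_train, noise_var, x_query, length=0.15):
    """1-D Kriging with a local error estimate: each training sample's own
    noise variance is added to the covariance diagonal, so noisy vectors
    are weighted down in the regression."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + np.diag(noise_var)  # local uncertainty enters here
    w = np.linalg.solve(K, y_train)
    return k(x_query, x_train) @ w

x = np.linspace(0.0, 1.0, 20)
y_true = np.sin(2.0 * np.pi * x)
noise_var = np.where(x > 0.5, 0.3, 0.01)          # half the "field" is noisy
rng = np.random.default_rng(2)
y = y_true + rng.normal(0.0, np.sqrt(noise_var))
y_hat = kriging_le(x, y, noise_var, x)            # regression back onto the grid
```

Passing a uniform noise variance instead recovers conventional Kriging, which is the comparison the paper makes.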

de Baar, Jouke H. S.; Percin, Mustafa; Dwight, Richard P.; van Oudheusden, Bas W.; Bijl, Hester

2014-01-01

262

Apparent anomalies in nuclear Feulgen-DNA contents. Role of systematic microdensitometric errors.

PubMed

The Feulgen-DNA contents of human leukocytes, sperm, and oral squames were investigated by scanning and integrating microdensitometry, both with and without correction for residual distribution error and glare. Maximally stained sperm had absorbances which at λmax exceeded the measuring range of the Vickers M86 microdensitometer; this potential source of error could be avoided either by using shorter hydrolysis times or by measuring at an off-peak wavelength. Small but statistically significant apparent differences between leukocyte types were found in uncorrected but not fully corrected measurements, and some apparent differences disappeared when only one of the residual instrumental errors was eliminated. In uncorrected measurements, the apparent Feulgen-DNA content of maximally stained polymorphs measured at λmax was significantly lower than that of squames, while in all experimental series uncorrected measurements showed apparent diploid:haploid ratios significantly greater than two. In fully corrected measurements no significant differences were found between leukocytes and squames, and in four independent estimations the lowest diploid:haploid ratio found was 1.99 +/- 0.05, and the highest 2.03 +/- 0.05. Discrepancies found in uncorrected measurements could be correlated with morphology of the nuclei concerned. Glare particularly affected measurements of relatively compact nuclei such as those of sperm, polymorphs and lymphocytes, while residual distribution error was especially marked with nuclei having a high perimeter:area ratio (e.g. sperm and polymorphs). Uncorrected instrumental errors, especially residual distribution error and glare, probably account for at least some of the previously reported apparent differences between the Feulgen-DNA contents of different cell types. 
On the basis of our experimental evidence, and a consideration of the published work of others, it appears that, within the rather narrow limits of random experimental error, there is little or no reason to postulate either genuine differences in the amounts of DNA present in the cells studied, or nonstoichiometry of a correctly performed Feulgen reaction. PMID:61968

Bedi, K S; Goldstein, D J

1976-10-01

263

Potential Predictability of Summer Mean Precipitation in a Dynamical Seasonal Prediction System with Systematic Error Correction.  

NASA Astrophysics Data System (ADS)

Potential predictability of summer mean precipitation over the globe is investigated using data obtained from seasonal prediction experiments for 21 yr from 1979 to 1999 using the Korea Meteorological Administration Seoul National University (KMA SNU) seasonal prediction system. This experiment is a part of the Climate Variability and Predictability Program (CLIVAR) Seasonal Model Intercomparison Project II (SMIP II). The observed SSTs are used for the external boundary condition of the model integration; thus, the present study assesses the upper limit of predictability of the seasonal prediction system. The analysis shows that the tropical precipitation is largely controlled by the given SST condition and is thus predictable, particularly in the ENSO region. But the extratropical precipitation is less predictable due to the large contribution of the internal atmospheric processes to the seasonal mean. The systematic error of the ensemble mean prediction is particularly large in the subtropical western Pacific, where the air-sea interaction is active and thus the two-tier approach of the present prediction experiment is not appropriate for correct predictions in the region. The statistical postprocessing method based on singular value decomposition corrects a large part of the systematic errors over the globe. In particular, about two-thirds of the total errors in the western Pacific are corrected by the postprocessing method. As a result, the potential predictability of the summer-mean precipitation is greatly enhanced over most of the globe by the statistical correction method; the 21-yr-averaged pattern-correlation value between the predictions and their observed counterparts is changed from 0.31 before the correction to 0.48 after the correction for the global domain and from 0.04 before the correction to 0.26 after the correction for the Asian monsoon and the western Pacific region.
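The pattern-correlation skill score quoted above is a centered spatial correlation between predicted and observed anomaly fields. A minimal sketch on synthetic fields (only the metric is taken from the abstract; the fields and the effect of "correction" are made up):

```python
import numpy as np

def pattern_correlation(pred, obs):
    """Centered spatial (pattern) correlation between a predicted and an
    observed anomaly field."""
    p = pred - pred.mean()
    o = obs - obs.mean()
    return float((p * o).sum() / np.sqrt((p ** 2).sum() * (o ** 2).sum()))

# hypothetical precipitation anomaly fields on a small lat-lon grid
rng = np.random.default_rng(2)
obs = rng.standard_normal((20, 30))
pred_raw = 0.4 * obs + rng.standard_normal((20, 30))         # before correction
pred_corr = 0.8 * obs + 0.6 * rng.standard_normal((20, 30))  # after correction
r_raw = pattern_correlation(pred_raw, obs)
r_corr = pattern_correlation(pred_corr, obs)
```

Averaging such per-year values over 1979-1999 gives numbers comparable to the 0.31 → 0.48 improvement reported in the abstract.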

Kang, In-Sik; Lee, June-Yi; Park, Chung-Kyu

2004-02-01

264

Effects of errors-in-variables on weighted least squares estimation  

NASA Astrophysics Data System (ADS)

Although total least squares (TLS) is more rigorous than the weighted least squares (LS) method to estimate the parameters in an errors-in-variables (EIV) model, it is computationally much more complicated than the weighted LS method. For some EIV problems, the TLS and weighted LS methods have been shown to produce practically negligible differences in the estimated parameters. To understand under what conditions we can safely use the usual weighted LS method, we systematically investigate the effects of the random errors of the design matrix on weighted LS adjustment. We derive the effects of EIV on the estimated quantities of geodetic interest, in particular, the model parameters, the variance-covariance matrix of the estimated parameters and the variance of unit weight. By simplifying our bias formulae, we can readily show that the corresponding statistical results obtained by Hodges and Moore (Appl Stat 21:185-195, 1972) and Davies and Hutton (Biometrika 62:383-391, 1975) are actually the special cases of our study. The theoretical analysis of bias has shown that the effect of random matrix on adjustment depends on the design matrix itself, the variance-covariance matrix of its elements and the model parameters. Using the derived formulae of bias, we can remove the effect of the random matrix from the weighted LS estimate and accordingly obtain the bias-corrected weighted LS estimate for the EIV model. We derive the bias of the weighted LS estimate of the variance of unit weight. The random errors of the design matrix can significantly affect the weighted LS estimate of the variance of unit weight. The theoretical analysis successfully explains all the anomalously large estimates of the variance of unit weight reported in the geodetic literature. We propose bias-corrected estimates for the variance of unit weight. 
Finally, we analyze two examples of coordinate transformation and climate change, which have shown that the bias-corrected weighted LS method can perform numerically as well as the weighted TLS method.
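A Monte Carlo sketch of the EIV effect the paper analyzes: with random error in the design matrix, naive LS is attenuated toward zero, and a classical moment-based correction (assuming the error variance is known) removes the bias. This is the textbook attenuation correction for a one-parameter regression through the origin, shown for illustration; it is not the paper's bias formulae.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta_true, sigma_a = 2000, 2.0, 0.3
a = rng.uniform(0.0, 1.0, n)                  # error-free regressor
y = beta_true * a + rng.normal(0.0, 0.05, n)  # observations
a_obs = a + rng.normal(0.0, sigma_a, n)       # design matrix observed with error

beta_ls = (a_obs @ y) / (a_obs @ a_obs)       # naive LS: biased toward zero
# moment-based bias correction, assuming sigma_a is known
beta_corr = (a_obs @ y) / (a_obs @ a_obs - n * sigma_a ** 2)
```

The naive estimate lands well below the true slope of 2, while the corrected one recovers it, mirroring the paper's point that the weighted LS estimate can be de-biased without resorting to full TLS.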

Xu, Peiliang; Liu, Jingnan; Zeng, Wenxian; Shen, Yunzhong

2014-04-01

265

A Posteriori Error Estimation for a Nodal Method in Neutron Transport Calculations  

SciTech Connect

An a posteriori error analysis of the spatial approximation is developed for the one-dimensional Arbitrarily High Order Transport-Nodal method. The error estimator preserves the order of convergence of the method when the mesh size tends to zero with respect to the L² norm. It is based on the difference between two discrete solutions that are available from the analysis. The proposed estimator is decomposed into error indicators to allow the quantification of local errors. Some test problems with isotropic scattering are solved to compare the behavior of the true error to that of the estimated error.

Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.

1999-11-03

266

Global Error Estimation with One-Step Methods.  

National Technical Information Service (NTIS)

The global, or true, error made by one-step methods when solving the initial value problem for a system of ordinary differential equations is studied. Some methods for approximating this global error are based on an asymptotic global error expansion. A ne...

L. F. Shampine

1985-01-01

267

Application of statistical models to decomposition of systematic and random error in low-voltage SEM metrology  

NASA Astrophysics Data System (ADS)

Site-to-site LVSEM measurement data on insulating samples are affected in a systematic way by the number of measurements per site. The problem stems from the fact that repeated imaging at the same site does not produce true statistical replicates since the electron dose is cumulative. Indeed, the measurement values tend to grow or shrink in direct proportion to the total dose applied. The data support a model for linewidth as a function of electron dose that includes a linear term for systematic error and a reciprocal square root term as a scaling parameter for random error. We show that charging samples such as resist on oxide, where measurements are dominated by site-to-site variation in the systematic error, should be measured at low electron dose. Conversely, conducting samples such as polysilicon on oxide, where the measurements are dominated by random error, should be measured at relatively high electron dose.
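The dose model described above, a linear-in-dose systematic term plus a random term whose spread scales as the reciprocal square root of dose, can be sketched on synthetic data; all constants below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
doses = np.array([1.0, 2.0, 4.0, 8.0])       # relative electron doses
dose = np.repeat(doses, 200)                 # 200 simulated sites per dose
w0, a, b = 100.0, 0.5, 2.0                   # hypothetical model constants (nm units)
# linewidth = baseline + systematic charging term + dose-dependent random term
width = w0 + a * dose + rng.normal(0.0, b / np.sqrt(dose))

slope, intercept = np.polyfit(dose, width, 1)  # systematic, linear-in-dose part
resid = width - (slope * dose + intercept)
sd_by_dose = np.array([resid[dose == d].std(ddof=1) for d in doses])
```

The fitted slope recovers the systematic term, while the residual spread shrinks with dose, which is exactly the trade-off behind the abstract's low-dose versus high-dose recommendation.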

Monahan, Kevin M.; Khalessi, Sadri

1992-06-01

268

An error analysis of the sensorless position estimation for BLDC motors  

Microsoft Academic Search

This paper presents a novel sensorless operation technique for brushless DC (BLDC) motors and a new approach to error analysis for the sensorless BLDC motor drive. The analysis breaks down the estimated position error into its fundamental elements in the BLDC motor drive system. The position estimation error decreases the motor efficiency and produces torque pulsation. In this paper, the

Tae-Hyung Kim; Mehrdad Ehsani

2003-01-01

269

A POSTERIORI ERROR ESTIMATES FOR THE FINITE ELEMENT APPROXIMATION OF EIGENVALUE PROBLEMS  

Microsoft Academic Search

This paper deals with a posteriori error estimators for the linear finite element approximation of second order elliptic eigenvalue problems in two or three dimensions. First, we give a simple proof of the equivalence, up to higher order terms, between the error and a residual type error estimator. Second, we prove that the volumetric part of the residual is

CLAUDIO PADRA; RODOLFO RODRIGUEZ

270

A posteriori finite element error estimation for second-order hyperbolic problems  

Microsoft Academic Search

We develop a posteriori finite element discretization error estimates for the wave equation. In one dimension, we show that the significant part of the spatial finite element error is proportional to a Lobatto polynomial and an error estimate is obtained by solving a set of either local elliptic or hyperbolic problems. In two dimensions, we show that the dichotomy principle

Slimane Adjerid

2002-01-01

271

Relative efficiency of three estimators in a polynomial regression with measurement errors  

Microsoft Academic Search

In a polynomial regression with measurement errors in the covariate, the latter being supposed to be normally distributed, one has (at least) three ways to estimate the unknown regression parameters: one can apply ordinary least squares (OLS) to the model without regard to the measurement error or one can correct for the measurement error, either by correcting the estimating equation

Alexander Kukush; Hans Schneeweiss; Roland Wolf

2005-01-01

272

Parameter Estimation In Ensemble Data Assimilation To Characterize Model Errors In Surface-Layer Schemes Over Complex Terrain  

NASA Astrophysics Data System (ADS)

Numerical weather prediction (NWP) models have deficiencies in surface and boundary layer parameterizations, which may be particularly acute over complex terrain. Structural and physical model deficiencies are often poorly understood, and can be difficult to identify. Uncertain model parameters can lead to one class of model deficiencies when they are mis-specified. Augmenting the model state variables with parameters, data assimilation can be used to estimate the parameter distributions as long as the forecasts for observed variables are linearly dependent on the parameters. Reduced forecast (background) error shows that the parameter is accounting for some component of model error. Ensemble data assimilation has the favorable characteristic of providing ensemble-mean parameter estimates, eliminating some noise in the estimates when additional constraints on the error dynamics are unknown. This study focuses on coupling the Weather Research and Forecasting (WRF) NWP model with the Data Assimilation Research Testbed (DART) to estimate the Zilitinkevich parameter (CZIL). CZIL controls the thermal 'roughness length' for a given momentum roughness, thereby controlling heat and moisture fluxes through the surface layer by specifying the (unobservable) aerodynamic surface temperature. Month-long data assimilation experiments with 96 ensemble members, and grid spacing down to 3.3 km, provide a data set for interpreting parametric model errors in complex terrain. Experiments are during fall 2012 over the western U.S., and radiosonde, aircraft, satellite wind, surface, and mesonet observations are assimilated every 3 hours. One ensemble has a globally constant value of CZIL=0.1 (the WRF default value), while a second ensemble allows CZIL to vary over the range [0.01, 0.99], with distributions updated via the assimilation. Results show that the CZIL estimates do vary in time and space. 
Most often, forecasts are more skillful with the updated parameter values, compared to the fixed default values, suggesting that the parameters account for some systematic errors. Because the parameters can account for multiple sources of errors, the importance of terrain in determining surface-layer errors can be deduced from parameter estimates in complex terrain; parameter estimates with spatial scales similar to the terrain indicate that terrain is responsible for surface-layer model errors. We will also comment on whether residual errors in the state estimates and predictions appear to suggest further parametric model error, or some other source of error that may arise from incorrect similarity functions in the surface-layer schemes.

Hacker, Joshua; Lee, Jared; Lei, Lili

2014-05-01

273

Measurement Error Webinar Series: Estimating usual intake distributions for multivariate dietary variables  

Cancer.gov

Identify challenges in addressing measurement error when modeling multivariate dietary variables such as diet quality indices. Describe statistical modeling techniques to correct for measurement error in estimating multivariate dietary variables.

274

Standard Error of Empirical Bayes Estimate in NONMEM(R) VI  

PubMed Central

The output of the pharmacokinetic/pharmacodynamic analysis software NONMEM® provides model parameter estimates and associated standard errors. However, the standard error of empirical Bayes estimates of inter-subject variability is not available. A simple and direct method for estimating the standard error of the empirical Bayes estimates of inter-subject variability using the NONMEM® VI internal matrix POSTV is developed and applied to several pharmacokinetic models using intensively or sparsely sampled data for demonstration and to evaluate performance. The computed standard error is in general similar to the results from other post-processing methods and the degree of difference, if any, depends on the employed estimation options.

Kang, Dongwoo; Houk, Brett E.; Savic, Radojka M.; Karlsson, Mats O.

2012-01-01

275

Uncertainty quantification in a chemical system using error estimate-based mesh adaption  

Microsoft Academic Search

This paper describes a rigorous a posteriori error analysis for the stochastic solution of non-linear uncertain chemical models. The dual-based a posteriori stochastic error analysis extends the methodology developed in the deterministic finite elements context to stochastic discretization frameworks. It requires the resolution of two additional (dual) problems to yield the local error estimate. The stochastic error estimate can then

Lionel Mathelin; Olivier P. Le Maître

2010-01-01

276

Estimate of higher order ionospheric errors in GNSS positioning  

Microsoft Academic Search

Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors such as the second and third order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be

M. Mainul Hoque; N. Jakowski

2008-01-01

277

Evaluating concentration estimation errors in ELISA microarray experiments  

SciTech Connect

Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
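Propagation of error through a standard curve, the central tool of the method, can be sketched with a toy linear curve (first-order delta method with independent parameter errors assumed; the numbers below are made up and this is not the paper's actual curve model).

```python
import numpy as np

m, b = 2.0, 0.1                            # fitted standard-curve slope/intercept (toy values)
var_s, var_m, var_b = 0.01, 0.004, 0.0025  # assumed variances of signal, slope, intercept

def predict_conc(s):
    """Invert the linear standard curve s = m*c + b for concentration c."""
    return (s - b) / m

def conc_variance(s):
    """First-order propagation of error, treating s, m and b as independent."""
    c = predict_conc(s)
    dc_ds, dc_dm, dc_db = 1.0 / m, -c / m, -1.0 / m  # partial derivatives of c
    return dc_ds ** 2 * var_s + dc_dm ** 2 * var_m + dc_db ** 2 * var_b

c_hat = predict_conc(1.1)
c_sd = conc_variance(1.1) ** 0.5
```

A real ELISA standard curve is nonlinear (e.g. four-parameter logistic), but the propagation step is the same: differentiate the inverse curve and combine the parameter variances.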

Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.

2005-01-26

278

mBEEF: an accurate semi-local Bayesian error estimation density functional.  

PubMed

We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations. PMID:24735288

Wellendorff, Jess; Lundgaard, Keld T; Jacobsen, Karsten W; Bligaard, Thomas

2014-04-14

279

mBEEF: An accurate semi-local Bayesian error estimation density functional  

NASA Astrophysics Data System (ADS)

We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas

2014-04-01

280

An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method  

NASA Technical Reports Server (NTRS)

State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.

Frisbee, Joseph H., Jr.

2011-01-01

281

Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation  

Microsoft Academic Search

We construct a prediction rule on the basis of some data, and then wish to estimate the error rate of this rule in classifying future observations. Cross-validation provides a nearly unbiased estimate, using only the original data. Cross-validation turns out to be related closely to the bootstrap estimate of the error rate. This article has two purposes: to understand better
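Cross-validation's nearly unbiased error-rate estimate can be sketched with leave-one-out CV around a toy nearest-centroid rule (our own example, not Efron's setup):

```python
import numpy as np

def loo_error_rate(X, y, fit, predict):
    """Leave-one-out cross-validation: refit the rule without each point,
    then test the refitted rule on the held-out point."""
    n = len(y)
    errors = 0
    for i in range(n):
        mask = np.arange(n) != i
        model = fit(X[mask], y[mask])
        errors += int(predict(model, X[i:i + 1])[0] != y[i])
    return errors / n

def fit(X, y):
    """Toy nearest-centroid classifier: store one centroid per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(model, X):
    classes = sorted(model)
    dists = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(2.5, 1.0, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
rate = loo_error_rate(X, y, fit, predict)
```

The bootstrap alternative discussed in the article resamples the whole training set rather than deleting one point at a time, trading a little bias for lower variance.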

Bradley Efron

1983-01-01

282

Strategies for assessing diffusion anisotropy on the basis of magnetic resonance images: comparison of systematic errors.  

PubMed

Diffusion weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data which are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: On one hand, the presence of signal noise can lead to artificial divergence of the diffusivities. In contrast, the use of a simplified model for the interaction of the protons with the diffusion weighting and imaging field gradients (b matrix calculation), common in the clinical setting, also leads to deviation in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state of the art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging require combining a correct calculation of the b-matrix and a sufficiently large signal to noise ratio. PMID:24761372

Boujraf, Saïd

2014-04-01

283

Adaptive subset offset for systematic error reduction in incremental digital image correlation  

NASA Astrophysics Data System (ADS)

Digital image correlation (DIC) relies on a high correlation between the intensities in the reference image and the target image. When decorrelation occurs due to large deformation or viewpoint change, incremental DIC is utilized to update the reference image and use the correspondences in this renewed image as the reference points in subsequent DIC computation. As each updated reference point is derived from previous correlation, its location is generally of sub-pixel accuracy. A conventional subset, centered at the point, results in subset points at non-integer positions. Therefore, the acquisition of the intensities of the subset demands interpolation, which has been shown to introduce additional systematic error. We hereby present adaptive subset offset to slightly translate the subset so that all the subset points fall on integer positions. By this means, interpolation in the updated reference image is totally avoided regardless of the non-integer locations of the reference points. The translation is determined according to the decimal of the reference point location, and the maximum is half a pixel in each direction. Such a small translation has no negative effect on the compatibility of the widely used shape functions, correlation functions and the optimization algorithms. The results of the simulation and the real-world experiments show that adaptive subset offset produces lower measurement error than the conventional method in incremental DIC when applied in both 2D-DIC and 3D-DIC.
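The offset rule itself is simple to state in code: shift the subset so its centre lands on the nearest integer pixel, which bounds the shift at half a pixel per axis. This is a sketch of the rule as described in the abstract, not the authors' implementation.

```python
import numpy as np

def adaptive_offset(center):
    """Shift that moves a subset centre to the nearest integer pixel, so
    every subset point lands on an integer position and no interpolation
    of the updated reference image is needed."""
    c = np.asarray(center, dtype=float)
    return np.round(c) - c

off = adaptive_offset((10.3, 20.8))  # sub-pixel reference point
```

The shifted subset samples only integer pixel positions, so the interpolation-induced systematic error is avoided entirely for the updated reference image.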

Zhou, Yihao; Sun, Chen; Chen, Jubing

2014-04-01

284

Estimate of the radial orbit error by complex demodulation  

Microsoft Academic Search

Complex demodulation of the sea surface heights computed from the altimeter range measurements provides an original way to characterize and empirically remove the radial orbit error in the spectral domain. The complex demodulation method is described and applied to a simulated series of altimeter data and to Geosat altimeter data. The results are interpreted in terms of gravity field error

O. Francis; M. Bergé

1993-01-01

285

Estimating Equating Error in Observed-Score Equating. Research Report.  

ERIC Educational Resources Information Center

Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and in the population of examinees. This definition underlies, for example, the well-known approximation to the standard error of equating by Lord (1982).…

van der Linden, Wim J.

286

Field evaluation of distance-estimation error during wetland-dependent bird surveys  

USGS Publications Warehouse

Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. of error = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. of error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations of distances to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. 
Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.

Nadeau, Christopher P.; Conway, Courtney J.

2012-01-01

287

Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels  

SciTech Connect

Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. 
Most of the errors were correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning. Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.
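
To make the metric discussion concrete, a brute-force 1-D gamma analysis can be sketched as follows; it reproduces the paper's qualitative point that a small systematic shift can sail through 3%/3 mm global criteria while tighter local criteria flag it. This is an illustrative simplification (no low-dose threshold or search-grid refinement), not any clinical system's algorithm:

```python
import numpy as np

def gamma_passing_rate(x_mm, dose_ref, dose_eval, dta_mm=3.0, dose_frac=0.03, local=False):
    """Brute-force 1-D gamma analysis (Low et al. style).

    For each evaluated point, gamma is the minimum over reference points of
    sqrt((distance/DTA)^2 + (dose difference / criterion)^2); a point passes
    when gamma <= 1. `local=True` normalizes the dose criterion to the local
    reference dose instead of the global maximum.
    """
    norm = dose_ref if local else np.max(dose_ref)
    gammas = np.empty_like(dose_eval, dtype=float)
    for k, (xe, de) in enumerate(zip(x_mm, dose_eval)):
        dist = (x_mm - xe) / dta_mm
        ddiff = (de - dose_ref) / (dose_frac * norm)
        gammas[k] = np.sqrt(dist ** 2 + ddiff ** 2).min()
    return float(np.mean(gammas <= 1.0)), gammas

x = np.linspace(0.0, 60.0, 61)                       # positions in mm
ref = 100.0 * np.exp(-((x - 30.0) / 12.0) ** 2)      # reference dose profile
shifted = 100.0 * np.exp(-((x - 31.0) / 12.0) ** 2)  # 1 mm systematic shift
rate_33, _ = gamma_passing_rate(x, ref, shifted)                   # 3%/3 mm global
rate_22, _ = gamma_passing_rate(x, ref, shifted, 2.0, 0.02, True)  # 2%/2 mm local
```

The 3%/3 mm global analysis passes 100% of points on this shifted profile, while the tighter 2%/2 mm local analysis produces a lower passing rate on the same data.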

Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States)]; Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States)]; Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada)]; Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States)]; Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States)]; Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)]

2013-11-15

288

Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska  

NASA Astrophysics Data System (ADS)

Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.
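
The projection step itself is ordinary weighted least squares; a toy sketch with a random sensitivity matrix standing in for the real GRACE basin geometry (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
n_obs, n_basins = 60, 3
G = rng.normal(size=(n_obs, n_basins))         # basin -> observation sensitivity (toy)
m_true = np.array([5.0, -2.0, 1.0])            # "truth" basin mass changes
d = G @ m_true + rng.normal(0.0, 0.1, n_obs)   # noisy GRACE-like observations

W = np.eye(n_obs) / 0.1**2                     # inverse observation-error covariance
m_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)   # weighted LS basin estimate
```

In the real problem the quality of `m_hat` hinges on how the basins are laid out, which is exactly the design question the simulation study addresses.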

Bonin, J. A.; Chambers, D. P.

2012-12-01

289

A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces  

SciTech Connect

In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R^3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

Ju, Lili [University of South Carolina]; Tian, Li [University of South Carolina]; Wang, Desheng [Nanyang Technological University]

2009-01-01

290

Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors  

NASA Astrophysics Data System (ADS)

GNSS Radio Occultation (RO) data are very well suited for climate applications, since they do not require external calibration and only short-term measurement stability over the occultation event duration (1-2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without the need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05% and 0.05 K in almost any case (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already allows us to see statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, such as the influence of the quality control and the high-altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity to atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are no longer negligible. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). 
This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time-dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the nighttime bending angle bias stays constant over the whole period of 11 years, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from -0.05 μrad to -0.4 μrad. This behavior paves the way to correct the (small) solar cycle dependent bias of large ensembles of daytime RO profiles.
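
For context, the classic two-term microwave refractivity relation underlying these retrievals is compact enough to state directly (the constants are the widely used Smith–Weintraub values; treating such measured constants as exact is precisely the assumption the authors revisit):

```python
def refractivity(p_hpa, t_kelvin, e_hpa):
    """Microwave refractivity N = k1*p/T + k3*e/T**2 (classic two-term form).

    p_hpa: total pressure (hPa), t_kelvin: temperature (K),
    e_hpa: water-vapor partial pressure (hPa).
    """
    k1, k3 = 77.6, 3.73e5     # classic measured constants (K/hPa, K^2/hPa)
    return k1 * p_hpa / t_kelvin + k3 * e_hpa / t_kelvin ** 2
```

Relating k1 and k3 to fundamental constants and molecular polarizabilities, as the abstract describes, also makes the sensitivity to composition changes (CO2 up, O2 down) computable.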

Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.

2013-05-01

291

Speech enhancement using a minimum mean-square error log-spectral amplitude estimator  

Microsoft Academic Search

In this correspondence we derive a short-time spectral amplitude (STSA) estimator for speech signals which minimizes the mean-square error of the log-spectra (i.e., of the original STSA and its estimator) and examine it in enhancing noisy speech. This estimator is also compared with the corresponding minimum mean-square error STSA estimator derived previously. It was found that the new estimator is very
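
The resulting log-spectral amplitude gain has a closed form involving the exponential integral E1; a minimal sketch, with E1 evaluated by simple quadrature instead of a library routine:

```python
import math

def expint_e1(v, n=20000):
    """E1(v) = integral from v to infinity of exp(-t)/t dt, for v > 0.

    Uses the substitution t = v/u, giving the integral of exp(-v/u)/u
    over (0, 1], evaluated by the midpoint rule.
    """
    h = 1.0 / n
    return h * sum(math.exp(-v / (h * (i + 0.5))) / (h * (i + 0.5)) for i in range(n))

def logmmse_gain(xi, gamma):
    """Log-MMSE spectral gain: G = xi/(1+xi) * exp(E1(v)/2), v = xi*gamma/(1+xi).

    xi is the a priori SNR and gamma the a posteriori SNR of a spectral bin;
    the noisy spectral amplitude is multiplied by G to estimate the clean one.
    """
    v = xi * gamma / (1.0 + xi)
    return xi / (1.0 + xi) * math.exp(0.5 * expint_e1(v))
```

At high SNR the gain approaches unity, while at low SNR it attenuates more strongly than the plain MMSE-STSA gain.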

Y. Ephraim; D. Malah

1985-01-01

292

Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.  

NASA Technical Reports Server (NTRS)

This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

2002-01-01

293

Output error estimation for summation-by-parts finite-difference schemes  

NASA Astrophysics Data System (ADS)

The paper develops a posteriori error estimates of integral output functionals for summation-by-parts finite-difference methods. The error estimates are based on the adjoint-weighted residual method and take advantage of a variational interpretation of summation-by-parts discretizations. The estimates are computed on a fixed grid and do not require an embedded grid or explicit interpolation operators. For smooth boundary-value problems containing first and second derivatives the error estimates converge to the exact error as the mesh is refined. The theory is verified using linear boundary-value problems and the Euler equations.

Hicken, J. E.

2012-05-01

294

Explicit residual-based a posteriori error estimation for finite element discretizations of the Helmholtz equation: Computation of the constant and new measures of error estimator quality  

Microsoft Academic Search

This paper continues the study of an explicit residual-based a posteriori error estimator developed for finite element discretizations of the Helmholtz equation, in the context of time-harmonic exterior acoustics problems. In previous papers [1–3] the error estimator was derived and subsequently applied to h-adaptive computations of model problems in two dimensions. In the present paper, a methodology is established for

James R. Stewart; Thomas J. R. Hughes

1996-01-01

295

Gas hydrate estimation error associated with uncertainties of measurements and parameters  

USGS Publications Warehouse

Downhole log measurements such as acoustic or electrical resistivity logs are often used to estimate in situ gas hydrate concentrations in sediment pore space. Estimation errors owing to uncertainties associated with downhole measurements and the parameters for estimation equations (weight in the acoustic method and Archie's parameters in the resistivity method) are analyzed in order to assess the accuracy of estimation of gas hydrate concentration. Accurate downhole measurements are essential for accurate estimation of the gas hydrate concentrations in sediments, particularly at low gas hydrate concentrations and when using acoustic data. Estimation errors owing to measurement errors, except the slowness error, decrease as the gas hydrate concentration increases and as porosity increases. Estimation errors owing to uncertainty in the input parameters are small in the acoustic method and may be significant in the resistivity method at low gas hydrate concentrations.
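
As an illustration of the resistivity method, Archie's relation yields hydrate saturation directly from the log reading; the parameter values below are placeholders, and their uncertainty is exactly what drives the estimation error analyzed in the paper:

```python
def hydrate_saturation(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Archie-based gas hydrate saturation: S_h = 1 - S_w.

    rt: measured formation resistivity, rw: pore-water resistivity,
    phi: porosity; a, m, n are Archie parameters (illustrative defaults).
    """
    sw = (a * rw / (phi ** m * rt)) ** (1.0 / n)   # water saturation from Archie
    return 1.0 - sw

base = hydrate_saturation(rt=3.0, rw=0.25, phi=0.5)             # ~0.42
perturbed = hydrate_saturation(rt=3.0, rw=0.25, phi=0.5, n=2.2)
sensitivity = perturbed - base    # effect of a 10% error in Archie's n
```

Perturbing a, m, or n this way gives a quick feel for how parameter uncertainty propagates into the saturation estimate, especially at low concentrations.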

Lee, Myung W.; Collett, Timothy S.

2001-01-01

296

Hierarchical Finite Element Approaches Error Estimates and Adaptive Refinement.  

National Technical Information Service (NTIS)

This paper is concerned with the identification of the discretization error in finite element solution and the definition of optimal refinement processes. The advantages and limitations of the hierarchical approach will be discussed and it will be shown h...

O. C. Zienkiewicz; D. W. Kelly; J. Gago; I. Babuska

1981-01-01

297

Aerial measurement error with a dot planimeter: Some experimental estimates  

NASA Technical Reports Server (NTRS)

A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured provides the entire correlation with accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
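
The simulation idea is easy to reproduce: overlay a dot grid with a random offset on a known shape and count the dots inside. A sketch using a unit circle (area π) as the target:

```python
import math
import random

def dot_planimeter_area(n_grid, radius=1.0, trials=200, seed=1):
    """Estimate a circle's area by counting dots of a square grid inside it.

    Grid spacing s gives each dot a weight of s**2; a random grid offset per
    trial simulates repeated placements. Returns (mean estimate, spread).
    """
    rng = random.Random(seed)
    s = 2.0 * radius / n_grid      # spacing: ~n_grid**2 dots cover the bounding square
    estimates = []
    for _ in range(trials):
        ox, oy = rng.uniform(0.0, s), rng.uniform(0.0, s)
        count = 0
        x = -radius + ox
        while x < radius:
            y = -radius + oy
            while y < radius:
                if x * x + y * y <= radius * radius:
                    count += 1
                y += s
            x += s
        estimates.append(count * s * s)
    mean = sum(estimates) / trials
    sd = (sum((e - mean) ** 2 for e in estimates) / (trials - 1)) ** 0.5
    return mean, sd
```

The spread of repeated estimates shrinks as the number of dots grows, while the shape itself plays almost no role, matching the paper's finding.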

Yuill, R. S.

1971-01-01

298

A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem  

Microsoft Academic Search

Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained

Aurore Delaigle; Jianqing Fan; Raymond J. Carroll

2009-01-01

299

Integration of the variogram using spline functions for sampling error estimation  

Microsoft Academic Search

The component of the sampling error caused by taking discrete samples from a continuous process is the integration error, IE. This error can be estimated using P.M. Gy's variographic technique. This method involves the integration of the variogram. The variogram can be calculated from a time series of discrete samples. If the variogram is simple, it can be modelled and
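
The experimental variogram at the start of the procedure is computed directly from a series of discrete samples; a minimal sketch of Gy's V(j) (modelling the variogram and integrating it to obtain IE are further steps not shown here):

```python
def experimental_variogram(series, max_lag):
    """Experimental variogram of a discrete sample series.

    V(j) = (1 / (2*(N - j))) * sum_i (x[i+j] - x[i])**2 for lags j = 1..max_lag.
    """
    n = len(series)
    return [
        sum((series[i + j] - series[i]) ** 2 for i in range(n - j)) / (2 * (n - j))
        for j in range(1, max_lag + 1)
    ]
```

A flat variogram indicates an uncorrelated process; structure at small lags is what makes the integration error IE depend on the sampling interval.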

Riitta Heikka; Pentti Minkkinen

1998-01-01

300

Position and speed sensorless control for PMSM drive using direct position error estimation  

Microsoft Academic Search

A new position and speed sensorless control approach is proposed for permanent magnet synchronous motor (PMSM) drives. The controller directly computes an error for the estimated rotor position and adjusts the speed according to this error. The derivation of the position error equation and an idea for eliminating the differential terms are presented. The proposed approach is applied to a

Kiyoshi Sakamoto; Yoshitaka Iwaji; Tsunehiro Endo; T. Takakura

2001-01-01

301

Relative efficiency of three estimators in a polynomial regression with measurement errors  

Microsoft Academic Search

In a polynomial regression with measurement errors in the covariate, the latter being assumed to be normally distributed, one has (at least) three ways to estimate the unknown regression parameters: one can apply ordinary least squares (OLS) to the model without regard to the measurement error, or one can correct for the measurement error, either by correcting the

Alexander Kukush; Hans Schneeweiss; Roland Wolf

302

Spline Estimators of the Density Function of a Variable Measured with Error  

Microsoft Academic Search

The estimation of the distribution function of a random variable X measured with error is studied. It is assumed that the measurement error has a normal distribution with known parameters. Let the i-th observation on X be denoted by Yi = Xi + εi, where εi is the measurement error. Let {Yi} (i = 1, 2, …, n) be a sample of independent observations. It
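
The setup Yi = Xi + εi with known normal error is easy to simulate; even a crude moment correction shows how ignoring the error inflates the apparent spread of X. This sketch illustrates the problem, not the spline estimator itself:

```python
import random
import statistics

random.seed(0)
sigma = 0.5                                  # known measurement-error s.d.
x = [random.gauss(0.0, 1.0) for _ in range(20000)]    # true values, Var(X) = 1
y = [xi + random.gauss(0.0, sigma) for xi in x]       # observed Y_i = X_i + e_i

var_y = statistics.variance(y)               # inflated: about Var(X) + sigma**2
var_x_hat = var_y - sigma ** 2               # moment correction recovers Var(X)
```

Density and distribution-function estimation face the same inflation, which is why deconvolution-type estimators such as the splines studied here are needed.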

Cong Chen; Wayne A. Fuller; F. Jay Breidt

2003-01-01

303

An estimate of asthma prevalence in Africa: a systematic analysis  

PubMed Central

Aim: To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to provide information that will help inform the planning of the public health response to the disease. Methods: We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population-based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year. Results: Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers. Conclusion: Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be underestimated. 
There is a need for national governments in Africa to consider the implications of this increasing disease burden and to investigate the relative importance of underlying risk factors such as rising urbanization and population aging in their policy and health planning responses to this challenge.
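
The weighted-mean-prevalence step reduces to pooling cases over pooled sample sizes; a toy sketch (the study numbers below are invented, not from the 45 included studies):

```python
def weighted_mean_prevalence(studies):
    """Sample-size-weighted mean prevalence across studies.

    studies: list of (cases, sample_size) tuples (hypothetical values here).
    """
    total_cases = sum(c for c, n in studies)
    total_sampled = sum(n for c, n in studies)
    return total_cases / total_sampled

studies = [(120, 1000), (90, 600), (300, 2400)]
prevalence = weighted_mean_prevalence(studies)     # 510 / 4000 = 0.1275
estimated_cases = prevalence * 50_000_000          # applied to a UN population figure
```

Applying such an age-linked prevalence to the UN population figures for each year yields the case counts reported in the abstract.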

Adeloye, Davies; Chan, Kit Yee; Rudan, Igor; Campbell, Harry

2013-01-01

304

Offline parameter estimation using EnKF and maximum-likelihood error covariance estimates  

NASA Astrophysics Data System (ADS)

Parameterizations of physical processes represent an important source of uncertainty in climate models. These processes are governed by physical parameters, most of which are poorly known and generally tuned manually. This subjective approach is excessively time-consuming and gives suboptimal results owing to the flow dependency of the parameters and potential correlations among them. Moreover, in case of changes in horizontal resolution or parameterization scheme, the physical parameters need to be completely re-evaluated. To overcome these limitations, recent works proposed to estimate the physical parameters objectively using filtering and inverse techniques. In this presentation, we investigate this direction and propose a novel offline parameter estimation approach. More precisely, we build a nonlinear state-space model solved in an EnKF (ensemble Kalman filter) framework where (i) the state of the system corresponds to the unknown physical parameters, (ii) the state evolution is driven as a Gaussian random walk, (iii) the observation operator is the physical process and (iv) observations are perturbed realizations of this physical process with a given set of physical parameters. Then, we use an iterative maximum-likelihood estimation of the error covariance matrices and the first guess or background state of the EnKF. Among the error covariance matrices, we estimate those of the state equation (Q) and the observation equation (R), respectively, to take into account correlations between physical parameters and the flow dependency of the parameters. Properly estimating these covariances, instead of prescribing them arbitrarily or relying on inflation factors, ensures convergence to the optimal physical parameters. The proposed technique is implemented and used to estimate parameters from the subgrid-scale orography scheme implemented in the ECMWF (European Centre for Medium-Range Weather Forecasts) and LMDZ (Laboratoire de Météorologie Dynamique Zoom) models. 
Using a twin experiment, we demonstrate that our parameter estimation technique is relevant and outperforms the classical EnKF implementation. Moreover, the technique is flexible and could be used for online physical parameter estimation.
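
The analysis step of such an offline EnKF parameter estimation can be sketched as follows, with the "state" being the parameter vector and the observation operator standing in for the physical process (a toy linear process; names and shapes are illustrative):

```python
import numpy as np

def enkf_parameter_update(params_ens, obs, h, r_var, rng):
    """Stochastic EnKF analysis step for offline parameter estimation (sketch).

    params_ens: (n_ens, n_par) ensemble of physical parameters (the 'state')
    obs: observed output of the physical process (scalar here)
    h: observation operator mapping a parameter vector to a predicted observation
    r_var: observation-error variance (R)
    """
    y_ens = np.array([h(p) for p in params_ens])          # predicted observations
    p_anom = params_ens - params_ens.mean(axis=0)
    y_anom = y_ens - y_ens.mean()
    n = len(params_ens)
    cov_py = p_anom.T @ y_anom / (n - 1)                  # parameter-obs cross-covariance
    var_y = y_anom @ y_anom / (n - 1)
    gain = cov_py / (var_y + r_var)                       # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(r_var), n)  # perturbed observations
    return params_ens + np.outer(perturbed - y_ens, gain)

# toy twin experiment: observation y = 2*p generated with true p = 3
rng = np.random.default_rng(0)
prior = rng.normal(0.0, 2.0, size=(200, 1))
posterior = enkf_parameter_update(prior, 6.0, lambda p: 2.0 * p[0], 0.01, rng)
# posterior.mean() should be close to the true parameter value 3.0
```

In the paper's scheme, Q and R are not fixed as here but estimated iteratively by maximum likelihood, which is what captures parameter correlations and flow dependency.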

Tandeo, Pierre; Pulido, Manuel

2013-04-01

305

Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean  

NASA Technical Reports Server (NTRS)

One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. 
While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when multivariate correction is used, as evident from the analyses of the rms differences of these fields and independent observations. The MvOI assimilation is found to improve upon the control run in generating the water masses with properties close to the observed, while the UOI failed to maintain the temperature and salinity structure.

Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

2004-01-01

306

The Asymptotic Standard Errors of Some Estimates of Uncertainty in the Two-Way Contingency Table  

ERIC Educational Resources Information Center

Estimates of conditional uncertainty, contingent uncertainty, and normed modifications of contingent uncertainty have been proposed for the two-way contingency table. The asymptotic standard errors of the estimates are derived. (Author)

Brown, Morton B.

1975-01-01

307

Systematic residual ionospheric errors in radio occultation data and a potential way to minimize them  

NASA Astrophysics Data System (ADS)

Radio Occultation (RO) sensing is used to probe the Earth's atmosphere in order to obtain information about its physical properties. With a main interest in the parameters of the neutral atmosphere, there is the need to perform a correction of the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25 km to 30 km) it is even an important error source. In climate applications this could lead to a time-dependent bias which induces wrong trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. In order to tackle this problem we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001 to 2011. We could observe that the nighttime bending angle bias stays constant over the whole period of 11 years, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from about -0.05 μrad to -0.4 μrad. This behavior paves the way to correct the solar cycle dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. Also in the simulated data we observed a similar increase in the bias from times of low to high solar activity. 
In this model world we performed the climatological ionospheric correction of the bending angle data by using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.
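
In essence, the climatological correction removes the solar-activity-dependent day-night bias difference from daytime profiles; a toy sketch using the bias values quoted above:

```python
import numpy as np

night_bias = -0.05                           # microrad; ~constant over the cycle
solar_activity = np.linspace(0.0, 1.0, 11)   # normalized solar-activity index (toy)
day_bias = night_bias - 0.35 * solar_activity    # grows to about -0.4 at maximum

# climatological correction factor: the day-night bias difference
correction = day_bias - night_bias
corrected_day_bias = day_bias - correction   # daytime bias restored to night level
```

In practice the correction factor is a climatology built from an observed solar cycle of bending angle biases, applied to large ensembles of daytime profiles rather than individual ones.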

Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.

2013-02-01

308

Systematic residual ionospheric errors in radio occultation data and a potential way to minimize them  

NASA Astrophysics Data System (ADS)

Radio occultation (RO) sensing is used to probe the Earth's atmosphere in order to obtain information about its physical properties. With a main interest in the parameters of the neutral atmosphere, there is a need to perform a correction of the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25-30 km) it is even an important error source. In climate applications this could lead to a time-dependent bias which induces wrong trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. In order to tackle this problem, we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001 to 2011. We observed that the nighttime bending angle bias stays constant over the whole period of 11 yr, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from about -0.05 μrad to -0.4 μrad. This behavior paves the way to correcting the solar-cycle-dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. In the simulated data we also observed a similar increase in the bias from low to high solar activity.
In this simulation we performed the climatological ionospheric correction of the bending angle data by using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction, the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.

Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.

2013-08-01
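The climatological correction described in the records above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name, the normalization of solar activity to [0, 1], and the linear interpolation between the quoted -0.05 and -0.4 μrad day/night bias differences are all hypothetical choices made here for clarity.

```python
import numpy as np

def corrected_daytime_bending_angle(alpha_day, solar_activity,
                                    bias_low=-0.05e-6, bias_high=-0.4e-6):
    """Remove a solar-cycle-dependent residual bias from daytime bending
    angles (in radians). `solar_activity` is normalized to [0, 1]
    (0 = solar minimum, 1 = solar maximum). The default endpoints are the
    day-minus-night bias differences quoted in the abstract, converted to
    radians; the linear interpolation over the cycle is an illustrative
    assumption, not the paper's fitted climatology."""
    # Interpolate the residual bias for the given level of solar activity
    bias = bias_low + (bias_high - bias_low) * solar_activity
    # Subtract the (negative) bias from the observed daytime bending angle
    return np.asarray(alpha_day) - bias
```

In practice the correction factor would come from the full CHAMP/COSMIC bending angle bias climatology rather than a two-point interpolation.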

309

Systematic Residual Ionospheric Errors in Radio Occultation Data and a Potential Way to Minimize them  

NASA Astrophysics Data System (ADS)

Radio Occultation (RO) sensing is used to probe the Earth's atmosphere in order to obtain information about its physical properties. With a main interest in the parameters of the neutral atmosphere, there is a need to perform a correction of the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25 km to 30 km) it is even an important error source. In climate applications this could lead to a time-dependent bias which induces wrong trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. In order to tackle this problem we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001 to 2011. We observed that the nighttime bending angle bias stays constant over the whole period of 11 years, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from about -0.05 μrad to -0.4 μrad. This behavior paves the way to correcting the solar-cycle-dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. In the simulated data we also observed a similar increase in the bias from low to high solar activity.
In this model world we performed the climatological ionospheric correction of the bending angle data by using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction, the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.

Danzer, Julia; Scherllin-Pirscher, Barbara; Foelsche, Ulrich

2013-04-01

310

Interventions to reduce wrong blood in tube errors in transfusion: a systematic review.  

PubMed

This systematic review addresses the issue of wrong blood in tube (WBIT). The objective was to identify interventions that have been implemented and the effectiveness of these interventions to reduce WBIT incidence in red blood cell transfusion. Eligible articles were identified through a comprehensive search of The Cochrane Library, MEDLINE, EMBASE, Cinahl, BNID, and the Transfusion Evidence Library to April 2013. Initial search criteria were wide, including primary intervention or observational studies, case reports, expert opinion, and guidelines. There was no restriction by study type, language, or status. Publications before 1995, reviews or reports of a secondary nature, studies of sampling errors outside transfusion, and articles involving animals were excluded. The primary outcome was a reduction in errors. Study characteristics, outcomes measured, and methodological quality were extracted by 2 authors independently. The principal method of analysis was descriptive. A total of 12,703 references were initially identified. Preliminary secondary screening by 2 reviewers reduced the number of articles for detailed screening to 128. Eleven articles were eventually identified as eligible, resulting in 9 independent studies being included in the review. The overall finding was that all the identified interventions reduced WBIT incidence. Five studies measured the effect of a single intervention, for example, changes to blood sample labeling, weekly feedback, handwritten transfusion requests, and an electronic transfusion system. Four studies reported multiple interventions including education, second check of ID at sampling, and confirmatory sampling. It was not clear which intervention was the most effective. Sustainability of the effectiveness of interventions was also unclear. Targeted interventions, either single or multiple, can lead to a reduction in WBIT, but the sustainability of effectiveness is uncertain. 
Data on the pre- and postimplementation of interventions need to be collected in future trials to demonstrate effectiveness, and comparative studies are needed of different interventions. PMID:24075096

Cottrell, Susan; Watson, Douglas; Eyre, Toby A; Brunskill, Susan J; Dorée, Carolyn; Murphy, Michael F

2013-10-01

311

Human Errors in Medical Practice: Systematic Classification and Reduction with Automated Information Systems  

Microsoft Academic Search

We review the general nature of human error(s) in complex systems and then focus on issues raised by the Institute of Medicine report in 1999. From this background we classify and categorize error(s) in medical practice, including medication, procedures, diagnosis, and clerical error(s). We also review the potential role of software and technology applications in reducing the rate and nature of

D. Kopec; M. H. Kabir; D. Reinharth; O. Rothschild; J. A. Castiglione

2003-01-01

312

New error bounds for M-testing and estimation of source location with subdiffractive error.  

PubMed

I present new lower and upper bounds on the minimum probability of error (MPE) in Bayesian multihypothesis testing that follow from an exact integral of a version of the statistical entropy of the posterior distribution, or equivocation. I also show that these bounds are exponentially tight and thus achievable in the asymptotic limit of many conditionally independent and identically distributed measurements. I then relate the minimum mean-squared error (MMSE) and the MPE by means of certain elementary error probability integrals. In the second half of the paper, I compare the MPE and MMSE for the problem of locating a single point source with subdiffractive uncertainty. The source-strength threshold needed to achieve a desired degree of source localization seems to be far more modest than the well established threshold for the different optical super-resolution problem of disambiguating two point sources with subdiffractive separation. PMID:22472767

Prasad, Sudhakar

2012-03-01

313

Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors  

NASA Astrophysics Data System (ADS)

Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate basic climatological features of these variables reasonably, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the west part of Africa than for the east part, and for the tropics than for northern Sahara. Interannual variation in the wet season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperatures. For all variables, multi-model ensemble (ENS) generally outperforms individual models included in ENS. An overarching conclusion in this study is that some model biases vary systematically for regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analysis and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.

Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice

2014-03-01

314

The Effect of Retrospective Sampling on Estimates of Prediction Error for Multifactor Dimensionality Reduction  

PubMed Central

The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Here, we evaluate the bias and variance of the MDR error estimate as both a retrospective and prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates.

Winham, Stacey J.; Motsinger-Reif, Alison A.

2010-01-01
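The prevalence-weighted bootstrap idea can be sketched in simplified form. This is an illustrative reduction, not the authors' estimator (which is defined within MDR's cross-validation framework): it resamples cases and controls separately and weights the class-conditional bootstrap error rates by the population prevalence rather than by the retrospective case-control sampling fraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def prospective_error(y_true, y_pred, prevalence, n_boot=500):
    """Bootstrap estimate of prospective misclassification error for a
    fitted rule, re-weighted by population prevalence. A hypothetical
    simplification of the approach described in the record: cases and
    controls are resampled separately, and the false-negative and
    false-positive rates are combined with prevalence weights."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    case_idx = np.flatnonzero(y_true == 1)
    ctrl_idx = np.flatnonzero(y_true == 0)
    errs = []
    for _ in range(n_boot):
        i = rng.choice(case_idx, case_idx.size, replace=True)
        j = rng.choice(ctrl_idx, ctrl_idx.size, replace=True)
        fnr = np.mean(y_pred[i] != 1)   # error among cases
        fpr = np.mean(y_pred[j] != 0)   # error among controls
        # Weight by population prevalence, not the sampling fraction
        errs.append(prevalence * fnr + (1.0 - prevalence) * fpr)
    return float(np.mean(errs))
```

The key design point mirrored here is that a retrospective (case-control) error rate is converted into a prospective one by re-weighting with the disease prevalence in the target population.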

315

Performance of diversity schemes for OFDM systems with frequency offset, phase noise, and channel estimation errors  

Microsoft Academic Search

We provide expressions for the bit error rate of various transmit and receive diversity schemes for orthogonal frequency division multiplexing (OFDM) systems in the presence of frequency offset, phase noise, and channel estimation errors. The derivations are also applicable for a general multiplicative distortion of the received signal. Our results show that with perfect channel estimates, practical values of the

Ravi Narasimhan

2002-01-01

316

An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows  

Microsoft Academic Search

In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator eθ, which is based on the spatial derivative of velocity direction fields, is designed and constructed. The a-posteriori

Heng Wu

2000-01-01

317

An asymptotically exact finite element error estimator based on C 1 stress recovery  

Microsoft Academic Search

A posteriori finite element error estimation for planar linear elasticity problems is developed using a recovery operator based on a C1 stress smoothing technique developed by Tessler, Riggs and Macy. The error estimator that is developed is proved to be asymptotically exact under reasonable regularity assumptions on the mesh and the solution. Numerical results for a typical plane stress problem

Shirley B. Pomeranz; Adrian W. Kirk; William D. Baker

1998-01-01

318

A posteriori error estimates of mixed DG finite element methods for linear parabolic equations  

Microsoft Academic Search

In this article, we analyse a posteriori error estimates of mixed finite element discretizations for linear parabolic equations. The space discretization is done using the order k ≥ 1 Raviart–Thomas mixed finite elements, whereas the time discretization is based on discontinuous Galerkin (DG) methods (r ≥ 1). Using the duality argument, we derive a posteriori l∞(L2) error estimates for the scalar function,

Tianliang Hou

2012-01-01

319

A Posteriori Error Estimate for Finite Volume Approximations of Convection Diffusion Problems  

Microsoft Academic Search

We deduce an a posteriori error estimate for cell centered finite volume approximations of nonlinear degenerate parabolic equations. The error estimator is robust with respect to the diffusion coefficient (which may tend to 0) and is applicable either in the case of diffusion dominated or in the case of convection dominated solutions.

Raphaèle Herbin; Mario Ohlberger

2002-01-01

320

A posteriori error estimates for mixed finite element solutions of convex optimal control problems  

Microsoft Academic Search

In this paper, we present an a posteriori error analysis for mixed finite element approximation of convex optimal control problems. We derive a posteriori error estimates for the coupled state and control approximations under some assumptions which hold in many applications. Such estimates can be used to construct reliable adaptive mixed finite elements for the control problems.

Yanping Chen; Wenbin Liu

2008-01-01

321

Goal-oriented A Posteriori Error Estimation for Finite Volume Methods  

Microsoft Academic Search

A general framework for goal-oriented a posteriori error estimation for finite volume methods is presented. The framework does not rely on recasting finite volume methods as special cases of finite element methods, but instead directly determines error estimators from the discretized finite volume equations. Thus, the framework can be applied to arbitrary finite volume methods. It also provides the

Qingshan Chen; Max Gunzburger

2010-01-01

322

Application of a posteriori error estimation to finite element simulation of incompressible Navier–Stokes flow  

Microsoft Academic Search

The main goal of this paper is to study adaptive mesh techniques, using a posteriori error estimates, for the finite element solution of the Navier–Stokes equations modeling steady and unsteady flows of an incompressible viscous fluid. Among existing operator splitting techniques, the θ-scheme is used for time integration of the Navier–Stokes equations. Then, a posteriori error estimates, based on the

Jun Cao

2005-01-01

323

Residual and hierarchical a posteriori error estimates for nonconforming mixed finite element methods  

Microsoft Academic Search

We analyze residual and hierarchical a posteriori error estimates for nonconforming finite element approximations of elliptic problems with variable coefficients. We consider a finite volume box scheme equivalent to a nonconforming mixed finite element method in a Petrov-Galerkin setting. We prove that all the estimators yield global upper and local lower bounds for the discretization error. Finally, we present results

Linda El Alaoui; Alexandre Ern

2004-01-01

324

A new posteriori error estimation concept for three-dimensional finite element solution  

Microsoft Academic Search

The three-dimensional finite element method (FEM) has been widely applied for the practical analysis of electromagnetic fields. However, numerical error estimation methods which focus on the governing equations of electromagnetic fields, not FEM formulations, have been reported in few papers. This paper proposes a new posteriori error estimation concept based on Ampere's circuital law which is one of the basic

Koichi Koibuchi; Koichiro Sawa

1999-01-01

325

The effect of transfer function estimation errors on the filtered-x LMS algorithm  

Microsoft Academic Search

The filtered-x LMS algorithm, which is commonly implemented in active noise and vibration control systems, requires an estimate of the cancellation path transfer function to maintain algorithm stability. This correspondence considers the effect of errors in this estimate on the stability of the algorithm implemented in the time domain, concluding that while a maximum phase error of ±90° is a

S. D. Snyder; C. H. Hansen

1994-01-01
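The filtered-x LMS loop whose sensitivity to cancellation-path estimation error this correspondence analyzes can be sketched in a few lines. This is a minimal single-channel illustration with hypothetical signal names, not the correspondence's analysis: `sec_path` is the true cancellation-path impulse response and `sec_path_hat` is the estimate used to filter the reference, so a mismatch between the two can be injected directly.

```python
import numpy as np

def fxlms(x, d, sec_path, sec_path_hat, n_taps=8, mu=0.01):
    """Minimal single-channel filtered-x LMS sketch. x: reference signal,
    d: disturbance at the error sensor, sec_path: true cancellation-path
    impulse response, sec_path_hat: its estimate. Returns the residual
    error signal. A phase error in sec_path_hat beyond roughly 90 degrees
    is the destabilizing condition the correspondence discusses."""
    w = np.zeros(n_taps)                         # adaptive control filter
    xf = np.convolve(x, sec_path_hat)[:len(x)]   # filtered reference
    y_hist = np.zeros(len(sec_path))             # recent control outputs
    e = np.zeros(len(x))
    for n in range(len(x)):
        # Control output from the most recent reference samples
        y = w @ x[n - n_taps:n][::-1] if n >= n_taps else 0.0
        y_hist = np.roll(y_hist, 1)
        y_hist[0] = y
        # Residual: disturbance minus control signal through the true path
        e[n] = d[n] - sec_path @ y_hist
        if n >= n_taps:
            # LMS update driven by the *filtered* reference
            w = w + mu * e[n] * xf[n - n_taps:n][::-1]
    return e
```

With a perfect path estimate the residual decays; replacing `sec_path_hat` with a strongly mismatched response is a direct way to reproduce the instability the correspondence studies.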

326

An improved variational method for finite element stress recovery and a posteriori error estimation  

Microsoft Academic Search

A new variational formulation is presented which serves as a foundation for an improved finite element stress recovery and a posteriori error estimation. In the case of stress predictions, interelement discontinuous stress fields from finite element solutions are transformed into a C1-continuous stress field with C0-continuous stress gradients. These enhanced results are ideally suited for error estimation since the stress

Alexander Tessler; H. Ronald Riggs; Colin E. Freese; Geoffrey M. Cook

1998-01-01

327

Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers  

ERIC Educational Resources Information Center

Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

Kim, ChangHwan; Tamborini, Christopher R.

2012-01-01

328

On the effect of estimating the error density in nonparametric deconvolution  

Microsoft Academic Search

It is quite common in the statistical literature on nonparametric deconvolution to assume that the error density is perfectly known. Since this seems to be unrealistic in many practical applications, we study the effect of estimating the unknown error density. We derive minimax rates of convergence and propose a modification of the usual kernel-based estimation scheme, which takes the uncertainty

Michael H. Neumann; O. Hössjer

1997-01-01

329

Wind Speed Forecasting Based on Support Vector Machine with Forecasting Error Estimation  

Microsoft Academic Search

An approach of a mean hourly wind speed forecasting in wind farm is proposed in this paper. It applies support vector regression as well as forecasting error estimation. Firstly, support vector regression is applied to the mean hourly wind speed forecasting. Secondly, a support vector classifier is trained to estimate the forecasting error. Finally, the forecasting results can tailor themselves

Guo-Rui Ji; Pu Han; Yong-Jie Zhai

2007-01-01

330

Hierarchical A Posteriori Error Estimators For Mortar Finite Element Methods With Lagrange Multipliers  

Microsoft Academic Search

Hierarchical a posteriori error estimators are introduced and analyzed for mortar finite element methods. A weak continuity condition at the interfaces is enforced by means of Lagrange multipliers. The two proposed error estimators are based on a defect correction in higher order finite element spaces and an adequate hierarchical two-level splitting. The first provides upper and lower bounds for the discrete energy

Barbara I. Wohlmuth

1997-01-01

331

Cross-Validation, the Jackknife, and the Bootstrap: Excess Error Estimation in Forward Logistic Regression  

Microsoft Academic Search

Given a prediction rule based on a set of patients, what is the probability of incorrectly predicting the outcome of a new patient? Call this probability the true error. An optimistic estimate is the apparent error, or the proportion of incorrect predictions on the original set of patients, and it is the goal of this article to study estimates of

Gail Gong

1986-01-01
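The bootstrap version of excess-error estimation studied in this article can be sketched generically. This is an illustrative sketch of the bootstrap optimism idea, not the article's logistic-regression study; the function names and the toy majority-vote classifier in the test are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_excess_error(X, y, fit, predict, n_boot=200):
    """Bootstrap estimate of excess error (optimism): the average, over
    bootstrap samples, of the bootstrap-fitted rule's error on the
    original data minus its apparent error on the bootstrap sample.
    The corrected error estimate is then: apparent error + excess error."""
    n = len(y)
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # bootstrap sample indices
        model = fit(X[idx], y[idx])            # refit the rule
        err_full = np.mean(predict(model, X) != y)            # on originals
        err_boot = np.mean(predict(model, X[idx]) != y[idx])  # apparent
        optimism.append(err_full - err_boot)
    return float(np.mean(optimism))
```

The apparent error is optimistic because the same patients are used to build and evaluate the rule; the quantity returned here estimates how much to inflate it.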

332

Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife  

ERIC Educational Resources Information Center

The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

Jennrich, Robert I.

2008-01-01

333

Error estimation in fitting X-ray spectrum by nonlinear least-squares method  

Microsoft Academic Search

The method to estimate relative errors without calculating the error matrix has been tested for the case of the parameters of the shape function fitted to a K X-ray spectrum recorded with a Si(Li) detector. Using the parabolic dependence of the chi-square values on the parameters, it is shown that the error estimation can be easily made for the complicated

Y. Watanabe; T. Kubozoe; T. Mukoyama

1989-01-01
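The parabolic chi-square dependence exploited in this record can be illustrated with the standard Delta-chi-square = 1 rule. This is a generic sketch, not the paper's code; fitting the parabola with `np.polyfit` is an illustrative choice.

```python
import numpy as np

def parabolic_error(param_grid, chi2_values):
    """Estimate a fitted parameter's 1-sigma error from the parabolic
    dependence of chi-square near its minimum. Fit
    chi2 = a*p**2 + b*p + c, locate the minimum p_min, and return
    sigma = 1/sqrt(a), i.e. the offset at which chi2 rises by 1 above
    its minimum (the standard Delta-chi2 = 1 rule)."""
    a, b, c = np.polyfit(param_grid, chi2_values, 2)
    p_min = -b / (2.0 * a)          # location of the chi-square minimum
    sigma = 1.0 / np.sqrt(a)        # chi2(p_min +/- sigma) = chi2_min + 1
    return p_min, sigma
```

This avoids computing the full error matrix: only chi-square evaluations along one parameter direction are needed, which is the convenience the record describes.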

334

A posteriori finite element error estimators for parametrized nonlinear boundary value problems  

Microsoft Academic Search

A posteriori error estimators for finite element solutions of multi-parameter nonlinear partial differential equations are based on an element-by-element solution of local linearizations of the nonlinear equation. In general, the associated bilinear form of the linearized problems satisfies a Gårding-type inequality. Under appropriate assumptions it is shown that the error estimators are bounded by constant multiples of the true error

Werner C. Rheinboldt

1996-01-01

335

p-Version FEM for structural acoustics with a posteriori error estimation  

Microsoft Academic Search

We demonstrate the advantages of using p-version finite element approximations for structural-acoustics problems in the mid-to-high frequency regime. We then present a sub-domain-based a posteriori error estimation procedure to quantify the errors in the setting of a 3D interior-acoustics problem with resonances, and give numerical results. Effectivity indices show robust behavior of the error estimator away from the resonant frequencies.

S. Dey; D. K. Datta; J. J. Shirron; M. S. Shephard

2006-01-01

336

Assessment of random and systematic errors in millimeter-wave dielectric measurement using open resonator and Fourier transform spectroscopy systems  

Microsoft Academic Search

Assessment of random and systematic errors is performed for the first time on the real part of permittivity (ε′) and loss tangent (tan δ) of ceramics and polymers using two different measurement systems. Data measured from the full cavity length and the frequency variation techniques using the 60 GHz open resonator system and the millimeter-wave dispersive Fourier transform spectroscopy system (DFTS)

Mohammed N. Afsar; Anusha Moonshiram; Yong Wang

2004-01-01

337

UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS  

EPA Science Inventory

Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

338

Systematic set-up errors for IMRT in the head and neck region: effect on dose distribution  

Microsoft Academic Search

Background and purpose: There is a general concern about intensity modulated radiation therapy (IMRT) treatments being more sensitive to patient positioning than conventional treatments. The aim of this study was to evaluate the International Commission on Radiation Units and Measurements (ICRU) method for taking systematic set-up errors into account for IMRT treatments and to compare the effects on the dose

Anna Samuelsson; Claes Mercke; Karl-Axel Johansson

2003-01-01

339

Effect of diagnostic testing error on intracluster correlation coefficient estimation  

Microsoft Academic Search

Estimation of the intracluster correlation coefficient (ICC) for infectious animal diseases may be of interest for survey planning and for calculating variance inflation factors for estimators of prevalence. Typically, diagnostic tests with imperfect sensitivity and specificity are used in surveys. In such studies, where animals from multiple herds are tested, the ICC often is estimated using apparent (test-based) rather than

A. J. Branscum; I. A. Gardner; B. A. Wagner; P. S. McInturff; M. D. Salman

2005-01-01

340

Systematic Error Studies for a Measurement of the Beta Asymmetry Parameter Using Ultracold Neutrons  

NASA Astrophysics Data System (ADS)

The angular correlation between a neutron's spin and the initial direction of its emitted electron when undergoing beta decay is known as the beta asymmetry parameter. This parameter can be used to help search for physics beyond the Standard Model by determining the value of the up and down quark weak mixing angle and testing the unitarity of the Cabibbo-Kobayashi-Maskawa matrix. The UCNA experiment at the Los Alamos Neutron Science Center seeks to obtain a precise measurement of the neutron beta asymmetry by studying the beta decay of a dense population of identically polarized ultracold neutrons (UCN). The UCN are produced in a solid deuterium source and are polarized by a static magnetic field. Electrons emitted from the beta decay of UCN travel outwards along solenoidal field lines towards one of two oppositely-placed units each made up of a multi-wire proportional chamber backed by a plastic scintillator detector. Accurate determination of the events in these detection units is essential to a high-quality measurement of the expected directional asymmetry. The results of systematic error studies of recent UCNA data are presented.

Russell, Rebecca

2009-10-01

341

Estimating friction errors in MOC analyses of unsteady pipe flows  

Microsoft Academic Search

Errors arising from the numerical treatment of friction in unsteady flows in small pipe networks are assessed for fixed-grid, method of characteristics analyses (MOC) with no interpolation. Although the results of the study are targeted at general unsteady flows, the underpinning analytical development is based on the behaviour of standing waves. This enables quantitative conclusions to be drawn about the

Masashi Shimada; Jim Brown; Alan Vardy

2007-01-01

342

A theory to correct the systematic error caused by the imperfectly matched beam width to vessel diameter ratio on volumetric flow measurements using ultrasound techniques.  

PubMed

When a multigate procedure is used to measure volumetric flow in vessels, in addition to the flow rate result obtained from the conventional velocity profile method, a second result from an "average velocity profile method" can be obtained simultaneously. The latter method obtains the flow rate from the product of the average velocity across the profile and the cross-sectional area of the vessel. A theoretical model has been used to study the effect of the beam width to vessel diameter ratio (BWR) on these two results, as well as a third flow rate result obtained from the uniform insonation method. A theory has been established to correct the systematic error caused by the imperfectly matched BWR associated with each method. It uses a correction factor and the difference between the results from the average velocity profile method and the velocity profile method to compensate for the systematic error. The relationship between an optimal correction factor and the BWR under different flow conditions has been studied. The results using the correction theory in this model show that if the estimated BWR is within ±0.1 of the actual BWR value, the theoretical error in volumetric flow estimation can be limited to within 6.5% for the entire range of BWR. PMID:8553499

Fei, D Y

1995-01-01
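The compensation scheme described in this record combines a correction factor with the difference between the two simultaneously available flow estimates. A minimal sketch of that combination follows; the function name, the linear form, and the constant `k` are illustrative assumptions, since the paper derives the optimal factor as a function of BWR and flow conditions.

```python
def corrected_flow(q_profile, q_avg_profile, k):
    """Sketch of the BWR-error compensation described in the record:
    adjust the velocity-profile flow estimate q_profile by a correction
    factor k times its difference from the average-velocity-profile
    estimate q_avg_profile. Both inputs are volumetric flow rates from
    the same multigate acquisition; k is a hypothetical constant here."""
    return q_profile + k * (q_avg_profile - q_profile)
```

With k = 0 the conventional velocity-profile result is returned unchanged; with k = 1 the average-velocity-profile result is returned, so intermediate k values blend the two according to the estimated BWR.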

343

Systematic error arising from 'sequential' standard addition calibrations. 2. Determination of analyte mass fraction in blank solutions.  

PubMed

The use of a sequential standard addition calibration (S-SAC) can introduce systematic errors into measurements results. Whilst this error for the determination of blank-corrected solutions has previously been described, no similar treatment has been available for the quantification of analyte mass fraction in blank solutions--a crucial first step in any analytical procedure. This paper presents the theory describing the measurement of blank solutions using S-SAC, derives the correction that needs to be applied following analysis, and demonstrates the systematic error that occurs if this correction is not applied. The relative magnitudes of this bias and the precision of extrapolated measurements values are also considered. PMID:19646577

Brown, Richard J C

2009-08-26

344

Psychological scaling of expert estimates of human error probabilities: application to nuclear power plant operation  

Microsoft Academic Search

The US Nuclear Regulatory Commission and Sandia National Laboratories sponsored a project to evaluate psychological scaling techniques for use in generating estimates of human error probabilities. The project evaluated two techniques: direct numerical estimation and paired comparisons. Expert estimates were found to be consistent across and within judges. Convergent validity was good, in comparison to estimates in a handbook of

K. Comer; C. D. Gaddy; D. A. Seaver; W. G. Stillwell

1985-01-01

345

An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.  

ERIC Educational Resources Information Center

Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…

De Ayala, R. J.; And Others

346

Stochastic Parameter Estimation Procedures for Hydrologic Rainfall-Runoff Models: Correlated and Heteroscedastic Error Cases  

NASA Astrophysics Data System (ADS)

A maximum likelihood estimation procedure is presented through which two aspects of the streamflow measurement errors of the calibration phase are accounted for. First, the correlated error case is considered, where a first-order autoregressive scheme is presupposed for the additive errors. This proposed procedure first determines the anticipated correlation coefficient of the errors and then uses it in the objective function to estimate the best values of the model parameters. Second, the heteroscedastic error case (changing variance) is considered, for which a weighting approach, using the concept of power transformation, is developed. The performances of the new procedures are tested with synthetic data for various error conditions on a two-parameter model. In comparison with the simple least squares criterion and the weighted least squares scheme of the HEC-1 of the U.S. Army Corps of Engineers for the heteroscedastic case, the new procedures consistently produced better estimates. The procedures were found to be easy to implement with no convergence problem. In the absence of correlated errors, as theoretically expected, the correlated error procedure produces exactly the same estimates as the simple least squares criterion. Likewise, the self-correcting ability of the heteroscedastic error procedure was effective in reducing the objective function to that of the simple least squares as data gradually became homoscedastic. Finally, the effective residual tests for detection of the above-mentioned error situations are discussed.
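The two objective functions described above might look like the following (a sketch under assumed notation; `model`, the AR(1) whitening, and the power-transformation weights are schematic, not the authors' exact formulation):

```python
import numpy as np

def whitened_sse(params, model, t, q_obs, rho):
    """Correlated-error case: remove a first-order autoregressive scheme
    with coefficient rho from the additive errors before squaring."""
    e = q_obs - model(params, t)
    innovations = e[1:] - rho * e[:-1]
    return float(np.sum(innovations ** 2))

def weighted_sse(params, model, t, q_obs, lam):
    """Heteroscedastic case: weighted least squares with weights derived
    from a power transformation, down-weighting high-variance flows."""
    q_hat = model(params, t)
    weights = q_hat ** (2.0 * (lam - 1.0))
    return float(np.sum(weights * (q_obs - q_hat) ** 2))
```

With rho = 0 the first objective reduces to ordinary least squares over the lagged residuals, and with lam = 1 the second reduces to the simple least squares criterion, matching the limiting behavior noted in the abstract.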

Sorooshian, Soroosh; Dracup, John A.

1980-04-01

347

The White Dwarf Luminosity Function: Measurement Errors and Estimators  

NASA Astrophysics Data System (ADS)

The white dwarf luminosity function is an important tool for the study of the solar neighborhood, since it allows the determination of a wide range of galactic parameters, the age of the Galactic disk being the most important one. However, the white dwarf luminosity function is not free of biases induced by measurement errors, sampling biases, the Lutz-Kelker bias, or even contamination by two or more kinematic populations, such as the thick-disk or halo populations. We have used a Monte Carlo simulator to generate a controlled synthetic population of disk white dwarfs, and we analyze the behavior of the 1/Vmax method and of the Choloniewski method under some reasonable assumptions about the measurement errors and the contamination of the sample by the halo white dwarf population.
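Schmidt's 1/Vmax estimator analyzed in this record can be sketched as follows (a simplified, purely magnitude-limited version; the survey window function and population mixing studied in the paper are omitted, and the function name is illustrative):

```python
import numpy as np

def vmax_luminosity_function(abs_mag, m_lim, bin_edges, omega=4.0 * np.pi):
    """1/Vmax estimator: each star is weighted by the inverse of the
    largest volume in which it would still satisfy the survey's limiting
    apparent magnitude m_lim, over solid angle omega (steradians)."""
    # distance (pc) at which the star reaches the limiting apparent magnitude
    d_max = 10.0 ** ((m_lim - abs_mag + 5.0) / 5.0)
    v_max = (omega / 3.0) * d_max ** 3
    phi, _ = np.histogram(abs_mag, bins=bin_edges, weights=1.0 / v_max)
    return phi  # space density per magnitude bin
```

A star of absolute magnitude 10 in a survey limited at apparent magnitude 15 has d_max = 100 pc, so it contributes 1/Vmax = 3/(4π·10⁶) pc⁻³ to its bin.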

Torres, S.; García-Berro, E.; Geijo, E. M.; Isern, J.

2007-09-01

348

Blind estimation of timing errors in interleaved AD converters  

Microsoft Academic Search

Parallel A/D converter structures are one way to increase the sampling rate. Instead of increasing the sample rate of a single A/D converter, several A/D converters with a lower sampling rate can be used. A problem in these structures is that the time between samples is usually not equal, because there are errors in the delays between the A/D converters. We

J. Elbornsson; J.-E. Eklund

2001-01-01

349

Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery  

NASA Technical Reports Server (NTRS)

Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.
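The flag-and-exclude step at the end of the abstract amounts to simple threshold masking (a sketch; the array name and threshold value below are illustrative, not from the paper):

```python
import numpy as np

def flag_pixels(predicted_aerosol_error, threshold):
    """Boolean mask of pixels whose simulated multiple-scattering error
    exceeds the threshold; flagged pixels are excluded from analysis."""
    return predicted_aerosol_error > threshold

simulated_error = np.array([[0.02, 0.08],
                            [0.12, 0.01]])
flagged = flag_pixels(simulated_error, 0.05)
```

The mask is then applied to the corresponding pixels of the actual image before compositing or database compilation.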

Martin, D. L.; Perry, M. J.

1994-01-01

350

Motion and Structure From Two Perspective Views: Algorithms, Error Analysis, and Error Estimation  

Microsoft Academic Search

Deals with estimating motion parameters and the structure of the scene from point (or feature) correspondences between two perspective views. An algorithm is presented that gives a closed-form solution for motion parameters and the structure of the scene. The algorithm utilizes redundancy in the data to obtain more reliable estimates in the presence of noise. An approach is introduced to

Juyang Weng; Thomas S. Huang; Narendra Ahuja

1989-01-01

351

DETECTABILITY AND ERROR ESTIMATION IN ORBITAL FITS OF RESONANT EXTRASOLAR PLANETS  

SciTech Connect

We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than ~4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist, but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and number of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.

Giuppone, C. A.; Beauge, C. [Observatorio Astronomico, Universidad Nacional de Cordoba, Cordoba (Argentina); Tadeu dos Santos, M.; Ferraz-Mello, S.; Michtchenko, T. A. [Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Sao Paulo (Brazil)

2009-07-10

352

Empirical Estimation of Standard Errors of Compensatory MIRT Model Parameters Obtained from the NOHARM Estimation Program. ACT Research Report Series.  

ERIC Educational Resources Information Center

Two studies were carried out to evaluate the quality of multidimensional item response theory (MIRT) model parameter estimates obtained from the computer program NOHARM. The purpose of the first study was to compute empirical estimates of the standard errors of the parameters. In addition, the parameter estimates were evaluated for bias and the…

Miller, Timothy R.

353

A Fortran IV Program for Estimating Parameters through Multiple Matrix Sampling with Standard Errors of Estimate Approximated by the Jackknife.  

ERIC Educational Resources Information Center

Described and listed herein, with concomitant sample input and output, is the Fortran IV program which estimates parameters, and the standard error of estimate for each parameter, through multiple matrix sampling. The specific program is an improved and expanded version of an earlier version. (Author/BJG)

Shoemaker, David M.

354

Implicit Polynomial Representation Through a Fast Fitting Error Estimation  

Microsoft Academic Search

This paper presents a simple distance estimation for implicit polynomial fitting. It is computed as the height of a simplex built between the point and the surface (i.e., a triangle in 2-D or a tetrahedron in 3-D), which is used as a coarse but reliable estimation of the orthogonal distance. The proposed distance can be described as a function of

Mohammad Rouhani; Angel Domingo Sappa

2012-01-01

355

Space-Time Error Representation and Estimation in Navier-Stokes Calculations  

NASA Technical Reports Server (NTRS)

The mathematical framework for a posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

Barth, Timothy J.

2006-01-01

356

Effect of diagnostic testing error on intracluster correlation coefficient estimation.  

PubMed

Estimation of the intracluster correlation coefficient (ICC) for infectious animal diseases may be of interest for survey planning and for calculating variance inflation factors for estimators of prevalence. Typically, diagnostic tests with imperfect sensitivity and specificity are used in surveys. In such studies, where animals from multiple herds are tested, the ICC often is estimated using apparent (test-based) rather than true prevalence data. Through Monte Carlo simulation, we examined the effect of substituting diagnostic test outcomes for true infection status on an ANOVA estimator of ICC, which was designed for use with true infection status data. We considered effects of diagnostic test sensitivity and specificity on the estimated ICC when the true ICC value and infection status of the sampled individuals were known. The ANOVA estimator underestimated the true ICC when the diagnostic test was imperfect. We also demonstrated, under the beta-binomial model, that the ICC based on apparent infection status for individuals is less than or equal to the ICC based on true infection status. In addition, we propose a Bayesian model for estimating the ICC that incorporates imperfect sensitivity and specificity, and illustrate the Bayesian model using a simulation study and one example: a seroprevalence survey of ovine progressive pneumonia in U.S. sheep flocks. PMID:15899297
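The attenuation effect reported above can be reproduced in a few lines (a sketch of the Monte Carlo idea, not the authors' simulation design; the beta parameters, herd count, herd size, and test characteristics below are made up):

```python
import numpy as np

def anova_icc(pos_counts, n):
    """ANOVA (moment) estimator of the intracluster correlation for
    binary outcomes observed in k herds of equal size n."""
    k = len(pos_counts)
    p = pos_counts / n
    msb = n * p.var(ddof=1)                                   # between-herd mean square
    msw = (pos_counts * (n - pos_counts) / n).sum() / (k * (n - 1))  # within-herd
    return (msb - msw) / (msb + (n - 1) * msw)

rng = np.random.default_rng(0)
herd_prev = rng.beta(2.0, 8.0, size=500)                      # true herd prevalences
status = rng.binomial(1, herd_prev[:, None], size=(500, 30))  # true infection status
se, sp = 0.90, 0.95                                           # imperfect test
test_pos = np.where(status == 1,
                    rng.binomial(1, se, status.shape),
                    rng.binomial(1, 1.0 - sp, status.shape))  # apparent status
icc_true = anova_icc(status.sum(axis=1), 30)
icc_apparent = anova_icc(test_pos.sum(axis=1), 30)            # attenuated toward zero
```

With these settings the apparent-status ICC comes out below the true-status ICC, consistent with the inequality demonstrated in the abstract.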

Branscum, A J; Gardner, I A; Wagner, B A; McInturff, P S; Salman, M D

2005-06-10

357

Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE  

NASA Technical Reports Server (NTRS)

Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth orbiting satellites, such as microwave imagers and dual-wavelength radars such as with the Global Precipitation Measurement (GPM) mission.

Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

2011-01-01

358

Estimating recharge rate from groundwater age using a simplified analytical approach: Applicability and error estimation in heterogeneous porous media  

NASA Astrophysics Data System (ADS)

High-K lenses can induce complex groundwater age distributions. Quantifying recharge rate error requires a priori knowledge of heterogeneity. The average of multiple unbiased samples may provide a reasonable estimate.

Kozuskanich, John; Simmons, Craig T.; Cook, Peter G.

2014-04-01

359

Effect of calibration errors on Bayesian parameter estimation for gravitational wave signals from inspiral binary systems in the advanced detectors era  

NASA Astrophysics Data System (ADS)

By 2015 the advanced versions of the gravitational-wave detectors Virgo and LIGO will be online. They will collect data in coincidence with enough sensitivity to potentially deliver multiple detections of gravitational waves from inspirals of compact-object binaries. This work is focused on understanding the effects introduced by uncertainties in the calibration of the interferometers. We consider plausible calibration errors based on estimates obtained during LIGO's fifth and Virgo's third science runs, which include frequency-dependent amplitude errors of ~10% and frequency-dependent phase errors of ~3 degrees in each instrument. We quantify the consequences of such errors on estimates of the parameters of inspiraling binaries. We find that the systematics introduced by calibration errors on the inferred values of the chirp mass and mass ratio are smaller than 20% of the statistical measurement uncertainties in parameter estimation for 90% of signals in our mock catalog. Meanwhile, the calibration-induced systematics in the inferred sky location of the signal are smaller than ~50% of the statistical uncertainty. We thus conclude that calibration-induced errors at this level are not a significant detriment to accurate parameter estimation.
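The kind of calibration error considered can be modeled by perturbing a frequency-domain waveform (a minimal sketch; for simplicity the amplitude and phase errors here are frequency-independent, unlike the frequency-dependent errors studied in the paper, and the function name is illustrative):

```python
import numpy as np

def apply_calibration_error(h_f, amp_err=0.10, phase_err_deg=3.0):
    """Perturb a frequency-domain strain h(f) by a fractional amplitude
    error and a phase error, mimicking an imperfectly calibrated
    detector response."""
    return h_f * (1.0 + amp_err) * np.exp(1j * np.deg2rad(phase_err_deg))

h = np.ones(4, dtype=complex)      # toy frequency-domain waveform samples
h_mis = apply_calibration_error(h)  # 10% amplitude, 3-degree phase error
```

Parameter estimation on `h_mis` rather than `h` then reveals how much of the resulting bias is absorbed into the inferred masses and sky location.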

Vitale, Salvatore; Del Pozzo, Walter; Li, Tjonnie G. F.; Van Den Broeck, Chris; Mandel, Ilya; Aylott, Ben; Veitch, John

2012-03-01

360

Superconvergence and a posteriori error estimation for triangular mixed finite elements  

Microsoft Academic Search

Summary. In this paper, we prove superconvergence results for the vector variable when lowest-order triangular mixed finite elements of Raviart-Thomas type [17] on uniform triangulations are used, i.e., that the L²-distance between the approximate solution and a suitable projection of the real solution is of higher order than the L²-error. We prove results for both Dirichlet and Neumann boundary conditions. Recently,

Jan H. Brandts

1994-01-01

361

A posteriori error estimates for the Johnson-Nédélec FEM-BEM coupling  

PubMed Central

Only very recently, Sayas [The validity of Johnson–Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM J Numer Anal 2009;47:3451–63] proved that the Johnson–Nédélec one-equation approach from [On the coupling of boundary integral and finite element methods. Math Comput 1980;35:1063–79] provides a stable coupling of finite element method (FEM) and boundary element method (BEM). In our work, we now adapt the analytical results for different a posteriori error estimates developed for the symmetric FEM–BEM coupling to the Johnson–Nédélec coupling. More precisely, we analyze the weighted-residual error estimator, the two-level error estimator, and different versions of (h-h/2)-based error estimators. In numerical experiments, we use these estimators to steer h-adaptive algorithms, and compare the effectivity of the different approaches.

Aurada, M.; Feischl, M.; Karkulik, M.; Praetorius, D.

2012-01-01

362

Error estimation and adaptive mesh refinement for parallel analysis of shell structures  

NASA Technical Reports Server (NTRS)

The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and they are used to drive adaptive mesh refinement, which we demonstrate for two-dimensional plane-stress problems. The parallelizing of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

1994-01-01

363

A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint  

NASA Technical Reports Server (NTRS)

This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

Barth, Timothy

2004-01-01

364

Reduction of systematic errors in regional climate simulations of the summer monsoon over East Asia and the western North Pacific by applying the spectral nudging technique  

NASA Astrophysics Data System (ADS)

In this study, the systematic errors in regional climate simulation of the 28-year summer monsoon over East Asia and the western North Pacific (WNP), and the impact of the spectral nudging technique (SNT) on the reduction of those errors, are investigated. The experiment in which the SNT is not applied (the CTL run) has large systematic errors in seasonal mean climatology, such as overestimated precipitation, a weakened subtropical high, and an enhanced low-level southwesterly over the subtropical WNP, while the experiment using the SNT (the SP run) yields considerably smaller systematic errors. In the CTL run, the systematic error of simulated precipitation over the ocean increases significantly after mid-June, since the CTL run cannot reproduce the principal intraseasonal variation of summer monsoon precipitation. The SP run can appropriately capture the spatial distribution as well as the temporal variation of the principal empirical orthogonal function mode, and therefore the systematic error over the ocean does not increase after mid-June. The systematic error of simulated precipitation over the subtropical WNP in the CTL run results from an unreasonable positive feedback between precipitation and surface latent heat flux induced by the warm sea surface temperature anomaly. Since the SNT plays a role in decreasing this positive feedback by improving monsoon circulations, the SP run can considerably reduce the systematic errors of simulated precipitation as well as atmospheric fields over the subtropical WNP region.

Cha, Dong-Hyun; Lee, Dong-Kyou

2009-07-01

365

State and model error estimation for distributed parameter systems. [in large space structure control  

NASA Technical Reports Server (NTRS)

In-flight estimation of large structure model errors in order to detect inevitable deficiencies in large structure controller/estimator models is discussed. Such an estimation process is particularly applicable in the area of shape control system design required to maintain a prescribed static structural shape and, in addition, suppress dynamic disturbances due to the vehicle vibrational modes. The paper outlines a solution to the problem of static shape estimation where the vehicle shape must be reconstructed from a set of measurements discretely located throughout the structure. The estimation process is based on the principle of least-squares that inherently contains the definition and explicit computation of model error estimates that are optimal in some sense. Consequently, a solution is provided for the problem of estimation of static model errors (e.g., external loads). A generalized formulation applicable to distributed parameters systems is first worked out and then applied to a one-dimensional beam-like structural configuration.

Rodriguez, G.

1979-01-01

366

A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem.  

PubMed

Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We not only provide a solution to a long-standing open problem, but also make methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions. PMID:20351800

Delaigle, Aurore; Fan, Jianqing; Carroll, Raymond J

2009-03-01

367

Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems  

NASA Technical Reports Server (NTRS)

Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.

Harwit, M.

1977-01-01

368

A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates  

NASA Astrophysics Data System (ADS)

A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
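The inexact solve by Gauss-Seidel iteration mentioned above can be illustrated with plain forward sweeps on a small symmetric positive-definite system (a sketch; the mesh-adaptation context, metric tensor, and hierarchical error problem of the paper are not reproduced here):

```python
import numpy as np

def gauss_seidel_sweeps(A, b, iters=5):
    """Forward Gauss-Seidel sweeps: a cheap approximate solver, in the
    spirit of iterating the global error problem a few times instead of
    solving it exactly."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            # update x[i] using the latest values of the other unknowns
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_approx = gauss_seidel_sweeps(A, b, iters=25)
```

As the abstract observes, a handful of sweeps already yields an approximation good enough to extract directional information for adaptation; running more sweeps simply converges to the exact solve.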

Huang, Weizhang; Kamenski, Lennard; Lang, Jens

2010-03-01

369

A Posteriori Error Estimation for Finite Volume and Finite Element Approximations Using Broken Space Approximation  

NASA Technical Reports Server (NTRS)

We consider a posteriori error estimates for finite volume and finite element methods on arbitrary meshes subject to prescribed error functionals. Error estimates of this type are useful in a number of computational settings: (1) quantitative prediction of the numerical solution error, (2) adaptive meshing, and (3) load balancing of work on parallel computing architectures. Our analysis recasts the class of Godunov finite volumes schemes as a particular form of discontinuous Galerkin method utilizing broken space approximation obtained via reconstruction of cell-averaged data. In this general framework, weighted residual error bounds are readily obtained using duality arguments and Galerkin orthogonality. Additional consideration is given to issues such as nonlinearity, efficiency, and the relationship to other existing methods. Numerical examples are given throughout the talk to demonstrate the sharpness of the estimates and efficiency of the techniques. Additional information is contained in the original.

Barth, Timothy J.; Larson, Mats G.

2000-01-01

370

On residual-based a posteriori error estimation in hp-FEM  

Microsoft Academic Search

A family of residual-based error indicators, indexed over [0,1], for the hp-version of the finite element method is presented and analyzed. Upper and lower bounds for the error indicators are established. To do so, the well-known Clément/Scott-Zhang interpolation operator is generalized to the hp-context, and new polynomial inverse estimates are presented. An hp-adaptive strategy is proposed. Numerical examples illustrate the performance of the error indicators

Jens Markus Melenk; Barbara I. Wohlmuth

2001-01-01

371

A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems  

NASA Technical Reports Server (NTRS)

This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

Larson, Mats G.; Barth, Timothy J.

1999-01-01

372

Errors of level in spinal surgery: an evidence-based systematic review.  

PubMed

Wrong-level surgery is a unique pitfall in spinal surgery and is part of the wider field of wrong-site surgery. Wrong-site surgery affects both patients and surgeons and has received much media attention. We performed this systematic review to determine the incidence and prevalence of wrong-level procedures in spinal surgery and to identify effective prevention strategies. We retrieved 12 studies reporting the incidence or prevalence of wrong-site surgery and that provided information about prevention strategies. Of these, ten studies were performed on patients undergoing lumbar spine surgery and two on patients undergoing lumbar, thoracic or cervical spine procedures. A higher frequency of wrong-level surgery in lumbar procedures than in cervical procedures was found. Only one study assessed preventative strategies for wrong-site surgery, demonstrating that current site-verification protocols did not prevent about one-third of the cases. The current literature does not provide a definitive estimate of the occurrence of wrong-site spinal surgery, and there is no published evidence to support the effectiveness of site-verification protocols. Further prevention strategies need to be developed to reduce the risk of wrong-site surgery. PMID:23109637

Longo, U G; Loppini, M; Romeo, G; Maffulli, N; Denaro, V

2012-11-01

373

The Consistency between the Empirical and the Analytical Standard Errors of Multidimensional IRT Item Estimates.  

ERIC Educational Resources Information Center

An evaluation of the variation of item estimates was conducted for the multidimensional extension of the logistic item response theory (MIRT) model. The empirically determined standard errors (SEs) of marginal maximum likelihood estimation (MMLE)/Bayesian item estimates from 40 items from the ACT Assessment (Form 24b, 1985) were obtained when the…

Li, Yuan H.; Yang, Yu N.

374

Analytic Estimation of Standard Error and Confidence Interval for Scale Reliability.  

ERIC Educational Resources Information Center

Proposes an analytic approach to standard error and confidence interval estimation of scale reliability with fixed congeneric measures. The method is based on a generally applicable estimator stability evaluation procedure, the delta method. The approach, which combines wide-spread point estimation of composite reliability in behavioral scale…

Raykov, Tenko

2002-01-01

375

The estimation error covariance matrix for the ideal state reconstructor with measurement noise  

NASA Technical Reports Server (NTRS)

A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.

Polites, Michael E.

1988-01-01

376

A residual a posteriori error estimator for the finite element solution of the Helmholtz equation  

Microsoft Academic Search

This paper suggests an expression of a residual-based a posteriori error estimation for a Galerkin finite element discretisation of the Helmholtz operator applied in acoustics. In the first part, we summarise the finite element approach and the a priori estimates. The new residual estimator is then formulated, illustrated and tested on two one-dimensional model problems with closed form solution.

S. Irimie; Ph. Bouillard

2001-01-01

377

A posteriori error estimation of the finite element solutions for elliptic double obstacle problems  

Microsoft Academic Search

A posteriori estimation of the finite element solutions for double obstacle problems is obtained. There has been a well-established theory and a large number of a posteriori error estimations for linear elliptic one-sided obstacle problems. We will show that the estimation for ||z - u_h||_1 is also suited for double obstacle problems.

Jiantao Gu; Yuxia Tong; Jun Zheng

2010-01-01

378

Analysis of Bias, Variance and Mean Square Estimation Error in Reduced Order Filters.  

National Technical Information Service (NTIS)

The Kalman filter gives the optimal minimum variance, unbiased estimate of the system state. It is shown in this paper that for a ROF one cannot in general obtain an unbiased estimator. The conditional mean of the estimation error is non-zero and, therefo...

R. B. Asher; J. C. Ryles

1974-01-01

379

On errors-in-variables regression with arbitrary covariance and its application to optical flow estimation  

Microsoft Academic Search

Linear inverse problems in computer vision, including motion estimation, shape fitting and image reconstruction, give rise to parameter estimation problems with highly correlated errors in variables. Established total least squares methods estimate the most likely corrections Â and b̂ to a given data matrix (A, b) perturbed by additive Gaussian noise, such that there exists a solution y with

Björn Andres; Claudia Kondermann; Daniel Kondermann; Ullrich Köthe; Fred A. Hamprecht; Christoph S. Garbe

2008-01-01

380

Accounting for uncertainty in systematic bias in exposure estimates used in relative risk regression  

SciTech Connect

In many epidemiologic studies addressing exposure-response relationships, sources of error that lead to systematic bias in exposure measurements are known to be present, but there is uncertainty in the magnitude and nature of the bias. Two approaches that allow this uncertainty to be reflected in confidence limits and other statistical inferences were developed, and are applicable to both cohort and case-control studies. The first approach is based on a numerical approximation to the likelihood ratio statistic, and the second uses computer simulations based on the score statistic. These approaches were applied to data from a cohort study of workers at the Hanford site (1944-86) exposed occupationally to external radiation; to combined data on workers exposed at Hanford, Oak Ridge National Laboratory, and Rocky Flats Weapons plant; and to artificial data sets created to examine the effects of varying sample size and the magnitude of the risk estimate. For the worker data, sampling uncertainty dominated and accounting for uncertainty in systematic bias did not greatly modify confidence limits. However, with increased sample size, accounting for these uncertainties became more important, and is recommended when there is interest in comparing or combining results from different studies.

Gilbert, E.S.

1995-12-01

381

A Posteriori Finite Element Error Estimation for Diffusion Problems  

Microsoft Academic Search

Adjerid et al. [2] and Yu [19, 20] show that a posteriori estimates of spatial discretization errors of piecewise bi-p polynomial finite element solutions of elliptic and parabolic problems on meshes of square elements may be obtained from jumps in solution gradients at element vertices when p is odd and from local elliptic or parabolic problems when p is even. We show

Slimane Adjerid; Belkacem Belguendouz; Joseph E. Flaherty

1999-01-01

382

Aircraft parameter estimation using output-error methods  

Microsoft Academic Search

Certification requirements, optimization and minimum project costs, design of flight control laws and the implementation of flight simulators are among the principal applications of inverse problems in the aeronautical industry. The problem of aircraft identification and parameter estimation demands an accurate mathematical model of the aerodynamics and adequate experimental flight data gathering and processing. The aircraft dynamic modeling is

Luiz Carlos Sandoval Góes; Elder Moreira Hemerly; Benedito Carlos de Oliveira Maciel; Wilson Rios Neto; Celso Braga Mendonca; João Hoff

2006-01-01

383

ARMA Spectral estimation: A model equation error procedure  

Microsoft Academic Search

A procedure is presented for generating an ARMA spectral model of a stationary time series based upon a finite set of time series observations. The ARMA model's coefficients are estimated by utilizing a basic difference equation characterizing the underlying rational spectral model. In examples treated to date, this new procedure has been found to produce

J. Cadzow

1980-01-01

384

EIA Corrects Errors in Its Drilling Activity Estimates Series  

EIA Publications

The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

Information Center

1998-03-01

385

Estimating standard errors of accuracy assessment statistics under cluster sampling  

Microsoft Academic Search

Cluster sampling is a viable sampling design for collecting reference data for the purpose of conducting an accuracy assessment of land-cover classifications obtained from remotely sensed data. The formulas for estimating various accuracy parameters such as the overall proportion of pixels correctly classified, the kappa coefficient of agreement, and user's and producer's accuracy are the same under cluster sampling and
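As a rough illustration of the kind of cluster-based standard error involved, the sketch below uses toy numbers and the textbook equal-cluster-size ratio estimator, not necessarily Stehman's exact formulas:

```python
import numpy as np

# Toy reference clusters: each row = (correctly classified pixels, cluster size).
clusters = np.array([(18, 20), (14, 20), (19, 20), (12, 20), (17, 20)], float)
correct, sizes = clusters[:, 0], clusters[:, 1]

# Overall proportion of pixels correctly classified.
p_hat = correct.sum() / sizes.sum()

# Under cluster sampling, the SE comes from the between-cluster variability
# of the per-cluster proportions, not from treating pixels as independent.
p_i = correct / sizes
se = float(np.sqrt(np.var(p_i, ddof=1) / len(p_i)))
print(p_hat, se)
```

Treating the 100 pixels as a simple random sample would understate this SE, which is the abstract's motivation for cluster-specific variance formulas.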

Stephen V. Stehman

1997-01-01

386

Estimation with Information Loss: Asymptotic Analysis and Error Bounds  

Microsoft Academic Search

In this paper, we consider a discrete time state estimation problem over a packet-based network. In each discrete time step, the measurement is sent to a Kalman filter with some probability that it is received or dropped. Previous pioneering work on Kalman filtering with intermittent observation losses shows that there exists a certain threshold of the packet dropping rate below

Ling Shi; Michael Epstein; Abhishek Tiwari; Richard M. Murray

2005-01-01

387

Energy norm a posteriori error estimates for mixed finite element methods  

Microsoft Academic Search

This paper deals with the a posteriori error analysis of mixed finite element methods for second order elliptic equations. It is shown that a reliable and efficient error estimator can be constructed using a postprocessed solution of the method. The analysis is performed in two different ways: under a saturation assumption and using a Helmholtz decomposition for vector fields.

Carlo Lovadina; Rolf Stenberg

2006-01-01

388

A posteriori error estimates for mixed finite element approximation of nonlinear quadratic optimal control problems  

Microsoft Academic Search

In this paper, we obtain an a posteriori error analysis for mixed finite element approximation of convex optimal control problems governed by a nonlinear second-order elliptic equation. Our results are based on the approximation for both the coupled state variables and the control variable. We propose to improve the error estimates, which can be used to construct an adaptive finite

Yanping Chen; Zuliang Lu; Min Fu

2012-01-01

389

MODIS Cloud Optical Property Retrieval Uncertainties Derived from Pixel-Level Radiometric Error Estimates  

NASA Technical Reports Server (NTRS)

MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters attributable to pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.

Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong

2011-01-01

390

B-Spline Estimation in a Semiparametric Regression Model with Nonlinear Time Series Errors  

Microsoft Academic Search

We study the estimation problems for a partly linear regression model with a nonlinear time series error structure. The model consists of a parametric linear component for the regression coefficients and a nonparametric nonlinear component. The random errors are unobservable and modeled by a first-order Markovian bilinear process. Based on a B-spline series approximation of the nonlinear component function,

Jinhong You; Gemai Chen; Xian Zhou

391

Quantifying the error in estimated transfer functions with application to model order selection  

Microsoft Academic Search

Previous results on estimating errors or error bounds on identified transfer functions have relied on prior assumptions about the noise and the unmodeled dynamics. This prior information took the form of parameterized bounding functions or parameterized probability density functions, in the time or frequency domain with known parameters. It is shown that the parameters that quantify this prior information can

Graham C. Goodwin; Michel Gevers; Brett Ninness

1992-01-01

392

Design of a steam generator water level controller via the estimation of the flow errors  

Microsoft Academic Search

The difficulty of controlling the water level of the steam generator makes the plant more vulnerable to high and low level trips. In particular, the swell and shrink phenomena and the flow errors at low power levels make the level control of steam generators difficult. In this work, the flow errors are regarded as time-varying parameters and estimated by the

Man Gyun Na

1995-01-01

393

Minimum mean square error estimates of invariant parameters in the quadrate of ISAR signal amplitude  

Microsoft Academic Search

A minimum mean square error (MMSE) method for estimation of the invariant geometric and kinematic parameters of the quadrate amplitude of the inverse synthetic aperture radar (ISAR) signal is suggested. This method is an alternative to the correlation-spectral techniques used for ISAR signal processing. This computational technique is based on minimizing the mean square error in the approximation of the

Andon Dimitrov Lazarov

2001-01-01

394

Parameter estimation for non-linear continuous-time systems in a bounded error context  

Microsoft Academic Search

This paper deals with guaranteed parameter estimation in a bounded error context for nonlinear continuous-time systems. Perturbations are assumed bounded but otherwise unknown. The solution is a set of parameter vectors consistent with modelling hypotheses, measured data and prior error bounds. The algorithm proposed in this paper does not suffer from initialization problems encountered in the local optimization methods. The

T. Raissi; N. Ramdani; Y. Candau

2003-01-01

395

Errors and parameter estimation in precipitation-runoff modeling 2. Case study.  

USGS Publications Warehouse

A case study is presented which illustrates some of the error analysis, sensitivity analysis, and parameter estimation procedures reviewed in the first part of this paper. It is shown that those procedures, most of which come from statistical nonlinear regression theory, are invaluable in interpreting errors in precipitation-runoff modeling and in identifying appropriate calibration strategies. -Author

Troutman, B. M.

1985-01-01

396

A posteriori error estimates for Markov approximations of Frobenius–Perron operators  

Microsoft Academic Search

We present lower and upper error bounds under the BV-norm for piecewise linear Markov finite approximations to Frobenius–Perron operators in terms of the BV-norm of the residuals, so that a posteriori error estimates can be given. Numerical results using the Monte Carlo implementation are presented to support the theoretical analysis.

Jiu Ding; Congming Jin; Aihui Zhou

2007-01-01

397

Use of State Estimation to Calculate Angle-of-Attack Position Error from Flight Test Data.  

National Technical Information Service (NTIS)

This thesis determined the position errors of an aircraft's angle-of-attack (AOA) sensor using state estimation with flight test data. The position errors were caused by local flow and upwash and were found to be a function of AOA and Mach number. The tes...

T. H. Thacker

1985-01-01

398

Estimating the approximation error when fixing unessential factors in global sensitivity analysis  

Microsoft Academic Search

One of the major settings of global sensitivity analysis is that of fixing non-influential factors, in order to reduce the dimensionality of a model. However, this is often done without knowing the magnitude of the approximation error being produced. This paper presents a new theorem for the estimation of the average approximation error generated when fixing a group of non-influential

I. M. Sobol’; S. Tarantola; D. Gatelli; S. S. Kucherenko; W. Mauntz

2007-01-01

399

Error estimates and adaptive time stepping for various direct time integration methods  

Microsoft Academic Search

In this paper, the global and local error estimates and adaptive time stepping for the various direct time integration in dynamic analysis are presented. A successive quadratic function is used for the locally exact value of the acceleration and the corresponding parameters for the function are obtained from accelerations at three time stations at every time stage. The local error

Chang-Koon Choi; Heung-Jin Chung

1996-01-01

400

RELATING ERROR BOUNDS FOR MAXIMUM CONCENTRATION ESTIMATES TO DIFFUSION METEOROLOGY UNCERTAINTY (JOURNAL VERSION)  

EPA Science Inventory

The paper relates the magnitude of the error bounds of data, used as inputs to a Gaussian dispersion model, to the magnitude of the error bounds of the model output. The research addresses the uncertainty in estimating the maximum concentrations from elevated buoyant sources duri...

401

Identification and estimation of nonlinear models with misclassification error using instrumental variables: A general solution  

Microsoft Academic Search

This paper provides a general solution to the problem of identification and estimation of nonlinear models with misclassification error in a general discrete explanatory variable using instrumental variables. The misclassification error is allowed to be correlated with all the explanatory variables in the model. It is not enough to identify the model by simply generalizing the identification in the binary

Yingyao Hu

2008-01-01

402

A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings  

ERIC Educational Resources Information Center

The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error

Lee, Guemin; Lewis, Daniel M.

2008-01-01

403

Capacity and power allocation for fading MIMO channels with channel estimation error  

Microsoft Academic Search

In this correspondence, we investigate the effect of channel estimation error on the capacity of multiple-input-multiple-output (MIMO) fading channels. We study lower and upper bounds of mutual information under channel estimation error, and show that the two bounds are tight for Gaussian inputs. Assuming Gaussian inputs we also derive tight lower bounds of ergodic and outage capacities and

Taesang Yoo; Andrea J. Goldsmith

2006-01-01

404

A novel determination method of thin film optical parameters with least dependence on photometric measurement systematic errors  

NASA Astrophysics Data System (ADS)

We present a novel method for determining thin film optical parameters based on partial derivative selection, which minimizes the impact of photometric measurement systematic errors on the accuracy with which the optical parameters are characterized. The spectral measurement data used in our numerical simulations are single-wavelength photometric data of P-polarized light measured at different incident angles. It is shown that, under the same level of systematic error in the measurement data, the deviations of the fitted optical parameter values from the real ones are much smaller in spectral bands where the partial derivatives have opposite signs for most of the incident angles at which data are collected than in other spectral regions. The theoretical explanation is discussed. For optical characterization of thin films with the best accuracy, it is advisable to select spectrophotometric data from the recommended spectral bands and to exclude the dangerous spectral bands according to the derivative information.

Wu, Suyong; Long, Xingwu; Yang, Kaiyong; Tan, Zhongqi

2012-06-01

405

Posteriori Error Estimates of Finite Element Solutions of Parametrized Nonlinear Equations.  

National Technical Information Service (NTIS)

Nonlinear differential equations with parameters are called parametrized nonlinear equations. This paper studies a posteriori error estimates of finite element solutions of second order parametrized strongly nonlinear equations in divergence form on one-d...

T. Tsuchiya; I. Babuska

1992-01-01

406

Discretization Error Estimation and Exact Solution Generation Using the Method Of Nearby Problems.  

National Technical Information Service (NTIS)

The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fit...

A. Raju; A. J. Sinclair; C. J. Roy; M. J. Kurzen; T. S. Phillips

2011-01-01

407

Range Profile Specific Optimal Waveforms for Minimum Mean Square Error Estimation.  

National Technical Information Service (NTIS)

Optimal waveforms for minimum mean square error range profile estimation are investigated. An idealized measurement and waveform adaptation process is developed that yields optimal scene and range specific waveforms. This process is idealized in that duri...

R. C. Chen

2010-01-01

408

The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates  

PubMed Central

A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced from 0.66 K to 0.44 K and the MAPE is 1.3%.
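The RMSE and MAPE metrics quoted above are straightforward to compute; a minimal sketch with hypothetical SST values (not the GOES data):

```python
import numpy as np

def rmse(pred, obs):
    # Root mean square error in the same units as the data (here kelvin).
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def mape(pred, obs):
    # Mean absolute percentage error, in percent.
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.mean(np.abs((pred - obs) / obs)) * 100.0)

# Toy SST values in kelvin (illustrative only).
obs  = [300.1, 301.4, 299.8, 300.9]
pred = [300.5, 301.0, 300.2, 300.4]
print(rmse(pred, obs), mape(pred, obs))
```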

Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

2011-01-01

409

Improved Estimation of Subsurface Magnetic Properties using Minimum Mean- Square Error Methods.  

National Technical Information Service (NTIS)

This thesis proposes an inversion method for the interpretation of complicated geological susceptibility models. The method is based on constrained Minimum Mean-Square Error (MMSE) estimation. The MMSE method allows the incorporation of available prior in...
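As a generic sketch of linear MMSE estimation with prior information (illustrative matrices, not the thesis's susceptibility model), the standard Gaussian update x̂ = μ + P Hᵀ (H P Hᵀ + R)⁻¹ (y − H μ) can be written as:

```python
import numpy as np

mu = np.array([0.0, 0.0])   # prior mean of the model parameters
P = np.diag([1.0, 4.0])     # prior covariance (the "available prior information")
H = np.array([[1.0, 0.5]])  # linear forward operator (hypothetical)
R = np.array([[0.1]])       # measurement-noise covariance
y = np.array([1.2])         # observed datum

# Linear MMSE (Gaussian) update: gain, estimate, posterior covariance.
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
x_hat = mu + K @ (y - H @ mu)
P_post = P - K @ H @ P  # posterior covariance is smaller than the prior
print(x_hat, np.diag(P_post))
```

The posterior variances are strictly smaller than the prior ones, which is how the prior constrains an otherwise underdetermined inversion.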

B. Saether

1997-01-01

410

Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.  

ERIC Educational Resources Information Center

Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

Olejnik, Stephen F.; Algina, James

1987-01-01

411

Simulation Based Landing System Verification — About the Challenges of Non-Linear Error Estimation  

NASA Astrophysics Data System (ADS)

Simulation based verification of non-linear events comes with additional risks. Special focus has to be set on software validation and error estimation. A dedicated uncertainty factor has been derived and evaluated on the basis of terrestrial tests.

Buchwald, R.

2014-06-01

412

Spatio-temporal Error on the Discharge Estimates for the SWOT Mission  

NASA Astrophysics Data System (ADS)

The Surface Water and Ocean Topography (SWOT) mission measures two key quantities over rivers: water surface elevation and slope. Water surface elevation from SWOT will have a vertical accuracy, when averaged over approximately one square kilometer, on the order of centimeters. Over reaches from 1-10 km long, SWOT slope measurements will be accurate to microradians. Elevation (depth) and slope offer the potential to produce discharge as a derived quantity. Estimates of instantaneous and temporally integrated discharge from SWOT data will also contain a certain degree of error. Two primary sources of measurement error exist. The first is the temporal sub-sampling of water elevations. For example, SWOT will sample some locations twice in the 21-day repeat cycle. If these two overpasses occurred during flood stage, an estimate of monthly discharge based on these observations would be much higher than the true value. Likewise, if estimating maximum or minimum monthly discharge, in some cases, SWOT may miss those events completely. The second source of measurement error results from the instrument's capability to accurately measure the magnitude of the water surface elevation. How this error affects discharge estimates depends on errors in the model used to derive discharge from water surface elevation. We present a global distribution of estimated relative errors in mean annual discharge based on a power law relationship between stage and discharge. Additionally, relative errors in integrated and average instantaneous monthly discharge associated with temporal sub-sampling over the proposed orbital tracks are presented for several river basins.
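The temporal sub-sampling error described above can be illustrated with a toy power-law rating curve Q = a·h^b and a synthetic stage series (hypothetical coefficients, not SWOT data):

```python
import numpy as np

# Hypothetical rating-curve coefficients and a synthetic seasonal stage series.
a, b = 5.0, 1.7                                           # Q = a * h**b
days = np.arange(365)
stage = 2.0 + 1.5 * np.sin(2 * np.pi * days / 365) ** 2   # stage h(t) in meters
discharge = a * stage ** b

true_annual_mean = discharge.mean()

# Temporal sub-sampling: pretend the satellite sees the river every ~10 days
# (on the order of 2 overpasses per 21-day repeat cycle).
sampled = discharge[::10]
relative_error = abs(sampled.mean() - true_annual_mean) / true_annual_mean
print(f"relative error from sub-sampling: {relative_error:.3%}")
```

For this smooth synthetic hydrograph the error is small; for flashy rivers whose floods fall between overpasses, the same calculation gives much larger errors, which is the abstract's point.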

Biancamaria, S.; Alsdorf, D. E.; Andreadis, K. M.; Clark, E.; Durand, M.; Lettenmaier, D. P.; Mognard, N. M.; Oudin, Y.; Rodriguez, E.

2008-12-01

413

Analysis of systematic errors in the calculation of renormalization constants of the topological susceptibility on the lattice  

SciTech Connect

A Ginsparg-Wilson based calibration of the topological charge is used to calculate the renormalization constants which appear in the field-theoretical determination of the topological susceptibility on the lattice. A systematic comparison is made with calculations based on cooling. The two methods agree within present statistical errors (3%-4%). We also discuss the independence of the multiplicative renormalization constant Z from the background topological charge used to determine it.

Alles, B. [INFN, Sezione di Pisa, Pisa (Italy); D'Elia, M. [Dipartimento di Fisica, Universita di Genova and INFN, Genoa (Italy); Di Giacomo, A.; Pica, C. [Dipartimento di Fisica, Universita di Pisa and INFN, Pisa (Italy)

2006-11-01

414

Systematic Errors and SLR Tracking System Performance Over the Past Two Decades  

NASA Astrophysics Data System (ADS)

Satellite Laser Ranging (SLR) will be fifty years old in the coming year. Although the technique has been around longer than any other space geodetic technique and despite the great technological improvements during this time, it remains essentially a measuring technique that is always prone to random and systematic errors just like all others. SLR has always provided unique information for the establishment of geodetic reference frames and in particular, over the past three decades it uniquely defines the origin and in part the scale of the ITRF. The Global Geodetic Observing System - GGOS, has identified the ITRF as the key product as its contribution to the Global Earth Observing System of Systems - GEOSS. In order to meet the stringent goals of GGOS, the ILRS is taking a two-pronged approach: modernizing the engineering components (ground and space segments), and revising the modeling standards to take advantage of recent improvements in many areas of geophysical modeling for system Earth components. As we gain improved understanding of the Earth system components, space geodesy adjusts its underlying modeling of the system to better and more completely describe it. At the same time, from the engineering side we examine the observational measurement process for improvements that will enhance the accuracy of the individual observations and the final SLR products. Since the ITRF is a product based on data collected over several decades during which each SLR system has had a varied performance, the relative weight of each such system within the network is a function of time. Engineering information and quality control studies of each system are valuable sources of information required for this assessment. Repercussions of such variable performance go beyond the relative weight of these systems within the SLR network. 
Especially in the case of sites with co-located techniques, such information is useful in resolving observed inconsistencies between the techniques, thus providing a way of avoiding distortion of the ITRF through the constraints of the local survey ties. The establishment of such a database, which can be updated and expanded as the SLR network evolves, will help improve the contribution of SLR to the establishment of future ITRF realizations.

Pavlis, E. C.; Evans, K. D.; Kuzmicz-Cieslak, M.

2013-12-01

415

Estimation of Error in Western Pacific Geoid Heights Derived from Gravity Data Only  

NASA Astrophysics Data System (ADS)

The goal of the Western Pacific Geoid estimation project was to generate geoid height models for regions in the Western Pacific Ocean, and formal error estimates for those geoid heights, using all available gravity data and statistical parameters of the quality of the gravity data. Geoid heights were to be determined solely from gravity measurements, as a gravimetric geoid model and error estimates for that model would have applications in oceanography and satellite altimetry. The general method was to remove the gravity field associated with a lower-order spherical harmonic global gravity model from the regional gravity set; to fit a covariance model to the residual gravity; and then to calculate the (residual) geoid heights and error estimates by a least-squares collocation fit with the residual gravity, the available statistical estimates of the gravity, and the covariance model. The geoid heights corresponding to the lower-order spherical harmonic model can be added back to the heights from the residual gravity to produce a complete geoid height model. As input we requested from NGA all unclassified available gravity data in the western Pacific between 15° to 45° N and 105° to 141° E. The total data set that was used to model and estimate errors in the gravimetric geoid comprised an unclassified, open-file data set (540,012 stations), a proprietary airborne survey of Taiwan (19,234 stations), and unclassified NAVO SSP survey data (95,111 stations), for official use only. Various programs were adapted to the problem, including N.K. Pavlis' HSYNTH program and the covariance fit program GPFIT and least-squares collocation program GPCOL from the GRAVSOFT package (Forsberg and Tscherning, 2008 version), which were modified to handle larger data sets, but in some regions data were still too numerous. Formulas were derived that could be used to block-mean the data in a statistically optimal sense and still retain the error estimates required for the collocation algorithm. 
Running the covariance fit and collocation on discrete blocks revealed an edge effect on the covariance parameter calculation that produced stepwise discontinuities in the error estimates. To eliminate this, the covariance estimation procedure program was modified to slide along a lattice or grid (defined at runtime) of points, selecting all stations closer than a user defined distance with an error estimate of 5 mGals standard deviation or better from the larger regional data set, and calculating covariance parameters for that location. The collocation program was modified to use these locations and GPFIT parameters, and to select all stations within a close radius, and block mean data with associated error estimates beyond that, to calculate a residual height and error estimates on a grid centered at the covariance fit location. These grids were combined to produce the overall geoid height and error estimate sets. The error estimates, in meters, are plotted as a color-filled contour map masked by land regions. Lack of gravity data causes the area of high estimated error east of the Korean peninsula. The high estimates of error north-west of Taiwan are due not to a lack of data, but rather data with high internal estimates of measurement error or disagreement between different data sets. The tracking visible is the effect of high quality data to reduce errors in gravimetric geoid height models.
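One common "statistically optimal" block mean that retains an error estimate is the inverse-variance weighted mean; the sketch below assumes independent Gaussian errors and is offered as an illustration, not the exact formulas derived in this project:

```python
import numpy as np

def block_mean_with_error(values, sigmas):
    """Inverse-variance weighted block mean and its propagated standard error.

    Assumes independent Gaussian measurement errors with the given
    per-station standard deviations (sigmas).
    """
    values = np.asarray(values, float)
    w = 1.0 / np.asarray(sigmas, float) ** 2   # weights = 1 / sigma^2
    mean = float(np.sum(w * values) / np.sum(w))
    err = float(1.0 / np.sqrt(np.sum(w)))      # propagated error of the mean
    return mean, err

# Three hypothetical gravity stations in one block (mGal values and errors).
m, e = block_mean_with_error([10.0, 12.0, 11.0], [1.0, 2.0, 1.0])
print(m, e)
# The combined error is smaller than the best single-station error.
assert e < 1.0
```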

Peters, M. F.; Brozena, J. M.

2012-12-01

416

A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces  

Microsoft Academic Search

In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and

Lili Ju; L. Tian; Desheng Wang

2009-01-01

417

ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve  

PubMed Central

In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments.

Feischl, Michael; Fuhrer, Thomas; Karkulik, Michael; Praetorius, Dirk

2014-01-01

418

An a Posteriori Error Estimator for Two-Body Contact Problems on Non-Matching Meshes  

Microsoft Academic Search

A posteriori error estimates for two-body contact problems are established. The discretization is based on mortar finite elements with dual Lagrange multipliers. To define locally the error estimator, Arnold–Winther elements for the stress and equilibrated fluxes for the surface traction are used. Using the Lagrange multiplier on the contact zone as Neumann boundary conditions, equilibrated fluxes can be locally computed.

Barbara I. Wohlmuth

2007-01-01

419

Effect of channel-estimation error on QAM systems with antenna diversity  

Microsoft Academic Search

This paper studies the effect of channel estimation error and antenna diversity on multilevel quadrature amplitude modulation (M-QAM) systems over Rayleigh fading channels. Based on the characteristic function method, a general closed-form bit-error rate (BER) for M-QAM systems is presented. The effect of the inaccurate channel estimation on the performance for pilot-symbol-assisted modulation M-QAM systems with antenna diversity
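The degradation described here is straightforward to reproduce by Monte Carlo simulation. The following sketch simulates Gray-coded 4-QAM (QPSK) over flat Rayleigh fading with an additive complex Gaussian error on the channel estimate; the SNR, the error variance, and the zero-forcing equalizer are illustrative assumptions, not the characteristic-function analysis of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def qam4_ber(snr_db, est_error_var, n_sym=200000, rng=rng):
    """Monte Carlo BER of Gray-coded 4-QAM over flat Rayleigh fading.

    est_error_var is the variance of an additive complex Gaussian error
    on the channel estimate (0 = perfect CSI)."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, size=(n_sym, 2))
    s = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
    h = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2)
    noise = (rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym)) / np.sqrt(2 * snr)
    r = h * s + noise
    h_hat = h + np.sqrt(est_error_var / 2) * (
        rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))
    z = r * np.conj(h_hat) / np.abs(h_hat) ** 2   # zero-forcing equalization
    bits_hat = np.stack([z.real > 0, z.imag > 0], axis=1)
    return np.mean(bits_hat != bits)

ber_perfect = qam4_ber(15.0, 0.0)
ber_noisy = qam4_ber(15.0, 0.1)
print("BER, perfect CSI  :", ber_perfect)
print("BER, noisy estimate:", ber_noisy)
```

Because the decision regions rotate and scale with the channel estimate, even a 10% estimation-error power visibly raises the BER above the perfect-CSI curve.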

Bin Xia; Jiangzhou Wang

2005-01-01

420

Consistent estimation in the bilinear multivariate errors-in-variables model  

Microsoft Academic Search

A bilinear multivariate errors-in-variables model is considered. It corresponds to an overdetermined set of linear equations AXB = C, A ∈ ℝ^{m×n}, B ∈ ℝ^{p×q}, in which the data A, B, C are perturbed by errors. The total least squares estimator is inconsistent in this case. An adjusted least squares estimator is constructed, which converges to the true value X as m → ∞, q → ∞.
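The consistency issue is easiest to see in the simpler linear errors-in-variables model Ax = c (rather than the bilinear AXB = C treated in the paper). In the hypothetical numpy sketch below, naive least squares is attenuated toward zero, while subtracting the known error covariance from the Gram matrix, one common form of adjusted least squares, restores consistency:

```python
import numpy as np

rng = np.random.default_rng(0)

# True system: c = A0 @ x_true, but we only observe A = A0 + noise.
m, n = 20000, 3
x_true = np.array([1.0, -2.0, 0.5])
A0 = rng.normal(size=(m, n))
c = A0 @ x_true
sigma = 0.5                        # known std of the measurement error on A
A = A0 + sigma * rng.normal(size=(m, n))

# Naive least squares: biased toward zero (attenuation).
x_ls = np.linalg.lstsq(A, c, rcond=None)[0]

# Adjusted least squares: subtract the error covariance m*sigma^2*I from
# the Gram matrix, which removes the bias as m -> infinity.
x_als = np.linalg.solve(A.T @ A - m * sigma**2 * np.eye(n), A.T @ c)

print("naive LS :", x_ls)    # shrunk toward zero
print("adjusted :", x_als)   # close to x_true
```

The same bias-correction idea underlies the adjusted estimator in the bilinear setting, where the correction terms are more involved because errors enter on both sides of X.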

A. Kukush; I. Markovsky; S. Van Huffel

2003-01-01

421

Effect of channel estimation error on M-QAM BER performance in Rayleigh fading  

Microsoft Academic Search

We determine the bit-error rate (BER) of multilevel quadrature amplitude modulation (M-QAM) in flat Rayleigh fading with imperfect channel estimates. Despite its high spectral efficiency, M-QAM is not commonly used over fading channels because of the channel amplitude and phase variation. Since the decision regions of the demodulator depend on the channel fading, estimation error of the channel variation can

Xiaoyi Tang; Mohamed-Slim Alouini; Andrea J. Goldsmith

1999-01-01

422

Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.  

SciTech Connect

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

2006-10-01

423

Effect of channel estimation error on M-QAM BER performance in Rayleigh fading  

Microsoft Academic Search

We determine the bit error rate (BER) of multi-level quadrature amplitude modulation (M-QAM) in flat Rayleigh fading with imperfect channel estimates. Despite its high spectral efficiency, M-QAM is not commonly used over fading channels because of the channel amplitude and phase variation. Since the decision regions of the demodulator depend on the channel fading, the estimation error of the channel

Xiaoyi Tang; Mohamed-Slim Alouini; Andrea Goldsmith

1999-01-01

424

Does Computerized Provider Order Entry Reduce Prescribing Errors for Hospital Inpatients? A Systematic Review  

PubMed Central

Previous reviews have examined evidence of the impact of CPOE on medication errors, but have used highly variable definitions of “error”. We attempted to answer a very focused question, namely, what evidence exists that CPOE systems reduce prescribing errors among hospital inpatients? We identified 13 papers (reporting 12 studies) published between 1998 and 2007. Nine demonstrated a significant reduction in prescribing error rates for all or some drug types. Few studies examined changes in error severity, but minor errors were most often reported as decreasing. Several studies reported increases in the rate of duplicate orders and failures to discontinue drugs, often attributed to inappropriate selection from a dropdown menu or to an inability to view all active medication orders concurrently. The evidence base reporting the effectiveness of CPOE to reduce prescribing errors is not compelling and is limited by modest study sample sizes and designs. Future studies should include larger samples including multiple sites, controlled study designs, and standardized error and severity reporting. The role of decision support in minimizing severe prescribing error rates also requires investigation.

Reckmann, Margaret H.; Westbrook, Johanna I.; Koh, Yvonne; Lo, Connie; Day, Richard O.

2009-01-01

425

The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error  

PubMed Central

Objective: To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources: Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient-specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design: Adverse events were identified among the original and reabstracted records using ICD-10-CA (the Canadian adaptation of ICD-10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings: Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions: Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored.
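A toy version of the matching step can illustrate why an unadjusted comparison distorts the attributable cost when event occurrence depends on patient severity. All numbers below (effect size, confounder model, sample size) are invented for illustration, and matching is done by nearest neighbor on the confounder itself rather than on a fitted propensity score:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
x = rng.normal(size=n)                      # severity score (confounder)
p = 1 / (1 + np.exp(-x))                    # probability of an adverse event
t = rng.random(n) < p                       # admissions with an event
cost = 10000 + 2000 * x + 3000 * t + rng.normal(0, 500, size=n)

naive = cost[t].mean() - cost[~t].mean()    # confounded comparison

# Match each event admission to the nearest non-event admission on x.
order = np.argsort(x[~t])
xc, cc = x[~t][order], cost[~t][order]
idx = np.clip(np.searchsorted(xc, x[t]), 1, len(xc) - 1)
idx -= np.abs(xc[idx - 1] - x[t]) < np.abs(xc[idx] - x[t])
matched = (cost[t] - cc[idx]).mean()

print("naive  :", round(naive))             # inflated by severity
print("matched:", round(matched))           # close to the true $3,000 effect
```

The naive difference absorbs the severity gradient in costs, while matching on the confounder recovers the simulated $3,000 attributable cost, the same logic the study applies with propensity scores over many covariates.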

Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G

2012-01-01

426

Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates  

SciTech Connect

We present a new methodology for generating meshes that minimize the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of the error is proportional to N_h^{-1/2}, which are the optimal asymptotics. The methodology is verified with numerical experiments.
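The quoted asymptotics can be checked in a simplified setting. In 1D with mesh size h, piecewise-linear interpolation gives an L2 error of O(h²) and a gradient error of O(h), which correspond to N_h^{-1} and N_h^{-1/2} in 2D since h ∝ N_h^{-1/2} there. A hypothetical numpy check of the 1D rates (the test function is our choice, not from the report):

```python
import numpy as np

def interp_errors(f, df, n):
    """L2 errors of piecewise-linear interpolation of f on [0,1], n cells."""
    x = np.linspace(0.0, 1.0, n + 1)
    xs = np.linspace(0.0, 1.0, 40 * n + 1)     # fine quadrature points
    dx = xs[1] - xs[0]
    fi = np.interp(xs, x, f(x))                # piecewise-linear interpolant
    err = np.sqrt(np.sum((f(xs) - fi) ** 2) * dx)
    # gradient of the interpolant is piecewise constant
    slopes = np.diff(f(x)) / np.diff(x)
    dfi = slopes[np.minimum((xs * n).astype(int), n - 1)]
    gerr = np.sqrt(np.sum((df(xs) - dfi) ** 2) * dx)
    return err, gerr

f = lambda x: np.sin(2 * np.pi * x)
df = lambda x: 2 * np.pi * np.cos(2 * np.pi * x)

e1, g1 = interp_errors(f, df, 16)
e2, g2 = interp_errors(f, df, 32)
print("error rate   :", np.log2(e1 / e2))   # ~2 in h, i.e. N_h^-1 in 2D
print("gradient rate:", np.log2(g1 / g2))   # ~1 in h, i.e. N_h^-1/2 in 2D
```

Halving h cuts the error by about 4 and the gradient error by about 2, matching the optimal rates the metric-based adaptation is designed to achieve on anisotropic 2D meshes.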

Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [UNIV DE LYON]; Vassilevski, Yuri [Los Alamos National Laboratory]

2009-01-01

427

Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries  

SciTech Connect

We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical relativity based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criteria based on minimizing these errors; we find that the dominant source of errors are those in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gr