
1

Estimation of Systematic Errors in the Canadian Terrestrial Gravity Data From GRACE Gravity Results

NASA Astrophysics Data System (ADS)

Systematic errors in terrestrial gravity data arise from datum errors at the control stations and from elevation and instrumental errors at spot measurements. It is extremely difficult, if possible at all, to estimate and correct these errors through analysis of the historical records of the gravity projects. The GRACE gravity mission is currently mapping the Earth's gravity field with a homogeneous accuracy better than 1 mGal in gravity, corresponding to a few centimeters in geoid height, for wavelengths greater than 300 km. It provides an accurate reference for determining the long-wavelength systematic errors in the terrestrial gravity data. The challenge, however, is to remove the short-wavelength components from the terrestrial data in order to eliminate aliasing errors when estimating the systematic errors; in other words, an effective low-pass averaging/filtering technique is needed. Several earlier studies provide important insights into the characteristics of commonly used methods (Pellinen 1966; Rapp 1977; Gaposchkin 1980; Colombo 1981; Jekeli 1981). In this study, we investigate methods of determining systematic errors in terrestrial gravity through a synthetic gravity field, and apply them to the actual terrestrial gravity data in Canada. EGM96, up to degree and order 360, is used for the synthetic gravity field. First, we test four methods (blockwise, Pellinen's, Gaussian and ideal averaging) to perform low-pass filtering of the EGM96 high-frequency gravity field. Second, we derive harmonic gravity models to degree and order 70 from the filtered synthetic field. Third, the new harmonic models are compared to EGM96 (degree and order 70). The best filtering method is expected to give residuals converging towards zero. For the actual gravity field, a harmonic gravity model (degree and order 70) is derived from the filtered Canadian terrestrial gravity data, which are expanded to the entire Earth surface by a GRACE gravity model.
This harmonic model is compared to the GRACE gravity model (for the same degree and order) to estimate the systematic errors. Finally, the estimated biases are applied to the terrestrial gravity data for determining a geoid model for Canada, which is validated against GPS/leveling data.
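The Gaussian averaging tested above can be sketched with the degree-dependent smoothing weights of Jekeli (1981), which attenuate spherical-harmonic coefficients according to an averaging radius. This is a minimal sketch; the 300 km radius, unit coefficients, and degree cap of 70 are illustrative assumptions, not values fixed by the study:

```python
import math

def gaussian_weights(radius_km, max_degree, earth_radius_km=6371.0):
    """Jekeli (1981) recursion for Gaussian smoothing weights W_l."""
    b = math.log(2.0) / (1.0 - math.cos(radius_km / earth_radius_km))
    w = [1.0]  # W_0 = 1: the mean field passes unchanged
    w.append((1.0 + math.exp(-2.0 * b)) / (1.0 - math.exp(-2.0 * b)) - 1.0 / b)
    for l in range(1, max_degree):
        w.append(-(2 * l + 1) / b * w[l] + w[l - 1])
    return w

# Low-pass filter a synthetic field: scale each degree-l coefficient by W_l.
weights = gaussian_weights(300.0, 70)
smoothed = {l: 1.0 * weights[l] for l in range(71)}  # unit coefficients for illustration
```

Multiplying every degree-l coefficient by W_l low-pass filters the field; the blockwise, Pellinen, and ideal averages replace these weights with different kernels.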

Huang, J.; Véronneau, M.; Mainville, A.

2004-05-01

2

GREAT3 results I: systematic errors in shear estimation and the impact of real galaxy morphology

We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically-varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially-varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety a...

Mandelbaum, Rachel; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A; Donnarumma, Annamaria; Conti, Ian Fenech; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep; Hogg, David W; Huff, Eric M; Jee, M James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C; Marshall, Philip J; Meyers, Joshua E; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Mboula, Fred Maurice Ngole; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D; Shan, Huanyuan; Sheldon, Erin S; Simet, Melanie; Starck, Jean-Luc; Sureau, Florent; Tewes, Malte; Adami, Kristian Zarb; Zhang, Jun; Zuntz, Joe

2014-01-01

3

GPS meteorology: Reducing systematic errors in geodetic estimates for zenith delay

Differences between long term precipitable water (PW) time series derived from radiosondes, microwave water vapor radiometers, and GPS stations reveal offsets that are often as much as 1-2 mm PW. All three techniques are thought to suffer from systematic errors of order 1 mm PW. Standard GPS processing algorithms are known to be sensitive to the choice of elevation cutoff

Peng Fang; Michael Bevis; Yehuda Bock; Seth Gutman; Dan Wolfe

1998-01-01

4

NASA Astrophysics Data System (ADS)

The axis offset is usually treated as a constant value in geodetic VLBI analysis. In an azimuth-elevation telescope, the systematic error in the axis offset is mostly projected onto the vertical direction, since the influence on the horizontal direction is eliminated by the observation scheme, in which the distribution of azimuths of the radio sources is almost uniform. We examined the effect of the axis offset by estimating the coordinates of the Metsähovi radio telescope with various axis offset values. A new axis offset value of -3.6 mm was estimated from local tie measurements performed during the geodetic VLBI sessions since 2008. This offset differs from the earlier value of +5.1 mm estimated from time delay observations (Petrov 2007). We investigated the effect of changing the offset on the coordinates by analyzing the geodetic VLBI campaigns with the old and the new axis offset values. The difference between the old and new coordinates shows that the agreement between the vectors from the IGS GPS point METS to the reference point of the Metsähovi VLBI telescope, calculated from ITRF coordinates and estimated from local tie data, improves when the new value is used.

Kallio, U.; Zubko, N.

2013-08-01

5

Estimating Bias Error Distributions

NASA Technical Reports Server (NTRS)

This paper formulates a general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two) are available. A new perspective is that the bias error distribution can be found as the solution of an intrinsic functional equation in a domain. Based on this theory, scaling- and translation-based methods for determining the bias error distribution are developed. These methods are applicable to virtually any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. The methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

Liu, Tian-Shu; Finley, Tom D.

2001-01-01

6

Estimating GPS Positional Error

NSDL National Science Digital Library

After instructing students on basic receiver operation, each student will make many (10-20) position estimates of 3 benchmarks over a week. The different benchmarks will have different views of the skies or vegetation cover. Each student will download their data into a spreadsheet and calculate horizontal and vertical errors which are collated into a class spreadsheet. The positions are sorted by error and plotted in a cumulative frequency plot. The students are encouraged to discuss the distribution, sources of error, and estimate confidence intervals. This exercise gives the students a gut feeling for confidence intervals and the accuracy of data. Students are asked to compare results from different types of data and benchmarks with different views of the sky.
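The horizontal/vertical error calculation and cumulative-frequency ranking described above can be sketched as follows; the benchmark coordinates and position fixes are made-up numbers, not real class data:

```python
import math

# Hypothetical benchmark truth and repeated GPS fixes (easting, northing, height in m).
truth = (500000.0, 4000000.0, 120.0)
fixes = [
    (500001.2, 4000000.8, 118.5),
    (499999.1, 4000001.9, 122.4),
    (500002.5, 3999998.7, 117.2),
    (500000.4, 4000000.3, 121.1),
]

horizontal = sorted(
    math.hypot(e - truth[0], n - truth[1]) for e, n, _ in fixes
)
vertical = sorted(abs(h - truth[2]) for _, _, h in fixes)

def percentile(sorted_vals, frac):
    """Nearest-rank percentile from a sorted list (the cumulative-frequency plot)."""
    idx = min(len(sorted_vals) - 1, int(frac * len(sorted_vals)))
    return sorted_vals[idx]

h95 = percentile(horizontal, 0.95)  # empirical 95% horizontal error bound
```

With a class-sized sample (hundreds of fixes) the same percentile readout gives a usable confidence interval.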

Witte, Bill

7

The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R. [William B. Hanson Center for Space Sciences, University of Texas at Dallas, 800 W. Campbell Rd. WT15, Richardson, Texas 75080 (United States)

2009-05-15

8

Error Estimates of Theoretical Models: a Guide

This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.
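The statistical error bars and correlation analysis the guide describes can be illustrated with an ordinary least-squares fit, where the parameter covariance matrix supplies both; the straight-line model and synthetic data are illustrative, not taken from the guide:

```python
import numpy as np

# Synthetic "observables": a straight-line model y = a + b*x with known noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 + 0.5 * x + rng.normal(0.0, 0.3, x.size)

J = np.column_stack([np.ones_like(x), x])   # design (Jacobian) matrix
theta, res, _, _ = np.linalg.lstsq(J, y, rcond=None)
dof = x.size - 2
s2 = float(res[0]) / dof                    # residual variance estimate
cov = s2 * np.linalg.inv(J.T @ J)           # parameter covariance matrix
errs = np.sqrt(np.diag(cov))                # statistical error bars
corr = cov[0, 1] / (errs[0] * errs[1])      # intercept-slope correlation
```

The off-diagonal covariance reveals the inter-dependency between parameters: here the intercept and slope are anticorrelated because all abscissae are positive.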

J. Dobaczewski; W. Nazarewicz; P. -G. Reinhard

2014-02-19

9

The systematic error in the determination of the absolute intensities of two-step gamma-cascades following thermal neutron capture, and its influence on the values and localization of the level densities and radiative strength functions of dipole gamma-transitions extracted from the (n,2γ) reaction, have been analysed. It was found that this error, within the limits of its possible magnitude, cannot change the conclusions made earlier about the radiative strength functions of E1 and M1 transitions at Eγ ≈ 3 MeV and the level density of heavy nuclei below ≈0.5 Bn.

V. A. Khitrov; Li Chol; A. M. Sukhovoj

2004-04-23

10

Systematic errors in long baseline oscillation experiments

This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

Harris, Deborah A.; /Fermilab

2006-02-01

11

Systematic Errors in measurement of b1

A class of spin observables can be obtained from the relative difference of or asymmetry between cross sections of different spin states of beam or target particles. Such observables have the advantage that the normalization factors needed to calculate absolute cross sections from yields often divide out or cancel to a large degree in constructing asymmetries. However, normalization factors can change with time, giving different normalization factors for different target or beam spin states, leading to systematic errors in asymmetries in addition to those determined from statistics. Rapidly flipping spin orientation, such as what is routinely done with polarized beams, can significantly reduce the impact of these normalization fluctuations and drifts. Target spin orientations typically require minutes to hours to change, versus fractions of a second for beams, making systematic errors for observables based on target spin flips more difficult to control. Such systematic errors from normalization drifts are discussed in the context of the proposed measurement of the deuteron b(1) structure function at Jefferson Lab.
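The effect of normalization drift on spin asymmetries, and why fast spin flipping suppresses it, can be sketched with a toy yield model; the drift rate, asymmetry value, and flip patterns below are illustrative assumptions:

```python
# Toy model: measured yield = L(t) * (1 + s*eps), where L(t) is a slowly
# drifting normalization (luminosity x efficiency), s = +/-1 the spin state,
# and eps the true asymmetry. All numbers are illustrative.
EPS_TRUE = 0.05
STEPS = 100

def lumi(t):
    return 1.0 + 0.002 * t  # 20% linear normalization drift over the run

def asymmetry(states):
    plus = sum(lumi(t) * (1.0 + EPS_TRUE) for t, s in enumerate(states) if s > 0)
    minus = sum(lumi(t) * (1.0 - EPS_TRUE) for t, s in enumerate(states) if s < 0)
    return (plus - minus) / (plus + minus)

slow = asymmetry([+1] * (STEPS // 2) + [-1] * (STEPS // 2))          # target-style flip
fast = asymmetry([+1 if t % 2 == 0 else -1 for t in range(STEPS)])   # beam-style flip
```

With one slow flip the drift masquerades as a large false asymmetry, while rapid alternation samples the drift almost equally in both states and nearly cancels it.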

Wood, S A

2014-10-27

12

Mapping random and systematic errors of satellite-derived snow water equivalent observations

...impact on the accuracy of SWE estimates in densely forested areas, such as the boreal forest of Canada... quantified. In this study, unbiased SWE maps, random error maps and systematic error maps of Eurasia...

Walker, Jeff

13

Numerical Error Estimation with UQ

NASA Astrophysics Data System (ADS)

Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. 
We will show that a sensible parameter can be chosen by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process, which is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted

Ackmann, Jan; Korn, Peter; Marotzke, Jochem

2014-05-01

14

Estimation of Heterogeneous Error Variances

We consider a set of pq observations x_ij in a two-way classification, which we may suppose represented by the model x_ij = α_i + β_j + ε_ij, where the α_i and β_j are constants and the ε_ij are independent normal random errors with zero means and variances σ_j². The α_i, β_j and σ_j² are unknown, and it is desired to estimate the σ_j². The problem arises, for example, in investigating
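The heteroscedastic two-way model can be sketched numerically; this is a naive residual-based estimate for illustration only, not Ehrenberg's estimator, and all values are synthetic. Note that the two-way fit leaks variance between columns, so the per-column estimates are only rough indicators:

```python
import numpy as np

# Simulate x_ij = alpha_i + beta_j + e_ij with column-dependent noise sigma_j.
rng = np.random.default_rng(1)
p, q = 200, 3
alpha = rng.normal(0.0, 1.0, p)[:, None]
beta = np.array([0.0, 2.0, -1.0])[None, :]
sigma = np.array([0.5, 1.5, 3.0])          # heterogeneous column variances
x = alpha + beta + rng.normal(0.0, 1.0, (p, q)) * sigma

# Two-way fit by row/column means, then per-column residual variances.
fit = x.mean(axis=1, keepdims=True) + x.mean(axis=0, keepdims=True) - x.mean()
resid = x - fit
sigma2_hat = (resid ** 2).sum(axis=0) / (p - 1)  # rough, biased per-column estimate
```

Even this crude estimate recovers the ordering of the column variances, which is what motivates estimators that disentangle the leakage properly.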

A. S. C. Ehrenberg

1950-01-01

15

Reducing Model Systematic Error through Super Modelling

NASA Astrophysics Data System (ADS)

Numerical models are key tools in the projection of the future climate change. However, state-of-the-art general circulation models (GCMs) exhibit significant systematic errors and large uncertainty exists in future climate projections, because of limitations in parameterization schemes and numerical formulations. The general approach to tackle uncertainty is to use an ensemble of several different GCMs. However, ensemble results may smear out major variability, such as the ENSO. Here we take a novel approach and build a super model (i.e., an optimal combination of several models): We coupled two atmospheric GCMs (AGCM) with one ocean GCM (OGCM). The two AGCMs receive identical boundary conditions from the OGCM, while the OGCM is driven by a weighted flux combination from the AGCMs. The atmospheric models differed in their convection scheme and climate-related parameters. As climate models show large sensitivity to convection schemes and parameterization, this approach may be a good basis for constructing a super model. We performed experiments with a small set of manually chosen coefficients and also with a learning algorithm to adjust the coefficients. The coupling strategy is able to synchronize atmospheric variability of the two AGCMs in the tropics, particularly over the western equatorial Pacific, and produce reasonable climate variability. Different coupling weights were shown to alter the simulated mean climate state. Some improvements were found that suggest a refined strategy for choosing weighting coefficients could lead to even better performance.
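The weighted combination of model tendencies behind supermodelling can be sketched with toy dynamics; the Lorenz-63 system stands in for the coupled GCMs here, and all parameters and weights are illustrative assumptions:

```python
# Two imperfect "atmosphere" models (wrong rho) are combined into a supermodel
# by weighting their tendencies; the Lorenz-63 system and its parameters are
# toy stand-ins, not an actual AGCM/OGCM configuration.
def lorenz(s, rho, sigma=10.0, beta=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step(s, tend, dt=0.005):
    return tuple(v + dt * t for v, t in zip(s, tend))

def run(rho_a, rho_b, w, steps=2000):
    """Integrate 'truth' (rho=28) and a w-weighted supermodel side by side."""
    truth = model = (1.0, 1.0, 1.0)
    err = 0.0
    for _ in range(steps):
        ta = lorenz(model, rho_a)
        tb = lorenz(model, rho_b)
        blended = tuple(w * a + (1 - w) * b for a, b in zip(ta, tb))
        truth = step(truth, lorenz(truth, 28.0))
        model = step(model, blended)
        err = max(err, max(abs(m - t) for m, t in zip(model, truth)))
    return err

err_single = run(26.0, 26.0, 1.0)   # one biased model alone
err_super = run(26.0, 30.0, 0.5)    # equal weights average out the bias
```

With equal weights the two opposite parameter biases cancel and the supermodel tracks the truth, while either imperfect model alone drifts away chaotically; this is the intuition behind learning the coupling coefficients.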

Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Duane, Gregory; Wiegerinck, Wim; Hiemstra, Paul

2013-04-01

16

Systematic errors specific to a Snapshot Mueller Matrix Polarimeter

Matthieu Dubreuil. ...systematic errors specific to a snapshot Mueller matrix polarimeter by wavelength polarization coding... its polarimetric signature via the measurement of its Mueller matrix...

Paris-Sud XI, Université de

17

Control by model error estimation

NASA Technical Reports Server (NTRS)

Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

Likins, P. W.; Skelton, R. E.

1976-01-01

18

NASA Astrophysics Data System (ADS)

The influence of observational systematic errors and of motion-model errors on the precision with which the probable motion regions of small bodies can be constructed is investigated. Estimates of the maximum allowable level of the systematic error component for which the LS-hyperellipsoids always contain the object of interest were obtained.

Chernitsov, A. M.; Tamarov, V. A.

2003-12-01

19

Error estimation for boundary element method

In this paper, six error indicators obtained from dual boundary integral equations are used for local estimation, which is an essential ingredient for all adaptive mesh schemes in BEM. Computational experiments are carried out for the two-dimensional Laplace equation. The curves of all these six error estimators are in good agreement with the shape of the error curve. The results

M. T. Liang; J. T. Chen; S. S. Yang

1999-01-01

20

Improved Systematic Pointing Error Model for the DSN Antennas

NASA Technical Reports Server (NTRS)

New pointing models have been developed for large reflector antennas whose construction is founded on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet for correction of their systematic pointing errors; they achieved significant improvement in performance at Ka-band (32 GHz) and X-band (8.4 GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translates to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new models provide an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that cause antenna pointing errors. The philosophy of the traditional model was that every mathematical term in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical-harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may differ, while in the traditional model some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can use the previously developed model as an a priori estimate for the updated models.
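Extending a physical pointing model with higher-order harmonic terms can be sketched as a least-squares fit; the term list and synthetic offsets below are illustrative and not the actual DSN model:

```python
import numpy as np

# Synthetic cross-elevation pointing offsets over the sky (arbitrary units),
# containing a harmonic component the traditional terms cannot absorb.
rng = np.random.default_rng(2)
az = rng.uniform(0.0, 2 * np.pi, 300)
el = rng.uniform(0.2, 1.4, 300)
offset = (3.0 + 2.0 * np.sin(az) * np.sin(el)
          + 1.5 * np.cos(3 * az)          # higher-order systematic imperfection
          + rng.normal(0.0, 0.1, az.size))

def fit_rms(terms):
    """Least-squares fit of the pointing model; return residual RMS."""
    A = np.column_stack(terms)
    coef, *_ = np.linalg.lstsq(A, offset, rcond=None)
    return float(np.sqrt(np.mean((offset - A @ coef) ** 2)))

traditional = [np.ones_like(az), np.sin(az) * np.sin(el), np.cos(az) * np.sin(el)]
extended = traditional + [np.cos(3 * az), np.sin(3 * az)]  # extra harmonic terms

rms_trad = fit_rms(traditional)
rms_ext = fit_rms(extended)
```

Adding the harmonic terms lets the model absorb the residual systematic pattern, mirroring the factor-of-two-to-three improvement the abstract reports.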

Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

2011-01-01

21

A systematic approach to SER estimation and solutions

This paper describes a method for estimating Soft Error Rate (SER) and a systematic approach to identifying SER solutions. Having a good SER estimate is the first step in identifying if a problem exists and what measures are necessary to solve the problem. In this paper, a high performance processor is used as the base framework for discussion since it

H. T. Nguyen; Y. Yagil

2003-01-01

22

Laser Doppler anemometer measurements using nonorthogonal velocity components - Error estimates

NASA Technical Reports Server (NTRS)

Laser Doppler anemometers (LDAs) that are arranged to measure nonorthogonal velocity components (from which orthogonal components are computed through transformation equations) are more susceptible to calibration and sampling errors than are systems with uncoupled channels. In this paper uncertainty methods and estimation theory are used to evaluate, respectively, the systematic and statistical errors that are present when such devices are applied to the measurement of mean velocities in turbulent flows. Statistical errors are estimated for two-channel LDA data that are either correlated or uncorrelated. For uncorrelated data the directional uncertainty of the measured velocity vector is considered for applications where mean streamline patterns are desired.
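The sensitivity of the computed orthogonal components to the channel geometry can be sketched with first-order covariance propagation through an assumed linear transformation; the angles and channel variances below are illustrative:

```python
import math

def orthogonal_variances(theta1, theta2, var1, var2):
    """Variances of (u, v) computed from channels measuring along theta1, theta2.

    Channel i measures c_i = u*cos(theta_i) + v*sin(theta_i); invert the 2x2
    system and propagate independent channel variances to (u, v).
    """
    a, b = math.cos(theta1), math.sin(theta1)
    c, d = math.cos(theta2), math.sin(theta2)
    det = a * d - b * c
    # rows of the inverse matrix express (u, v) as combinations of (c_1, c_2)
    var_u = (d / det) ** 2 * var1 + (-b / det) ** 2 * var2
    var_v = (-c / det) ** 2 * var1 + (a / det) ** 2 * var2
    return var_u, var_v

wide = orthogonal_variances(0.0, math.pi / 2, 0.01, 0.01)    # orthogonal channels
narrow = orthogonal_variances(0.0, math.pi / 6, 0.01, 0.01)  # 30-degree separation
```

As the angle between the channels shrinks, the transformation amplifies channel noise into the derived component, which is why coupled-channel LDAs are more susceptible to calibration and sampling errors.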

Orloff, K. L.; Snyder, P. K.

1982-01-01

23

On the detection of systematic errors in terrestrial laser scanning data

NASA Astrophysics Data System (ADS)

Quality description is one of the key tasks of geodetic data processing. Systematic errors should be detected and avoided in order to ensure the high quality standards required by structural monitoring. In this study, the iterative closest point (ICP) method was investigated as a means to detect systematic errors in two overlapping data sets. There are three steps in processing the systematic errors: first, one of the data sets is transformed to a reference system through a Gauss-Helmert (GH) model. Second, quadratic-form estimation and segmentation methods are proposed to guarantee overlap between the data sets. Third, the ICP method is employed for a finer registration and for detecting the systematic errors. A case study was conducted in which a dam surface in Germany was scanned with terrestrial laser scanning (TLS) technology. The results indicated that with the ICP algorithm the accuracy of the data sets was improved by approximately 1.6 mm.
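The ICP registration step can be sketched with a deliberately minimal translation-only variant (the study itself uses a Gauss-Helmert model and full rigid registration); the surface points and the systematic offset are synthetic:

```python
import numpy as np

def icp_translation(src, dst, iters=10):
    """Tiny translation-only ICP: nearest-neighbor matching, then shift by the
    mean residual; a systematic offset between scans shows up as the total shift."""
    shift = np.zeros(2)
    moved = src.copy()
    for _ in range(iters):
        # brute-force nearest neighbors from the moved points to dst
        d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
        matches = dst[d.argmin(axis=1)]
        step = (matches - moved).mean(axis=0)
        moved += step
        shift += step
    return shift

rng = np.random.default_rng(3)
surface = rng.uniform(0.0, 10.0, (200, 2))   # reference scan of a surface
offset_true = np.array([0.15, -0.08])        # simulated systematic offset
scan2 = surface + offset_true                # second, biased scan
recovered = icp_translation(scan2, surface)  # aligning scan2 back reveals -offset
```

The converged shift exposes the systematic misalignment between the two scans, which is the quantity the detection step is after.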

Wang, Jin; Kutterer, Hansjoerg; Fang, Xing

2012-11-01

24

Systematic parameter errors in inspiraling neutron star binaries.

The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled. PMID:24679276

Favata, Marc

2014-03-14

25

Adjoint Error Estimation for Elastohydrodynamic Lubrication

Adjoint Error Estimation for Elastohydrodynamic Lubrication, by Daniel Edward Hart. Contains material also published in: Hart, D. E.; Goodyer, C. E.; Berzins, M.; Jimack, P. K.; Scales, L. E., "Adjoint error estimation...". Chapter 4 contains material also published as part of: Goodyer, C. E.; Fairlie, R.; Hart, D. E.; Berzins, M. ...

Jimack, Peter

26

Errors in quantum tomography: diagnosing systematic versus statistical errors

NASA Astrophysics Data System (ADS)

A prime goal of quantum tomography is to provide quantitatively rigorous characterization of quantum systems, be they states, processes or measurements, particularly for the purposes of trouble-shooting and benchmarking experiments in quantum information science. A range of techniques exist to enable the calculation of errors, such as Monte-Carlo simulations, but their quantitative value is arguably fundamentally flawed without an equally rigorous way of authenticating the quality of a reconstruction to ensure it provides a reasonable representation of the data, given the known noise sources. A key motivation for developing such a tool is to enable experimentalists to rigorously diagnose the presence of technical noise in their tomographic data. In this work, I explore the performance of the chi-squared goodness-of-fit test statistic as a measure of reconstruction quality. I show that its behaviour deviates noticeably from expectations for states lying near the boundaries of physical state space, severely undermining its usefulness as a quantitative tool precisely in the region which is of most interest in quantum information processing tasks. I suggest a simple, heuristic approach to compensate for these effects and present numerical simulations showing that this approach provides substantially improved performance.

Langford, Nathan K.

2013-03-01

27

Optical Bit Error Rate: An Estimation Methodology

NASA Astrophysics Data System (ADS)

Optical Bit Error Rate: An Estimation Methodology provides an analytical methodology to the estimation of bit error rate of optical digital signals. This presents an extremely important subject in the design of optical communications systems and networks, yet previous to the publication of this book the topic had not been covered holistically. The text lays out an easy-to-understand analytical approach to a highly important and complex subject: bit error rate (BER) estimation of a transmitted signal with a focus on optical transmission. It includes coverage of such important topics as impairments on DWDM optical signals, causes of signal distortion, and identification and estimation of the signal quality by statistical estimation of the bit error rate. The book includes numerous illustrations and examples to make a difficult topic easy to understand. This edition includes a CD-ROM with run-time simulations from a vendor that provides commercial software for the industry.
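A standard statistical BER estimate of the kind the book covers is the Gaussian Q-factor approximation; the eye-diagram statistics below are illustrative numbers, not figures from the book:

```python
import math

def ber_from_q(q):
    """Gaussian-noise BER estimate for a binary optical signal: 0.5*erfc(Q/sqrt(2))."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def q_factor(mu1, mu0, sig1, sig0):
    """Q from the sampled mark/space means and standard deviations."""
    return (mu1 - mu0) / (sig1 + sig0)

q = q_factor(1.0, 0.1, 0.08, 0.05)   # illustrative eye-diagram statistics
ber = ber_from_q(q)
```

Signal distortion and DWDM impairments show up as reduced mark/space separation or inflated level noise, and the estimated BER degrades accordingly.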

Kartalopoulos, Stamatios V.

2004-09-01

28

Treatment of systematic errors in land data assimilation systems

NASA Astrophysics Data System (ADS)

Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
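The "variance matching" rescaling mentioned above as a common (if sub-optimal) pre-processing step can be sketched in a few lines; the soil-moisture statistics are synthetic stand-ins for model and satellite data:

```python
import numpy as np

def variance_match(obs, model):
    """Linearly rescale observations to the model's mean and variance."""
    return model.mean() + (obs - obs.mean()) * (model.std() / obs.std())

rng = np.random.default_rng(4)
model_sm = rng.normal(0.25, 0.04, 1000)   # model soil-moisture climatology
obs_sm = rng.normal(0.30, 0.07, 1000)     # satellite retrievals, different stats
rescaled = variance_match(obs_sm, model_sm)
```

The transform removes the bias and matches the second moment while preserving the temporal pattern of the observations; the talk's point is that the optimal linear weights generally differ from this simple choice.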

Crow, W. T.; Yilmaz, M.

2012-12-01

29

Correcting systematic errors in high-sensitivity deuteron polarization measurements

NASA Astrophysics Data System (ADS)

This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10 -5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10 -6 in a search for an electric dipole moment using a storage ring.

Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

2012-02-01

30

Systematic errors in power measurements made with a dual six-port ANA

NASA Astrophysics Data System (ADS)

The systematic error in measuring power with a dual 6-port Automatic Network Analyzer was determined. Equations for estimating systematic errors due to imperfections in the test port connector, imperfections in the connector on the power standard, and imperfections in the impedance standards used to calibrate the 6-port for measuring reflection coefficient were developed. These are the largest sources of error associated with the 6-port. For 7 mm connectors, all systematic errors which are associated with the 6-port add up to a worst-case uncertainty of ±0.00084 in measuring the ratio of the effective efficiency of a bolometric power sensor relative to that of a standard power sensor.

Hoer, Cletus A.

1989-07-01

31

We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that do not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ≈10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ≈10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

Zhang Le; Timbie, Peter [Department of Physics, University of Wisconsin, Madison, WI 53706 (United States); Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Sutter, Paul M.; Wandelt, Benjamin D. [Department of Physics, 1110 W Green Street, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Bunn, Emory F., E-mail: lzhang263@wisc.edu [Physics Department, University of Richmond, Richmond, VA 23173 (United States)

2013-06-01

32

Error Estimates for Numerical Integration Rules

ERIC Educational Resources Information Center

The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.

Mercer, Peter R.

2005-01-01

33

Identification and Remediation of Systematic Error Patterns in Subtraction

ERIC Educational Resources Information Center

The present study investigated 90 elementary teachers' ability to identify two systematic error patterns in subtraction and then prescribe an instructional focus. Presented with two sets of 20 completed subtraction problems comprised of basic facts, computation, and word problems representative of two students' math performance, participants were…

Riccomini, Paul J.

2005-01-01

34

Bayes Error Rate Estimation Using Classifier Ensembles

NASA Technical Reports Server (NTRS)

The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.

Tumer, Kagan; Ghosh, Joydeep

2003-01-01
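The first, averaging-based approach can be illustrated on a toy problem (our own sketch, not the article's code, with assumed noise levels): average an ensemble of noisy posterior estimators, then take the plug-in estimate E[1 − max_c p(c|x)], which recovers the known Bayes rate of a two-Gaussian problem.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Toy problem: two equally likely 1-D Gaussian classes with means -1 and +1,
# unit variance. The exact Bayes error is Phi(-1).
n = 200_000
labels = rng.integers(0, 2, n)
x = rng.normal(2.0 * labels - 1.0, 1.0)

def true_posterior(x):
    # P(class = 1 | x) for this toy problem.
    return 1.0 / (1.0 + np.exp(-2.0 * x))

# Ensemble of K noisy posterior estimators, combined by simple averaging.
K = 25
p_hat = np.mean([true_posterior(x) + rng.normal(0.0, 0.05, n) for _ in range(K)],
                axis=0)
p_hat = np.clip(p_hat, 0.0, 1.0)

# Plug-in Bayes-error estimate: expectation of 1 - max_c p(c | x).
est = float(np.mean(np.minimum(p_hat, 1.0 - p_hat)))
bayes = 0.5 * (1.0 - erf(1.0 / sqrt(2.0)))
print(round(est, 3), round(bayes, 3))
```

Averaging the K noisy estimators shrinks their noise before the min() is taken, which is why the plug-in estimate lands close to the true rate.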

35

Conditional Density Estimation in Measurement Error Problems.

This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

Wang, Xiao-Feng; Ye, Deping

2015-01-01

36

Estimating error rates in bioactivity databases.

Bioactivity databases are routinely used in drug discovery to look up and, using prediction tools, to predict potential targets for small molecules. These databases are typically manually curated from patents and scientific articles. Apart from errors in the source document, the human factor can cause errors during the extraction process. These errors can lead to wrong decisions in the early drug discovery process. In the current work, we have compared bioactivity data from three large databases (ChEMBL, Liceptor, and WOMBAT) which have curated data from the same source documents. As a result, we are able to report error rate estimates for individual activity parameters and individual bioactivity databases. Small molecule structures have the greatest estimated error rate, followed by target, activity value, and activity type. This order is also reflected in supplier-specific error rate estimates. The results are also useful in identifying data points for recuration. We hope the results will lead to a more widespread awareness among scientists on the frequencies and types of errors in bioactivity data. PMID:24160896

Tiikkainen, Pekka; Bellis, Louisa; Light, Yvonne; Franke, Lutz

2013-10-28

37

The Effect of Systematic Error in Forced Oscillation Testing

NASA Technical Reports Server (NTRS)

One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates, the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

2012-01-01

38

Discrete-error transport equation for error estimation in CFD

NASA Astrophysics Data System (ADS)

With computational fluid dynamics (CFD) becoming more accepted and more widely used in industry for design and analysis, there is increasing demand for not just more accurate solutions, but also error bounds on the solutions. One major source of error is from the grid or mesh. A number of methods have been developed to quantify errors in solutions of partial differential equations (PDEs) that arise from poor-quality or insufficiently fine grids/meshes. For PDEs of interest to CFD, it has been shown that the error at one location could be generated elsewhere and then transported there, and thus is not a function of the local mesh quality and the local solution. So, a transport equation for error is needed to understand the generation and evolution of errors. Error transport equations have been developed for finite-element methods but not for finite-difference (FD) and finite-volume (FV) methods. In this study, a method is developed for deriving error-transport equations for estimating grid-induced errors in solutions obtained by using FD and FV methods. The error-transport equations derived are discrete in that they depend only on the FD or FV equations and are independent of the PDEs that the FD or FV equations are intended to represent. The usefulness of the discrete error-transport equations (DETEs) developed was evaluated through test problems based on four one-dimensional (1-D) and two two-dimensional (2-D) PDEs. The four 1-D PDEs are the advection-diffusion equation, the wave equation, the inviscid Burgers equation, and the steady Burgers equation. The two 2-D PDEs are the 2-D advection-diffusion equation and the system of Euler equations. For PDEs that are not linear, linearization procedures were proposed and examined. For all test problems based on 1-D PDEs, the residual is modeled by the leading term of the remainder in the modified equation for the FD or FV equation.
The residual was also modeled by using functional relationships suggested by data mining, where actual residuals generated by the numerical experiments were fitted by using least-squares minimization. For all test problems, grid-independent solutions were generated to assess how well the residuals are modeled and how well grid-induced errors are predicted by the DETEs. Results obtained show that if the actual residuals are used, then the DETEs can predict the grid-induced errors perfectly. This is true for all test problems evaluated, including those based on PDEs that are nonlinear and have time derivatives and for test problems with weak solutions. Results obtained also show that the leading terms of the modified equation are useful in modeling the residual if the grid spacing or cell size is sufficiently small so that the leading terms are bounded, a condition that is often not satisfied in practice. Data mining proved useful in constructing residuals: power-law functions of smoothness, resolution, aspect ratio, and solution gradient produced a better fit than local linear least squares. However, a more extensive database is needed before this approach can be expected to yield more generally applicable models for the residual. The usefulness of the Euler DETE in predicting grid-induced errors in the Navier-Stokes solutions was also examined. Results obtained show that the error predicted by the Euler DETE matches the actual error very well for the high-Reynolds-number Navier-Stokes solutions.

Qin, Yuehui

39

SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION

Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of order unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of ⟨δ(γ)⟩ ≈ -0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.

Lee, Khee-Gan, E-mail: lee@astro.princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States)

2012-07-10

40

Investigating Systematic Errors in Iodine Cell Radial Velocity Measurements

NASA Astrophysics Data System (ADS)

Astronomers have made precise stellar radial velocity measurements using an iodine cell as a calibrator since the 1980s. These measurements have led to the discovery of hundreds of extrasolar planets, and have contributed to the characterization of many more. The precision of these measurements is limited by systematic errors caused primarily by the instability of the spectrographs used to acquire data, and which are not properly modeled in the data analysis process. We present an investigation of ways to mitigate and better model these systematic effects in data analysis. Such an improvement in the radial velocity analysis process would be readily applicable to twenty years worth of radial velocity data.

Vanderburg, Andrew; Marcy, G. W.; Johnson, J. A.

2014-01-01

41

Spatial reasoning in the treatment of systematic sensor errors

In processing ultrasonic and visual sensor data acquired by mobile robots, systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing vertical edge segments are extracted using a Canny-like algorithm, and are labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.

Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.

1988-01-01

42

Efficient error estimation in quantum key distribution

NASA Astrophysics Data System (ADS)

In a quantum key distribution (QKD) system, the error rate needs to be estimated for determining the joint probability distribution between legitimate parties, and for improving the performance of key reconciliation. We propose an efficient error estimation scheme for QKD, called the parity comparison method (PCM). In the proposed method, the parity of a group of sifted keys is analysed to estimate the quantum bit error rate instead of using the traditional key sampling. From the simulation results, the proposed method evidently improves the accuracy and decreases the information revealed in most realistic application situations. Project supported by the National Basic Research Program of China (Grant Nos. 2011CBA00200 and 2011CB921200) and the National Natural Science Foundation of China (Grant Nos. 61101137, 61201239, and 61205118).

Li, Mo; Treeviriyanupab, Patcharapong; Zhang, Chun-Mei; Yin, Zhen-Qiang; Chen, Wei; Han, Zheng-Fu

2015-01-01
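A minimal sketch of the parity-comparison idea (our own simplification with assumed parameters, not the paper's exact protocol): a k-bit block's parities differ between the two parties iff the block contains an odd number of errors, so the observed mismatch frequency can be inverted for the quantum bit error rate without revealing individual sampled bits.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, qber = 1_000_000, 4, 0.03          # sifted-key length, block size, true QBER

alice = rng.integers(0, 2, n)
flips = (rng.random(n) < qber).astype(int)   # channel/eavesdropping errors
bob = (alice + flips) % 2

# Compare block parities rather than revealing sampled key bits directly.
pa = alice.reshape(-1, k).sum(axis=1) % 2
pb = bob.reshape(-1, k).sum(axis=1) % 2
mismatch = float(np.mean(pa != pb))

# Parities differ iff a block holds an odd number of errors:
# P(mismatch) = (1 - (1 - 2e)^k) / 2, which we invert to estimate e.
e_hat = 0.5 * (1.0 - (1.0 - 2.0 * mismatch) ** (1.0 / k))
print(round(e_hat, 4))
```

Every block contributes to the estimate, which is why this style of estimator can beat sampling a small random subset of bits.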

43

Reducing systematic errors in measurements made by a SQUID magnetometer

NASA Astrophysics Data System (ADS)

A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors - radial displacement in particular - and not by instrumental or environmental noise.

Kiss, L. F.; Kaptás, D.; Balogh, J.

2014-11-01

44

Signals of many types of aerosol lidars can be affected with a significant systematic error, if depolarizing scatterers are present in the atmosphere. That error is caused by a polarization-dependent receiver transmission. In this contribution we present an estimation of the magnitude of this systematic error. We show that lidar signals can be biased by more than 20%, if linearly polarized laser light is emitted, if both polarization components of the backscattered light are measured with a single detection channel, and if the receiver transmissions for these two polarization components differ by more than 50%. This signal bias increases with increasing ratio between the two transmission values (transmission ratio) or with the volume depolarization ratio of the scatterers. The resulting error of the particle backscatter coefficient increases with decreasing backscatter ratio. If the particle backscatter coefficients are to have an accuracy better than 5%, the transmission ratio has to be in the range between 0.85 and 1.15. We present a method to correct the measured signals for this bias. We demonstrate an experimental method for the determination of the transmission ratio. We use collocated measurements of a lidar system strongly affected by this signal bias and an unbiased reference system to verify the applicability of the correction scheme. The errors in the case of no correction are illustrated with example measurements of fresh Saharan dust. PMID:19424398

Mattis, Ina; Tesche, Matthias; Grein, Matthias; Freudenthaler, Volker; Müller, Detlef

2009-05-10
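Under a simple two-component model (our own assumption for illustration, not necessarily the authors' exact formulation), a single detection channel sums the parallel and perpendicular backscatter weighted by their receiver transmissions, and the relative signal bias follows directly from the transmission ratio and the volume depolarization ratio:

```python
def relative_signal_bias(transmission_ratio, depol_ratio):
    """Fractional lidar signal bias when one channel detects both
    polarization components with unequal receiver transmissions.

    transmission_ratio: receiver T_perp / T_par (1.0 means no bias)
    depol_ratio: volume depolarization ratio of the scatterers
    """
    measured = 1.0 + transmission_ratio * depol_ratio   # per unit parallel signal
    ideal = 1.0 + depol_ratio                           # equal-transmission case
    return measured / ideal - 1.0

# Weakly vs strongly depolarizing scatterers, 50% transmission mismatch.
for depol in (0.05, 0.30):
    print(depol, round(relative_signal_bias(1.5, depol), 3))
```

The sketch reproduces the qualitative behavior in the abstract: the bias vanishes for non-depolarizing scatterers or a transmission ratio of one, and grows with both the depolarization ratio and the transmission mismatch.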

45

Estimation of Error Rates in Discriminant Analysis

Several methods of estimating error rates in Discriminant Analysis are evaluated by sampling methods. Multivariate normal samples are generated on a computer which have various true probabilities of misclassification for different combinations of sample sizes and different numbers of parameters. The two methods in most common use are found to be significantly poorer than some new methods that are proposed.

Peter A. Lachenbruch; M. Ray Mickey

1968-01-01

46

A Posteriori Error Estimates for Elliptic Problems

The results are illustrated by numerical computations. Key words: adaptive finite element methods, a posteriori error estimates. We consider a self-adjoint elliptic boundary value problem, which is approximated by some ũ ∈ S, S being a suitable finite element space, for the theoretical analysis of such error estimates. We further clarify the relation to other concepts.

47

Lidar aerosol backscatter measurements - Systematic, modeling, and calibration error considerations

NASA Technical Reports Server (NTRS)

Sources of systematic, modeling, and calibration errors that affect the interpretation and calibration of lidar aerosol backscatter data are discussed. The treatment pertains primarily to ground-based pulsed CO2 lidars that probe the troposphere and are calibrated using hard calibration targets. However, a large part of the analysis is relevant to other types of lidar system such as lidars operating at other wavelengths; CW focused lidars; airborne or earth-orbiting lidars; lidars measuring other regions of the atmosphere; lidars measuring nonaerosol elastic or inelastic backscatter; and lidars employing other calibration techniques.

Kavaya, M. J.; Menzies, R. T.

1985-01-01

48

Incidence of medication errors and adverse drug events in the ICU: a systematic review

BackgroundMedication errors (MEs) and adverse drug events (ADEs) are both common and under-reported in the intensive care setting. The definitions of these terms vary substantially in the literature. Many methods have been used to estimate their incidence.MethodsA systematic review was done to assess methods used for tracking unintended drug events in intensive care units (ICUs). Studies published up to 22

Amanda Wilmer; Kimberley Louie; Peter Dodek; Hubert Wong; Najib Ayas

2010-01-01

49

Correction of systematic errors in quantitative proton density mapping.

Interest in techniques yielding quantitative information about brain tissue proton densities is increasing. In general, all parameters influencing the signal amplitude are mapped in several acquisitions and then eliminated from the image data to obtain pure proton density weighting. Particularly, the measurement of the receiver coil sensitivity profile is problematic. Several methods published so far are based on the reciprocity theorem, assuming that receive and transmit sensitivities are identical. Goals of this study were (1) to determine quantitative proton density maps using an optimized variable flip angle method for T(1) mapping at 3 T, (2) to investigate if systematic errors can arise from insufficient spoiling of transverse magnetization, and (3) to compare two methods for mapping the receiver coil sensitivity, based on either the reciprocity theorem or bias field correction. Results show that insufficient spoiling yields systematic errors in absolute proton density of about 3-4 pu. A correction algorithm is proposed. It is shown that receiver coil sensitivity mapping based on the reciprocity theorem yields erroneous proton density values, whereas reliable data are obtained with bias field correction. Absolute proton density values in different brain areas, evaluated on six healthy subjects, are in excellent agreement with recent literature results. PMID:22144171

Volz, Steffen; Nöth, Ulrike; Deichmann, Ralf

2012-07-01

50

Estimation of Satellite-Rainfall Error Correlation

NASA Astrophysics Data System (ADS)

With many satellite rainfall products being available for long periods, it is important to assess and validate the algorithms estimating the rainfall rates for these products. Many studies have been done on evaluating the uncertainty of satellite rainfall products over different parts of the world by comparing them to rain-gauge and/or radar rainfall products. In preparation for the field experiment Iowa Flood Studies, or IFloodS, one of the integrated validation activities of the Global Precipitation Measurement mission, we are evaluating three popular satellite-based products for the IFloodS domain of the upper Midwest in the US. One of the relevant questions is the determination of the covariance (correlation) of rainfall errors in space and time for the domain. Three satellite rainfall products have been used in this study, and a radar rainfall product has been used as a ground reference. The three rainfall products are TRMM's TMPA 3B42 V7, CPC's CMORPH and CHRS at UCI's PERSIANN. All the satellite rainfall products used in this study represent 3-hourly, quarter-degree rainfall accumulations. Our ground reference is NCEP Stage IV radar-rainfall, which is available at an hourly, four-kilometer resolution. We discuss the adequacy of the Stage IV product as a ground reference for evaluating the satellite products. We used our rain gauge network in Iowa to evaluate the performance of the Stage IV data on different spatial and temporal scales. While arguably this adequacy is only marginal, we used the radar products to study the spatial and temporal correlation of the satellite product errors. We studied the behavior of the errors, defined as the difference between the satellite and radar product (with matched space-time resolution), during the period from the year 2004 through the year 2010. Our results show that the error behavior of the satellite rainfall products is quite similar.
Errors are less correlated during warm seasons and the errors of CMORPH and PERSIANN are more correlated than those of TRMM through the study period. We calculated the correlation distance for the different products and it was approximately 75 km. The results also show that the correlation decays considerably with time lag. Our results have implications for the hydrologic studies using satellite data as the error correlation determines basin scales that effectively can filter out the random errors.

ElSaadani, Mohamed; Krajewski, Witold; Seo, Bong Chul; Goska, Radoslaw

2013-04-01
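The error-correlation computation described here can be sketched with synthetic data (hypothetical numbers, not the study's actual products): define the error as satellite minus reference at each grid cell, then correlate the error time series of neighboring cells.

```python
import numpy as np

rng = np.random.default_rng(4)
t = 2_000                                   # number of 3-hourly time steps

# Hypothetical accumulations at two nearby grid cells: a ground reference
# plus a satellite product whose errors share a common (correlated) part.
shared_err = rng.gamma(2.0, 1.0, t)
ref_a, ref_b = rng.gamma(2.0, 2.0, t), rng.gamma(2.0, 2.0, t)
sat_a = ref_a + shared_err + rng.normal(0.0, 1.0, t)
sat_b = ref_b + shared_err + rng.normal(0.0, 1.0, t)

# Error = satellite minus reference; correlate the two error series.
err_a, err_b = sat_a - ref_a, sat_b - ref_b
spatial_corr = float(np.corrcoef(err_a, err_b)[0, 1])
print(round(spatial_corr, 2))
```

Repeating this over many cell pairs at different separations, and fitting the decay of the correlation with distance, yields the correlation distance reported in the abstract.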

51

Ultraspectral Sounding Retrieval Error Budget and Estimation

NASA Technical Reports Server (NTRS)

The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of the measurement of the thermodynamic state is the initialization of weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise, and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with associated RTM. In this paper, ECAS is described and demonstration is made with the measurements of the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

2011-01-01

52

Although the free energy perturbation procedure is exact when an infinite sample of configuration space is used, for finite sample size there is a systematic error resulting in hysteresis for forward and backward simulations. The qualitative behavior of this systematic error is first explored for a Gaussian distribution, then a first-order estimate of the error for any distribution is derived. To first order the error depends only on the fluctuations in the sample of potential energies, ΔE, and the sample size, n, but not on the magnitude of ΔE. The first-order estimate of the systematic sample-size error is used to compare the efficiencies of various computing strategies. It is found that slow-growth, free energy perturbation calculations will always have lower errors from this source than window-growth, free energy perturbation calculations for the same computing effort. The systematic sample-size errors can be entirely eliminated by going to thermodynamic integration rather than free energy perturbation calculations. When ΔE is a very smooth function of the coupling parameter, λ, thermodynamic integration with a relatively small number of windows is the recommended procedure because the time required for equilibration is reduced with a small number of windows. These results give a method of estimating this sample-size hysteresis during the course of a slow-growth, free energy perturbation run. This is important because in these calculations time-lag and sample-size errors can cancel, so that separate methods of estimating and correcting for each are needed. When dynamically modified window procedures are used, it is recommended that the estimated sample-size error be kept constant, not that the magnitude of ΔE be kept constant. Tests on two systems showed a rather small sample-size hysteresis in slow-growth calculations except in the first stages of creating a particle, where both fluctuations and sample-size hysteresis are large.

Wood, R.H.; Muehlbauer, W.C.F. (Univ. of Delaware, Newark (United States)); Thompson, P.T. (Swarthmore Coll., PA (United States))

1991-08-22
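The sample-size bias described above can be reproduced with a toy Gaussian ΔE (a sketch under our own assumed parameters, with β = 1/kT = 1): for small fluctuations, the finite-n free energy perturbation estimate is biased by approximately βσ²/(2n), independent of the magnitude of ΔE.

```python
import numpy as np

rng = np.random.default_rng(3)
kT = 1.0                                   # beta = 1/kT = 1
mu, sigma, n, trials = 0.0, 0.5, 50, 20_000

exact = mu - sigma**2 / (2.0 * kT)         # exact FEP result for Gaussian dE
dE = rng.normal(mu, sigma, (trials, n))
# Finite-n FEP estimator: dA = -kT ln <exp(-dE/kT)> over n samples.
est = -kT * np.log(np.mean(np.exp(-dE / kT), axis=1))

bias = float(est.mean() - exact)           # systematic error from finite n
first_order = sigma**2 / (2.0 * n * kT)    # predicted sample-size error
print(round(bias, 4), round(first_order, 4))
```

Running the same estimator in the reverse direction biases the result the opposite way, which is the forward/backward hysteresis the abstract refers to.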

53

This paper gives results of detecting the static systematic errors of satellite theodolites at seven astronomical observatories or stations of the Chinese Academy of Sciences. According to the optical and mechanical structure of the theodolites, a model with 10 error parameters for the static systematic errors of theodolites is proposed. Further, a method for fast searching of appointed stars is

Qinchang Lin; Youjun Deng; Chenngqiang Li

1999-01-01

54

Systematic Residual Ionospheric Error in the Radio Occultation Data

NASA Astrophysics Data System (ADS)

The Radio Occultation (RO) method is used to study the Earth's atmosphere in the troposphere and lower stratosphere. The path of a transmitted electromagnetic signal from a GPS satellite changes when passing through the ionosphere and neutral atmosphere. The altered signal is detected at a receiving Low Earth Orbit satellite and provides information about atmospheric parameters such as the refractivity of the Earth's atmosphere and, in a further processing step, e.g., pressure or temperature. The processing of the RO data has been done at the Wegener Center for Climate and Global Change. Different corrections are applied on the data, such as a kinematic Doppler correction, induced by the moving satellites, and an ionospheric correction due to the ionosphere's dispersive nature. The standard ionospheric correction enters via a series expansion, which is truncated after first order, and the correction term is proportional to the inverse square of the carrier frequency. Because of this approximation, we conjecture that a residual ionospheric error remains in the RO data, one that does not fully reflect the change of ionization from day to night, or between times of high and low solar activity. This residual ionospheric error is studied by analyzing the bending angle bias (and noise). It is obtained by comparing the bending angle profiles to Mass Spectrometer and Incoherent Scatter Radar (MSIS) climatology at altitudes between 65 km and 80 km. In order to detect the residual ionospheric induced error we investigate the bias over a time period from 2001 to 2010, using CHAMP and COSMIC RO data. The day and night time bias and noise are compared for different latitudinal zones. We focus on zones between 20°N to 60°N, 20°S to 20°N and 60°S to 20°S. Our analysis shows a difference between the day and night time bias.
While the night time bias is roughly constant over time, the day time bias increases in the years of high solar activity, and decreases in the years of low solar activity. The aim of our analysis is to quantify this systematic residual error in order to perform an advanced ionospheric correction in the processing of the RO data.
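The standard first-order correction the abstract describes is the dual-frequency linear combination of bending angles, which cancels the leading 1/f² ionospheric term. A minimal sketch, assuming the textbook combination (the function name and the sample bending-angle values are illustrative, not from the Wegener Center processing system):

```python
# First-order dual-frequency ionospheric correction for RO bending angles:
# a linear combination of the bending angles at the two GPS carrier
# frequencies that cancels the 1/f^2 dispersive term.

F1 = 1575.42e6  # GPS L1 carrier frequency [Hz]
F2 = 1227.60e6  # GPS L2 carrier frequency [Hz]

def corrected_bending_angle(alpha1, alpha2, f1=F1, f2=F2):
    """Ionosphere-corrected bending angle from the L1/L2 bending angles."""
    return (f1**2 * alpha1 - f2**2 * alpha2) / (f1**2 - f2**2)

# If each frequency carries a 1/f^2 ionospheric contribution on top of a
# common neutral-atmosphere bending angle, the combination removes it:
alpha_neutral = 0.02          # [rad], illustrative
ion = 1.0e12                  # strength of the dispersive term, illustrative
a1 = alpha_neutral + ion / F1**2
a2 = alpha_neutral + ion / F2**2
alpha_c = corrected_bending_angle(a1, a2)  # ~ alpha_neutral
```

Higher-order terms (which do not scale as 1/f²) survive this combination; those are exactly the residual the abstract sets out to quantify.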

Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.

2012-04-01

55

Duality based error estimation for electrostatic force computation

Author: Simon Pintarelli; supervisor: Prof. Ralf Hiptmair. 4 November 2010.

Hiptmair, Ralf

56

A Test for Large-Scale Systematic Errors in Maps of Galactic Reddening

Accurate maps of Galactic reddening are important for a number of applications, such as mapping the peculiar velocity field in the nearby Universe. Of particular concern are systematic errors which vary slowly as a function of position on the sky, as these would induce spurious bulk flow. We have compared the reddenings of Burstein & Heiles (BH) and those of Schlegel, Finkbeiner & Davis (SFD) to independent estimates of the reddening, for Galactic latitudes |b| > 10. Our primary source of Galactic reddening estimates comes from comparing the difference between the observed B-V colors of early-type galaxies, and the predicted B-V color determined from the B-V--Mg_2 relation. We have fitted a dipole to the residuals in order to look for large-scale systematic deviations. There is marginal evidence for a dipolar residual in the comparison between the SFD maps and the observed early-type galaxy reddenings. If this is due to an error in the SFD maps, then it can be corrected with a small (13%) multiplicative dipole term. We argue, however, that this difference is more likely to be due to a small (0.01 mag.) systematic error in the measured B-V colors of the early-type galaxies. This interpretation is supported by a smaller, independent data set (globular cluster and RR Lyrae stars), which yields a result inconsistent with the early-type galaxy residual dipole. BH reddenings are found to have no significant systematic residuals, apart from the known problem in the region 230 < l < 310, -20 < b < 20.
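The dipole fit to the reddening residuals described above can be sketched as an ordinary least-squares problem, modelling each residual as a monopole plus the dot product of a dipole vector with the unit vector toward the galaxy. All positions and residuals below are synthetic, not the published data:

```python
# Least-squares monopole + dipole fit to residuals on the sky:
# resid_i ~ m + d . n_i, with n_i the unit vector toward object i.
import numpy as np

def fit_dipole(l_deg, b_deg, resid):
    """Fit resid_i = m + d . n_i by least squares; returns (m, d)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    n = np.column_stack([np.cos(b) * np.cos(l),
                         np.cos(b) * np.sin(l),
                         np.sin(b)])
    A = np.column_stack([np.ones(len(resid)), n])
    coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
    return coef[0], coef[1:]

# Synthetic check: residuals generated from a known dipole are recovered.
rng = np.random.default_rng(0)
l = rng.uniform(0.0, 360.0, 500)
b = rng.uniform(-60.0, 60.0, 500)
d_true = np.array([0.01, -0.02, 0.005])
n = np.column_stack([np.cos(np.radians(b)) * np.cos(np.radians(l)),
                     np.cos(np.radians(b)) * np.sin(np.radians(l)),
                     np.sin(np.radians(b))])
resid = 0.003 + n @ d_true
m, d = fit_dipole(l, b, resid)
```

In practice one would weight by the per-object color errors and mask low-latitude regions, but the normal-equations structure is the same.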

Michael J. Hudson

1998-12-19

57

Minor Planet Observations to Identify Reference System Systematic Errors

NASA Astrophysics Data System (ADS)

In the 1930's Brouwer proposed using minor planets to correct the Fundamental System of celestial coordinates. Since then, many projects have used or proposed to use visual, photographic, photo detector, and space based observations to that end. From 1978 to 1990, a project was undertaken at the University of Texas utilizing the long focus and attendant advantageous plate scale (c. 7.37"/mm) of the 2.1m Otto Struve reflector's Cassegrain focus. The project followed precepts given in 1979. The program had several potential advantages over previous programs including high inclination orbits to cover half the celestial sphere, and, following Kristensen, the use of crossing points to remove entirely systematic star position errors from some observations. More than 1000 plates were obtained of 34 minor planets as part of this project. In July 2010 McDonald Observatory donated the plates to the Pisgah Astronomical Research Institute (PARI) in North Carolina. PARI is in the process of renovating the Space Telescope Science Institute GAMMA II modified PDS microdensitometer to scan the plates in the archives. We plan to scan the minor planet plates, reduce the plates to the densified ICRS using the UCAC4 positions (or the best available positions at the time of the reductions), and then determine the utility of attempting to find significant systematic corrections. Here we report the current status of various aspects of the project. Support from the National Science Foundation in the last millennium is gratefully acknowledged, as is help from Judit Ries and Wayne Green in packing and transporting the plates.

Hemenway, Paul D.; Duncombe, R. L.; Castelaz, M. W.

2011-04-01

58

A study of systematic errors in the PMD CamBoard nano

NASA Astrophysics Data System (ADS)

Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations is reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.

Chow, Jacky C. K.; Lichti, Derek D.

2013-04-01

59

Reducing Model Systematic Error over Tropical Pacific through SUMO Approach

NASA Astrophysics Data System (ADS)

Numerical models are key tools in projecting future climate change. However, state-of-the-art general circulation models (GCMs) exhibit significant systematic errors, and large uncertainty exists in future climate projections because of limitations in parameterization schemes and numerical formulations. We take a novel approach and build a super model (i.e., an optimal combination of several models): we coupled two atmospheric GCMs (AGCMs) with one ocean GCM (OGCM). The two AGCMs receive identical boundary conditions from the OGCM, while the OGCM is driven by a weighted flux combination from the AGCMs. The atmospheric models differ only in their convection scheme. As climate models show large sensitivity to convection schemes, this approach may be a good basis for constructing a super model. We performed experiments with a machine learning algorithm to adjust the weighting coefficients. The coupling strategy is able to synchronize the atmospheric variability of the two AGCMs in the tropics, particularly over the western equatorial Pacific, and produces reasonable climate variability. Furthermore, the model with optimal coefficients not only performs well for surface temperature and precipitation, but also reproduces the positive Bjerknes feedback and the negative heat flux feedback in good agreement with observations/reanalysis, leading to a substantially improved simulation of ENSO.

Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Wiegerinck, Wim; Duane, Gregory

2014-05-01

60

A posteriori pointwise error estimates for the boundary element method

This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

Paulino, G.H. [Cornell Univ., Ithaca, NY (United States). School of Civil and Environmental Engineering; Gray, L.J. [Oak Ridge National Lab., TN (United States); Zarikian, V. [Univ. of Central Florida, Orlando, FL (United States). Dept. of Mathematics

1995-01-01

61

Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

NASA Technical Reports Server (NTRS)

A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
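The inclusion-plus-spread procedure described above can be reduced to a single grid cell: keep the input products that fall within ±50% of the base (GPCP) estimate, then take the standard deviation s of the kept estimates as the bias error and s/m as the relative error. The numbers below are illustrative, not GPCP data:

```python
# Bias-error estimate for one grid cell, following the +/-50% inclusion
# rule and the spread-of-products definition sketched in the abstract.
import statistics

def bias_error(base, products, tol=0.5):
    """base: reference precipitation estimate (e.g. mm/day).
    products: other products' estimates for the same cell.
    Returns (s, s/m): absolute and relative bias-error estimates."""
    kept = [p for p in products if abs(p - base) <= tol * base]
    m = statistics.mean(kept)
    s = statistics.stdev(kept) if len(kept) > 1 else 0.0
    return s, s / m

# Example: base = 3.0 mm/day; 4.8 is rejected (more than 50% away).
s, rel = bias_error(3.0, [2.7, 3.2, 3.4, 4.8])
```

The paper applies the inclusion test on a zonal-mean basis (ocean and land separately) rather than per cell, but the arithmetic is the same.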

Adler, Robert; Gu, Guojun; Huffman, George

2012-01-01

62

Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

NASA Astrophysics Data System (ADS)

Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. 
These limitations reduce the set of all possible well combinations by 98 percent and show that size alone, as defined by triangle area, is not a valid discriminator of whether or not the estimator provides accurate estimates of the gradient magnitude and orientation. This research was funded by WIPP programs administered by the U.S. Department of Energy. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
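A three-point estimator of the kind studied above fits the plane h = a + gₓx + g_y y through three head measurements and reads off the gradient magnitude and direction. A minimal sketch (well coordinates and heads are made-up numbers):

```python
# Three-point hydraulic-gradient estimator: solve a 2x2 linear system
# built from head differences between the three wells.
import math

def three_point_gradient(pts):
    """pts: three (x, y, head) tuples, coordinates in metres.
    Returns (|grad|, azimuth_deg), where the azimuth (clockwise from
    north, i.e. +y) points toward increasing head; groundwater flow
    is in the opposite direction."""
    (x1, y1, h1), (x2, y2, h2), (x3, y3, h3) = pts
    a11, a12, b1 = x2 - x1, y2 - y1, h2 - h1
    a21, a22, b2 = x3 - x1, y3 - y1, h3 - h1
    det = a11 * a22 - a12 * a21          # zero iff the wells are collinear
    gx = (b1 * a22 - b2 * a12) / det
    gy = (a11 * b2 - a21 * b1) / det
    return math.hypot(gx, gy), math.degrees(math.atan2(gx, gy)) % 360.0

# Head rising due east by 0.001 per metre:
mag, az = three_point_gradient([(0, 0, 10.0), (100, 0, 10.1), (0, 100, 10.0)])
```

The shape sensitivity the abstract discusses enters through `det`: flat triangles (extreme base-to-height ratios) make the system ill-conditioned, amplifying measurement error.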

McKenna, S. A.; Wahi, A. K.

2003-12-01

63

CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes

NASA Technical Reports Server (NTRS)

Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43+/-0.35 PgC /yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

2012-01-01

64

A Note on Confidence Interval Estimation and Margin of Error

ERIC Educational Resources Information Center

Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…

Gilliland, Dennis; Melfi, Vince

2010-01-01

65

Inertial and Magnetic Sensor Data Compression Considering the Estimation Error

This paper presents a compression method for inertial and magnetic sensor data, where the compressed data are used to estimate some states. When sensor data are bounded, the proposed compression method guarantees that the compression error is smaller than a prescribed bound. The manner in which this error bound affects the bit rate and the estimation error is investigated. Through the simulation, it is shown that the estimation error is improved by 18.81% over a test set of 12 cases compared with a filter that does not use the compression error bound. PMID:22454564

Suh, Young Soo

2009-01-01

66

Impact of Systematic Errors in Sunyaev-Zel'dovich Surveys of Galaxy Clusters

Future high-resolution microwave background measurements hold the promise of detecting galaxy clusters throughout our Hubble volume through their Sunyaev-Zel'dovich (SZ) signature, down to a given limiting flux. The number density of galaxy clusters is highly sensitive to cluster mass through fluctuations in the matter power spectrum, as well as redshift through the comoving volume and the growth factor. This sensitivity in principle allows tight constraints on such quantities as the equation of state of dark energy and the neutrino mass. We evaluate the ability of future cluster surveys to measure these quantities simultaneously when combined with PLANCK-like CMB data. Using a simple effective model for uncertainties in the cluster mass-SZ flux relation, we evaluate systematic shifts in cosmological constraints from cluster SZ surveys. We find that a systematic bias of 10% in cluster mass measurements can give rise to shifts in cosmological parameter estimates at levels larger than the 1σ statistical errors. Systematic errors are unlikely to be detected from the mass and redshift dependence of cluster number counts alone; increasing survey size has only a marginal effect. Implications for upcoming experiments are discussed.

Matthew R. Francis; Rachel Bean; Arthur Kosowsky

2005-11-15

67

Systematic errors in weak lensing: application to SDSS galaxy-galaxy weak lensing

Weak lensing is emerging as a powerful observational tool to constrain cosmological models, but is at present limited by an incomplete understanding of many sources of systematic error. Many of these errors are multiplicative and depend on the population of background galaxies. We show how the commonly cited geometric test, which is rather insensitive to cosmology, can be used as a ratio test of systematics in the lensing signal at the 1 per cent level. We apply this test to the galaxy-galaxy lensing analysis of the Sloan Digital Sky Survey (SDSS), which at present is the sample with the highest weak lensing signal to noise and has the additional advantage of spectroscopic redshifts for lenses. This allows one to perform meaningful geometric tests of systematics for different subsamples of galaxies at different mean redshifts, such as brighter galaxies, fainter galaxies and high-redshift luminous red galaxies, both with and without photometric redshift estimates. We use overlapping objects between SDSS and th...

Mandelbaum, R; Seljak, U; Guzik, J; Padmanabhan, N; Blake, C; Blanton, M R; Lupton, R; Brinkmann, J; Mandelbaum, Rachel; Hirata, Christopher M.; Seljak, Uros; Guzik, Jacek; Padmanabhan, Nikhil; Blake, Cullen; Blanton, Michael R.; Lupton, Robert; Brinkmann, Jonathan

2005-01-01

68

Estimating IMU heading error from SAR images.

Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

Doerry, Armin Walter

2009-03-01

69

MEAN SQUARED ERROR ESTIMATION FOR SMALL AREAS WHEN THE SMALL AREA VARIANCES ARE ESTIMATED

This paper suggests a generalization to Prasad and Rao's estimator for the mean squared errors of small area estimators. This new approach uses the conditional mean squared error estimator of Rivest and Belmonte (2000) as an intermediate step in the derivation. It is used in this paper to incorporate, in the mean squared error estimator for a small area, uncertainty

Louis-Paul Rivest; Nathalie Vandal; L.-P. Rivest

2003-01-01

70

Probabilistic state estimation in regimes of nonlinear error growth

State estimation, or data assimilation as it is often called, is a key component of numerical weather prediction (NWP). Nearly all implementable methods of state estimation suitable for NWP are forced to assume that errors ...

Lawson, W. Gregory, 1975-

2005-01-01

71

Systematic errors in weak lensing: application to SDSS galaxy-galaxy weak lensing

Weak lensing is emerging as a powerful observational tool to constrain cosmological models, but is at present limited by an incomplete understanding of many sources of systematic error. Many of these errors are multiplicative and depend on the population of background galaxies. We show how the commonly cited geometric test, which is rather insensitive to cosmology, can be used as a ratio test of systematics in the lensing signal at the 1 per cent level. We apply this test to the galaxy-galaxy lensing analysis of the Sloan Digital Sky Survey (SDSS), which at present is the sample with the highest weak lensing signal to noise and has the additional advantage of spectroscopic redshifts for lenses. This allows one to perform meaningful geometric tests of systematics for different subsamples of galaxies at different mean redshifts, such as brighter galaxies, fainter galaxies and high-redshift luminous red galaxies, both with and without photometric redshift estimates. We use overlapping objects between SDSS and the DEEP2 and 2SLAQ spectroscopic surveys to establish accurate calibration of photometric redshifts and to determine the redshift distributions for SDSS. We use these redshift results to compute the projected surface density contrast DeltaSigma around 259 609 spectroscopic galaxies in the SDSS; by measuring DeltaSigma with different source samples we establish consistency of the results at the 10 per cent level (1-sigma). We also use the ratio test to constrain shear calibration biases and other systematics in the SDSS survey data to determine the overall galaxy-galaxy weak lensing signal calibration uncertainty. We find no evidence of any inconsistency among many subsamples of the data.

Rachel Mandelbaum; Christopher M. Hirata; Uros Seljak; Jacek Guzik; Nikhil Padmanabhan; Cullen Blake; Michael R. Blanton; Robert Lupton; Jonathan Brinkmann

2005-08-10

72

First-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

NASA Technical Reports Server (NTRS)

We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data, supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

2003-01-01

73

Systematic biases in parameter estimation of binary black-hole mergers

Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched-filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratio (SNR). These biases grow to be comparable to the statistical errors at high ground-based-instrument SNRs (SNR=50), but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors, but for astrophysical black hole mass estimates the absolute biases (of at most a few percent) are still fairly small.

Tyson B. Littenberg; John G. Baker; Alessandra Buonanno; Bernard J. Kelly

2012-10-02

74

Robust identification of fuzzy model on H∞ error estimation

The paper proposes a new fuzzy identification method based on H∞ error estimation for the issues of robust identification of fuzzy models. The H∞ state estimation is applied to the parameter identification of the fuzzy model. The presented algorithm not only guarantees a specified level of robustness, but also provides an optimized error upper bound. Finally,

Hongwei Wang; Jia Wang; Hong Gu

2010-01-01

75

POINTWISE ERROR ESTIMATES FOR RELAXATION APPROXIMATIONS TO CONSERVATION LAWS

Eitan Tadmor and Tao Tang. Key words: conservation laws, error estimates, relaxation method, maximum principle.

Soatto, Stefano

76

A Priori Error Estimates for Some Discontinuous Galerkin Immersed ...

estimate in a mesh-dependent energy norm is derived, and this error estimate … “ideal” error indicator to guide the local refinement just for a proof of concept. … Next, we report the performance of the adaptive DG-IFE method for solving the

2015-01-12

77

Fisher classifier and its probability of error estimation

NASA Technical Reports Server (NTRS)

Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
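The two-class setting in this abstract can be sketched directly: compute Fisher's direction from the pooled within-class scatter, project onto it, and classify with the midpoint threshold. The leave-one-out error below is computed by brute-force refitting; the paper's contribution is precisely the efficient closed-form expressions that avoid this loop. The data are synthetic:

```python
# Fisher linear discriminant with a brute-force leave-one-out error
# estimate (the paper derives computationally efficient expressions
# for the same quantity instead of refitting n times).
import numpy as np

def fisher_fit(X1, X2):
    """Returns Fisher direction w = Sw^{-1}(m1 - m2) and midpoint threshold."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (np.cov(X1.T) * (len(X1) - 1)) + (np.cov(X2.T) * (len(X2) - 1))
    w = np.linalg.solve(Sw, m1 - m2)
    return w, 0.5 * (w @ m1 + w @ m2)

def loo_error(X1, X2):
    """Leave-one-out error rate: refit with each sample held out in turn."""
    errs, n = 0, len(X1) + len(X2)
    for cls, X, other in ((1, X1, X2), (2, X2, X1)):
        for i in range(len(X)):
            Xtr = np.delete(X, i, axis=0)
            w, t = fisher_fit(Xtr, other) if cls == 1 else fisher_fit(other, Xtr)
            side = (w @ X[i]) > t        # True => classified as class 1
            if (cls == 1) != side:
                errs += 1
    return errs / n

rng = np.random.default_rng(1)
X1 = rng.normal([3.0, 3.0], 0.5, size=(20, 2))  # well-separated classes
X2 = rng.normal([0.0, 0.0], 0.5, size=(20, 2))
err = loo_error(X1, X2)
```

Since w⊤(m1 − m2) = (m1 − m2)⊤Sw⁻¹(m1 − m2) > 0, class 1 always lies on the high side of the projection, which fixes the sign convention in the threshold test.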

Chittineni, C. B.

1979-01-01

78

Finite element error estimation and adaptivity based on projected stresses

This report investigates the behavior of a family of finite element error estimators based on projected stresses, i.e., continuous stresses that are a least-squares fit to the conventional Gauss point stresses. An error estimate based on element force equilibrium appears to be quite effective. Examples of adaptive mesh refinement for a one-dimensional problem are presented. Plans for two-dimensional adaptivity are discussed. 12 refs., 82 figs.

Jung, J.

1990-08-01

79

Formal Estimation of Errors in Computed Absolute Interaction Energies of Protein-ligand Complexes

A largely unsolved problem in computational biochemistry is the accurate prediction of binding affinities of small ligands to protein receptors. We present a detailed analysis of the systematic and random errors present in computational methods through the use of error probability density functions, specifically for computed interaction energies between chemical fragments comprising a protein-ligand complex. An HIV-II protease crystal structure with a bound ligand (indinavir) was chosen as a model protein-ligand complex. The complex was decomposed into twenty-one (21) interacting fragment pairs, which were studied using a number of computational methods. The chemically accurate complete basis set coupled cluster theory (CCSD(T)/CBS) interaction energies were used as reference values to generate our error estimates. In our analysis we observed significant systematic and random errors in most methods, which was surprising especially for parameterized classical and semiempirical quantum mechanical calculations. After propagating these fragment-based error estimates over the entire protein-ligand complex, our total error estimates for many methods are large compared to the experimentally determined free energy of binding. Thus, we conclude that statistical error analysis is a necessary addition to any scoring function attempting to produce reliable binding affinity predictions. PMID:21666841
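The propagation step described above can be sketched as follows: per-fragment systematic (mean) errors add linearly over the complex, while per-fragment random errors add in quadrature. This is a minimal reading of the procedure, with invented fragment statistics rather than the paper's CCSD(T)/CBS-referenced values:

```python
# Propagating fragment-pair error estimates to a whole protein-ligand
# complex: systematic components sum linearly, random components in
# quadrature (assuming independent fragment errors).
import math

def propagate(fragment_errors):
    """fragment_errors: list of (mean_error, std_error) per fragment pair,
    e.g. in kcal/mol. Returns (total_systematic, total_random)."""
    systematic = sum(m for m, s in fragment_errors)
    random = math.sqrt(sum(s**2 for m, s in fragment_errors))
    return systematic, random

frags = [(0.5, 0.3), (-0.2, 0.4), (0.3, 0.5)]  # illustrative values
sys_err, rand_err = propagate(frags)
```

Note how systematic errors of opposite sign partially cancel while random errors never do; with ~21 fragment pairs, even modest per-fragment spreads accumulate to totals comparable to typical binding free energies, which is the paper's central point.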

Faver, John C.; Benson, Mark L.; He, Xiao; Roberts, Benjamin P.; Wang, Bing; Marshall, Michael S.; Kennedy, Matthew R.; Sherrill, C. David; Merz, Kenneth M.

2011-01-01

80

Improved Error Estimate for the Valence Approximation

We construct a systematic mean-field-improved coupling constant and quark loop expansion for corrections to the valence (quenched) approximation to vacuum expectation values in the lattice formulation of QCD. Terms in the expansion are evaluated by a combination of weak coupling perturbation theory and a Monte Carlo algorithm.

W. Lee; D. Weingarten

1998-04-10

81

Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect

NASA Technical Reports Server (NTRS)

Imaging of the Sunyaev-Zel'dovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

Sulkanen, Martin E.; Patel, Sandeep K.

1998-01-01

82

Least Relative Error Estimation Yuanyuan Lin

Least relative error estimation is useful in analyzing data with positive responses, such as stock prices or lifetimes, in many practical applications; for example, in treating stock price data, the relative error of stock returns on the Hong Kong Stock Exchange. This was joint work with Kani Chen, Shaojun Guo, and Zhiliang

Jin, Jiashun

83

Error estimates on averages of correlated data

NASA Astrophysics Data System (ADS)

We describe how the true statistical error on an average of correlated data can be obtained with ease and efficiency by a renormalization group method. The method is illustrated with numerical and analytical examples, having finite as well as infinite range correlations.
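The renormalization group method referred to here is commonly known as Flyvbjerg-Petersen blocking: repeatedly average neighbouring pairs of data points and track the naive error estimate at each level. A minimal sketch (the function name and return convention are ours, not from the paper):

```python
import numpy as np

def blocking_error(x):
    """Estimate the standard error of the mean of a (possibly correlated)
    series by blocking: at each level, record the naive standard error,
    then average neighbouring pairs and repeat."""
    x = np.asarray(x, dtype=float)
    estimates = []
    while len(x) >= 2:
        n = len(x)
        # naive standard error of the mean at this blocking level
        estimates.append(np.sqrt(np.var(x, ddof=0) / (n - 1)))
        # block transform: average neighbouring pairs
        x = 0.5 * (x[0::2][: n // 2] + x[1::2][: n // 2])
    return estimates
```

For uncorrelated data the level-0 estimate is already unbiased; for correlated data the estimates grow with blocking level until they plateau at the true statistical error, which is the value to read off.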

Flyvbjerg, H.; Petersen, H. G.

1989-07-01

84

PDE-constrained optimization with error estimation and control

NASA Astrophysics Data System (ADS)

The paper describes an algorithm for PDE-constrained optimization that controls numerical errors using error estimates and grid adaptation during the optimization process. A key aspect of the algorithm is the use of adjoint variables to estimate errors in the first-order optimality conditions. Multilevel optimization is used to drive the optimality conditions and their estimated errors below a specified tolerance. The error estimate requires two additional adjoint solutions, but only at the beginning and end of each optimization cycle. Moreover, the adjoint systems can be formed and solved with limited additional infrastructure beyond that found in typical PDE-constrained optimization algorithms. The approach is general and can accommodate both reduced-space and full-space formulations of the optimization problem. The algorithm is illustrated using the inverse design of a nozzle constrained by the quasi-one-dimensional Euler equations.

Hicken, J. E.; Alonso, J. J.

2014-04-01

85

A Systematic Review of Software Development Cost Estimation Studies

This paper aims to provide a basis for the improvement of software-estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set. A Web-based library of these cost estimation papers is provided to ease

M. Jorgensen; Martin Shepperd

2007-01-01

86

Analysis of possible systematic errors in the Oslo method

In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of level density and gamma-ray transmission coefficient from a set of particle-gamma coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

A. C. Larsen; M. Guttormsen; M. Krticka; E. Betak; A. Bürger; A. Görgen; H. T. Nyhus; J. Rekstad; A. Schiller; S. Siem; H. K. Toft; G. M. Tveten; A. V. Voinov; K. Wikan

2012-11-27

87

Analysis of possible systematic errors in the Oslo method

In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and {gamma}-ray transmission coefficient from a set of particle-{gamma} coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K. [Department of Physics, University of Oslo, N-0316 Oslo (Norway); Krticka, M. [Institute of Particle and Nuclear Physics, Charles University, Prague (Czech Republic); Betak, E. [Institute of Physics SAS, 84511 Bratislava (Slovakia); Faculty of Philosophy and Science, Silesian University, 74601 Opava (Czech Republic); Schiller, A.; Voinov, A. V. [Department of Physics and Astronomy, Ohio University, Athens, Ohio 45701 (United States)

2011-03-15

88

This paper addresses an innovative method for the measurement and correction of systematic odometry errors caused by the kinematics imperfections in the differential drive mobile robots. An occasional systematic calibration of the mobile robot increases the odometric accuracy and reduces operational cost, as less frequent absolute positioning updates are required during the operation. Conventionally, the tests used for this purpose

Tanveer Abbas; M. Arif; W. Ahmed

2006-01-01

89

NASA Astrophysics Data System (ADS)

The surface gravity data collected via traditional techniques such as ground-based, shipboard and airborne gravimetry describe precisely the local gravity field, but they are often biased by systematic errors. On the other hand, the spherical harmonic gravity models determined from satellite missions, in particular, recent models from CHAMP and GRACE, homogeneously and accurately describe the low-degree components of the Earth's gravity field. However, they are subject to large omission errors. The surface and satellite gravity data are therefore complementary in terms of spectral composition. In this paper, we aim to assess the systematic errors of low spherical harmonic degrees in the surface gravity anomalies over North America using a GRACE gravity model. A prerequisite is the extraction of the low-degree components from the surface data to make them compatible with GRACE data. Three types of methods are tested using synthetic data: low-pass filtering, the inverse Stokes integral, and spherical harmonic analysis. The results demonstrate that the spherical harmonic analysis works best. Eighty-five per cent of the difference between the synthetic gravity anomalies generated from EGM96 and GGM02S from degrees 2 to 90 can be modelled for a region covering North America and neighbouring areas. Assuming EGM96 is developed solely from the surface gravity data with the same accuracy and GGM02S is errorless, one way to understand the 85 per cent difference is that it represents the systematic error from the region of study, while the remaining 15 per cent originates from the data outside of the region. To estimate systematic errors in the surface gravity data, Helmert gravity anomalies are generated from both surface and GRACE data on the geoid. Their differences are expanded into surface spherical harmonics. The results show that the systematic errors for degrees 2 to 90 range from about -6 to 13 mGal with an RMS value of 1.4 mGal over North America.
A few significant data gaps can be identified from the resulting error map. The errors over oceans appear to be related to the sea surface topography. These systematic errors must be taken into consideration when the surface gravity data are used to validate future satellite gravity missions.
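One standard low-pass option of the kind tested in such studies is Gaussian averaging with per-degree weights computed by Jekeli's recursion, the form widely used in GRACE processing. A hedged sketch (the function name, defaults, and truncation guard are our illustrative assumptions, not the authors' code):

```python
import numpy as np

def gaussian_degree_weights(nmax, radius_km, earth_radius_km=6371.0):
    """Gaussian averaging weights W_n per spherical-harmonic degree,
    normalized so W_0 = 1, via the standard three-term recursion
    (Jekeli 1981; popularized for GRACE smoothing)."""
    b = np.log(2.0) / (1.0 - np.cos(radius_km / earth_radius_km))
    w = np.empty(nmax + 1)
    w[0] = 1.0
    w[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for n in range(1, nmax):
        w[n + 1] = -(2 * n + 1) / b * w[n] + w[n - 1]
        if w[n + 1] < 0.0:
            # the recursion is numerically unstable once the weights
            # underflow; truncate the tail to zero
            w[n + 1:] = 0.0
            break
    return w
```

Multiplying a field's degree-n coefficients by W_n then attenuates wavelengths shorter than roughly the averaging radius, which is the low-pass behaviour the comparison above is probing.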

Huang, J.; Véronneau, M.; Mainville, A.

2008-10-01

90

Fast Error Estimates For Indirect Measurements: Applications To Pavement Engineering

Indirect measurements estimate a quantity y that is difficult to measure directly (e.g., lifetime of a pavement, efficiency of an engine, etc.). To estimate y ... computation time. As an example of this methodology, we give pavement lifetime estimates. This work

Kreinovich, Vladik

91

NASA Astrophysics Data System (ADS)

Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler Spacecraft. The main cause for the systematic errors are changes in the photometer focus due to thermal changes in the instrument, and also residual spacecraft pointing errors. It is the main purpose of the Presearch-Data-Conditioning (PDC) module of the Kepler Science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high frequency noise can be observed in some light curves. Although this high frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise-injection is that high frequency components of light curves sometimes get included into detrending basis vectors characterizing long term trends. Similarly, small scale features like edges can sometimes get included in basis vectors which otherwise describe low frequency trends. As a side effect to removing the trends, detrending with these basis vectors can then also mistakenly introduce these small scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small scale features and large scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small scale from large scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small and large scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.
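The separation-of-scales idea can be illustrated with a deliberately crude stand-in for the wavelet band split (moving-average bands instead of wavelets; the function names, the smoothing width, and the least-squares detrending are our assumptions, not the Kepler pipeline's implementation):

```python
import numpy as np

def split_scales(flux, width=101):
    """Crude separation of scales: a moving-average low-pass band plus
    the high-frequency residual. (PDC itself uses wavelet-based
    band-splitting; this is only a stand-in for illustration.)"""
    kernel = np.ones(width) / width
    low = np.convolve(flux, kernel, mode="same")
    return low, flux - low

def remove_trends(band, basis):
    """Least-squares removal of systematic basis vectors (rows of
    `basis`) from a single band, so small-scale features in one band
    cannot leak into the detrending of the other."""
    coef, *_ = np.linalg.lstsq(basis.T, band, rcond=None)
    return band - basis.T @ coef
```

Detrending each band against basis vectors built at its own scale avoids the noise-injection problem described above, where high-frequency components embedded in long-trend basis vectors get stamped into the corrected light curve.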

Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

2012-05-01

92

Systematic errors in weak lensing: application to SDSS galaxy-galaxy weak lensing

Weak lensing is emerging as a powerful observational tool to constrain cosmological models, but is at present limited by an incomplete understanding of many sources of systematic error. Many of these errors are multiplicative and depend on the population of background galaxies. We show how the commonly cited geometric test, which is rather insensitive to cosmology, can be used as

Rachel Mandelbaum; Christopher M. Hirata; Uros Seljak; Jacek Guzik; Nikhil Padmanabhan; Cullen Blake; Michael R. Blanton; Robert Lupton; Jonathan Brinkmann

2005-01-01

93

FORWARD AND RETRANSMITTED SYSTEMATIC LOSSY ERROR PROTECTION FOR IPTV VIDEO MULTICAST

Advances in video and networking technologies have made ... and lightning strikes. Depending on the duration, impulse noise can be put into three categories, namely

Girod, Bernd

94

Report no. 05/17 Sharp error estimates for discretisations

Sharp error estimates are derived for discretisations of the constant-coefficient 1D convection/diffusion equation with Dirac initial data. Key words and phrases: convection/diffusion equation, Crank-Nicolson time-marching, Rannacher startup, Dirac initial data.

Giles, Mike

95

Estimation of scattering error in spectrophotometric measurements of light absorption

Scattering error in measurements of light absorption by aquatic particles with a typical laboratory double-beam spectrophotometer ... function of particles. We applied this method to absorption measurements made on marine phytoplankton

Stramski, Dariusz

96

Estimating the sources of motor errors for adaptation and generalization

Motor adaptation is usually defined as the process by which our nervous system produces accurate movements, while the properties of our bodies and our environment continuously change. Numerous experimental and theoretical studies have characterized this process by assuming that the nervous system uses internal models to compensate for motor errors. Here we extend these approaches and construct a probabilistic model that not only compensates for motor errors but estimates the sources of these errors. These estimates dictate how the nervous system should generalize. For example, estimated changes of limb properties will affect movements across the workspace but not movements with the other limb. We provide evidence that many movement generalization phenomena emerge from a strategy by which the nervous system estimates the sources of our motor errors. PMID:19011624

Berniker, Max; Kording, Konrad

2009-01-01

97

Using doppler radar images to estimate aircraft navigational heading error

A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)

2012-07-03

98

A Systematic Review of Software Development Cost Estimation Studies

Magne Jørgensen, Simula. The review identifies 304 software cost estimation papers in 76 journals, classifies the papers according to research topic, and provides recommendations for future software cost estimation research: 1) Increase the breadth

99

A Systematic Review of Software Development Cost Estimation Studies

This paper aims to provide a basis for the improvement of software estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research topic, estimation approach, research approach, study context and data set. Based on the review, we provide recommendations for future software cost

Magne Jørgensen; Martin J. Shepperd

2007-01-01

100

Evaluating concentration estimation errors in ELISA microarray experiments

Background Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to estimate a protein's concentration in a sample. Deploying ELISA in a microarray format permits simultaneous estimation of the concentrations of numerous proteins in a small sample. These estimates, however, are uncertain due to processing error and biological variability. Evaluating estimation error is critical to interpreting biological significance and improving the ELISA microarray process. Estimation error evaluation must be automated to realize a reliable high-throughput ELISA microarray system. In this paper, we present a statistical method based on propagation of error to evaluate concentration estimation errors in the ELISA microarray process. Although propagation of error is central to this method and the focus of this paper, it is most effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization, and statistical diagnostics when evaluating ELISA microarray concentration estimation errors. Results We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of concentration estimation errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error. We summarize the results with a simple, three-panel diagnostic visualization featuring a scatterplot of the standard data with logistic standard curve and 95% confidence intervals, an annotated histogram of sample measurements, and a plot of the 95% concentration coefficient of variation, or relative error, as a function of concentration. Conclusions This statistical method should be of value in the rapid evaluation and quality control of high-throughput ELISA microarray analyses. 
Applying propagation of error to a variety of ELISA microarray concentration estimation models is straightforward. Displaying the results in the three-panel layout succinctly summarizes both the standard and sample data while providing an informative critique of applicability of the fitted model, the uncertainty in concentration estimates, and the quality of both the experiment and the ELISA microarray process. PMID:15673468
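As a hedged illustration of propagating response error through a fitted standard curve, consider a four-parameter logistic model (a common ELISA choice, though the paper's exact model and parameterization may differ; all names and values below are ours):

```python
def four_pl(x, a, b, c, d):
    """Four-parameter logistic standard curve: response as a function of
    concentration (a = response at zero, d = response at saturation,
    c = midpoint concentration, b = slope)."""
    return d + (a - d) / (1.0 + (x / c) ** b)

def concentration(y, a, b, c, d):
    """Invert the 4PL curve to estimate concentration from a response."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def concentration_error(y, sigma_y, a, b, c, d, dy=1e-6):
    """First-order propagation of error, sigma_x = |dx/dy| * sigma_y,
    with the derivative of the inverse curve taken numerically."""
    dxdy = (concentration(y + dy, a, b, c, d) -
            concentration(y - dy, a, b, c, d)) / (2.0 * dy)
    return abs(dxdy) * sigma_y
```

Dividing the propagated error by the estimated concentration gives the relative error (coefficient of variation) plotted in the third panel of the diagnostic visualization described above.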

Daly, Don Simone; White, Amanda M; Varnum, Susan M; Anderson, Kevin K; Zangar, Richard C

2005-01-01

101

Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

NASA Technical Reports Server (NTRS)

Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and estimate this error eo. We find eo can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry: ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. 
In one path the entire error ec is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.

Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

2000-01-01

102

Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

NASA Technical Reports Server (NTRS)

Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have approximately 2am/2pm orbital geometry) are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and estimate this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry. ed arises from the diurnal cycle in temperature and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. 
We have followed two different paths to assess the impact of the error ec on the global temperature trend. In one path the entire error ec is placed in the am data while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.

Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

2000-01-01

103

Error Estimates for Generalized Barycentric Interpolation

We prove the optimal convergence estimate for first order interpolants used in finite element methods based on three major approaches for generalizing barycentric interpolation functions to convex planar polygonal domains. The Wachspress approach explicitly constructs rational functions, the Sibson approach uses Voronoi diagrams on the vertices of the polygon to define the functions, and the Harmonic approach defines the functions as the solution of a PDE. We show that given certain conditions on the geometry of the polygon, each of these constructions can obtain the optimal convergence estimate. In particular, we show that the well-known maximum interior angle condition required for interpolants over triangles is still required for Wachspress functions but not for Sibson functions. PMID:23338826
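The Wachspress construction named above can be sketched directly for convex polygons (a minimal illustrative implementation, not the authors' code): each weight is the area of the triangle formed by a vertex and its two neighbours, divided by the two triangle areas the query point makes with the adjacent edges.

```python
import numpy as np

def tri_area(p, q, r):
    """Signed area of triangle (p, q, r)."""
    return 0.5 * ((q[0] - p[0]) * (r[1] - p[1])
                  - (r[0] - p[0]) * (q[1] - p[1]))

def wachspress(x, verts):
    """Wachspress barycentric coordinates of a point x strictly inside
    the convex polygon with counter-clockwise vertices `verts`."""
    v = np.asarray(verts, dtype=float)
    n = len(v)
    w = np.empty(n)
    for i in range(n):
        prev, nxt = v[i - 1], v[(i + 1) % n]
        C = tri_area(prev, v[i], nxt)        # fixed vertex-triangle area
        A_prev = tri_area(x, prev, v[i])     # areas to the two edges
        A_next = tri_area(x, v[i], nxt)      #   adjacent to vertex i
        w[i] = C / (A_prev * A_next)
    return w / w.sum()                       # normalize: partition of unity
```

The normalized coordinates sum to one and reproduce linear functions exactly, which is the starting point for the interpolation error estimates discussed in the abstract.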

Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

2011-01-01

104

Stability and error estimation for Component Adaptive Grid methods

NASA Technical Reports Server (NTRS)

Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids. The convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

Oliger, Joseph; Zhu, Xiaolei

1994-01-01

105

Period Error Estimation for the Kepler Eclipsing Binary Catalog

NASA Astrophysics Data System (ADS)

The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σP ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
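The period-error model quoted in the abstract is straightforward to evaluate (a direct transcription of the two regimes; the function name is ours):

```python
import math

def kebc_period_error(period_days):
    """Period error (days) for a KEBC eclipsing binary, per the simple
    model in the abstract: log sigma_P ~ -5.8908 + 1.4425(1 + log P)
    for P < 62.5 d, and a constant ~0.0144 d for longer periods."""
    if period_days >= 62.5:
        return 0.0144
    return 10.0 ** (-5.8908 + 1.4425 * (1.0 + math.log10(period_days)))
```

Note that the power-law branch evaluated at P = 62.5 days gives roughly 0.014 days, so the two regimes join approximately continuously at the cutoff.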

Mighell, Kenneth J.; Plavchan, Peter

2013-06-01

106

PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG

The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σP ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.

Mighell, Kenneth J. [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Plavchan, Peter [NASA Exoplanet Science Institute, California Institute of Technology, Pasadena, CA 91125 (United States)

2013-06-15

107

An Empirical State Error Covariance Matrix for Batch State Estimation

NASA Technical Reports Server (NTRS)

State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. 
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two observer, triangulation problem with range only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
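A minimal sketch of the idea, assuming a standard weighted-least-squares setup: scale the formal covariance (HᵀWH)⁻¹ by the average weighted residual variance, so that the actual residuals (which carry all error sources, known and unknown) set the overall uncertainty level. The m - n divisor is our choice of "average" form; the paper's exact normalization may differ.

```python
import numpy as np

def empirical_covariance(H, W, y, x_hat):
    """Empirical state error covariance for weighted least squares:
    the formal covariance inv(H^T W H) scaled by the average weighted
    residual variance computed from the actual measurement residuals."""
    r = y - H @ x_hat                        # measurement residuals
    m, n = H.shape
    j_avg = (r @ W @ r) / (m - n)            # average residual variance
    formal = np.linalg.inv(H.T @ W @ H)      # traditional formal covariance
    return j_avg * formal
```

When the assumed observation-error model is correct, j_avg is near one and the empirical matrix reproduces the formal one; when errors are mismodeled, the residuals inflate (or deflate) the covariance accordingly.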

Frisbee, Joseph H., Jr.

2011-01-01

108

Estimate of higher order ionospheric errors in GNSS positioning

NASA Astrophysics Data System (ADS)

Precise navigation and positioning using GPS/GLONASS/Galileo require the ionospheric propagation errors to be accurately determined and corrected for. The current dual-frequency method of ionospheric correction ignores higher order ionospheric errors such as the second and third order ionospheric terms in the refractive index formula and errors due to bending of the signal. The total electron content (TEC) is assumed to be the same at the two GPS frequencies. All these assumptions lead to erroneous estimations and corrections of the ionospheric errors. In this paper a rigorous treatment of these problems is presented. Different approximation formulas have been proposed to correct errors due to excess path length in addition to the free space path length, TEC difference at two GNSS frequencies, and third-order ionospheric term. The GPS dual-frequency residual range errors can be corrected within millimeter level accuracy using the proposed correction formulas.

Hoque, M. Mainul; Jakowski, N.

2008-10-01

109

The Entropy in Learning Theory. Error Estimates

We continue the investigation of some problems in learning theory in the setting formulated by F. Cucker and S. Smale. The goal is to find an estimator, on the basis of given data, that approximates well the regression function of an unknown Borel probability measure. We assume that the regression function belongs to a given function class. It is known from previous

S. V. Konyagin; V. N. Temlyakov

2007-01-01

110

Analysis of the Systematic Errors Found in the Kipp & Zonen Large-Aperture Scintillometer

NASA Astrophysics Data System (ADS)

Studies have shown a systematic error in the Kipp & Zonen large-aperture scintillometer (K&ZLAS) measurements of the sensible heat flux, H. We improved on these studies and compared four K&ZLASs with a Wageningen large-aperture scintillometer at the Chilbolton Observatory. The scintillometers were installed such that their footprints were the same, and independent flux measurements were made along the measurement path. This allowed us to compare H and the direct scintillometer output, the refractive index structure parameter, Cn2. Furthermore, spectral analysis was performed on the raw scintillometer signal to investigate the characteristics of the error. Firstly, correlation coefficients ≥ 0.99 confirm the robustness of the scintillometer method, and secondly we discovered two systematic errors: the low-Cn2 error and the high-Cn2 error. The low-Cn2 error is a non-linear error caused by high-frequency noise, and we suspect it originates in the calibration circuit in the receiver. It varies between each K&ZLAS, is significant for H ≤ 50 W m-2, and we propose a solution to remove this error using the demodulated signal. The high-Cn2 error identified by us is the systematic error found in previous studies. We suspect this error to be caused by poor focal alignment of the receiver detector and the transmitter light-emitting diode, which causes ineffective use of the Fresnel lens in the current Kipp & Zonen design. It varies between each K&ZLAS (35% up to 240%) and can only be removed by comparing with a reference scintillometer in the field.

van Kesteren, B.; Hartogensis, O. K.

2011-03-01

111

Analysis of systematic error in “bead method” measurements of meteorite bulk volume and density

NASA Astrophysics Data System (ADS)

The Archimedean glass bead method for determining meteorite bulk density has become widely applied. We used well-characterized, zero-porosity quartz and topaz samples to determine the systematic error in the glass bead method to support bulk density measurements of meteorites for our ongoing meteorite survey. Systematic error varies according to bead size, container size and settling method, but in all cases is less than 3%, and generally less than 2%. While measurements using larger containers (above 150 cm³) exhibit no discernible systematic error but much reduced precision, higher-precision measurements with smaller containers do exhibit systematic error. For a 77 cm³ container using 40-80 µm diameter beads, the systematic error is effectively eliminated within measurement uncertainties when a "secured shake" settling method is employed, in which the container is held securely to the shake platform during a 5 s period of vigorous shaking. For larger 700-800 µm diameter beads using the same method, bulk volumes are uniformly overestimated by 2%. Other settling methods exhibit sample-volume-dependent biases. For all methods, reliability of measurement is severely reduced for samples below ~5 cm³ (10-15 g for typical meteorites), providing a lower-limit selection criterion for measurement of meteoritical samples.
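The bookkeeping behind the bead method, including the ~2% correction reported above for large beads, might be sketched like this. The masses, volumes and function names are made-up illustrations, not survey data.

```python
# Hedged sketch of bead-method bulk-volume bookkeeping: the sample's bulk volume
# is the container volume minus the volume of beads needed to fill it with the
# sample inside. The 2% large-bead correction is the bias reported in the paper.
def bulk_volume(container_cm3, bead_fill_cm3, large_beads=False):
    """Sample bulk volume from glass-bead displacement (illustrative)."""
    v = container_cm3 - bead_fill_cm3
    if large_beads:
        v /= 1.02          # remove the ~2% systematic overestimate for 700-800 µm beads
    return v

mass_g = 30.0                          # hypothetical meteorite mass
v = bulk_volume(77.0, 68.0)            # small beads, secured-shake method
density = mass_g / v                   # bulk density, g/cm^3
print(v, density)
```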

Macke S. J., Robert J.; Britt, Daniel T.; Consolmagno S. J., Guy J.

2010-02-01

112

Some A Posteriori Error Estimators for Elliptic Partial Differential Equations

. We present three new a posteriori error estimators in the energynorm for finite element solutions to elliptic partial differential equations. Theestimators are based on solving local Neumann problems in each element. Theestimators differ in how they enforce consistency of the Neumann problems.We prove that as the mesh size decreases, under suitable assumptions, two ofthe estimators approach upper bounds on

Randolph E. Bank; Alan Weiser

1985-01-01

113

Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis

ERIC Educational Resources Information Center

Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

Sass, Daniel A.

2010-01-01

114

Adaptive Error Estimation in Linearized Ocean General Circulation Models

NASA Technical Reports Server (NTRS)

Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), using TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. 
This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
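The covariance matching idea can be illustrated on a toy problem: the sample covariance of simulated model-data residuals is "matched" by least squares to a linear combination of two known covariance structures. The structures and parameter values below are invented for illustration and are not the paper's GCM setup.

```python
# Toy covariance matching: recover two error-variance parameters by matching
# the sample covariance of residuals to known covariance structures.
import numpy as np

rng = np.random.default_rng(5)
n, m = 6, 50000
C1 = np.eye(n)                                        # e.g. measurement-error structure
x = np.arange(n)
C2 = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)   # e.g. model-error structure

a_true, b_true = 0.5, 2.0
L = np.linalg.cholesky(a_true * C1 + b_true * C2)
resid = L @ rng.normal(size=(n, m))                   # simulated model-data residuals
S = np.cov(resid)                                     # sample covariance of residuals

# Least squares match: vec(S) ≈ a*vec(C1) + b*vec(C2)
A = np.column_stack([C1.ravel(), C2.ravel()])
(a_est, b_est), *_ = np.linalg.lstsq(A, S.ravel(), rcond=None)
print(a_est, b_est)                                   # recovers the variance parameters
```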

Chechelnitsky, Michael Y.

1999-01-01

115

Drug treatment of inborn errors of metabolism: a systematic review

Background The treatment of inborn errors of metabolism (IEM) has seen significant advances over the last decade. Many medicines have been developed and the survival rates of some patients with IEM have improved. Dosages of drugs used for the treatment of various IEM can be obtained from a range of sources but tend to vary among these sources. Moreover, the published dosages are not usually supported by the level of existing evidence, and they are commonly based on personal experience. Methods A literature search was conducted to identify key material published in English in relation to the dosages of medicines used for specific IEM. Textbooks, peer reviewed articles, papers and other journal items were identified. The PubMed and Embase databases were searched for material published since 1947 and 1974, respectively. The medications found and their respective dosages were graded according to their level of evidence, using the grading system of the Oxford Centre for Evidence-Based Medicine. Results 83 medicines used in various IEM were identified. The dosages of 17 medications (21%) had grade 1 level of evidence, 61 (74%) had grade 4, two medications were at levels 2 and 3, respectively, and three had grade 5. Conclusions To the best of our knowledge, this is the first review to address this matter and the authors hope that it will serve as a quickly accessible reference for medications used in this important clinical field. PMID:23532493

Alfadhel, Majid; Al-Thihli, Khalid; Moubayed, Hiba; Eyaid, Wafaa; Al-Jeraisy, Majed

2013-01-01

116

Star-based a posteriori error estimates

We give an a posteriori error estimator for nonconforming … to residual estimators. This approach is applied in …

Paris-Sud XI, Université de

117

Variance estimation for systematic designs in spatial surveys.

In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940
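The variance advantage of systematic designs for trended populations, which motivates the estimation problem above, can be demonstrated with a toy one-dimensional survey simulation. This illustrates the design effect only, not the striplet estimator itself; all parameters are invented.

```python
# Toy simulation: for a trended density surface, a systematic sample grid with a
# random start gives a far lower variance of the mean-density estimate than
# simple random sampling of the same size.
import numpy as np

rng = np.random.default_rng(1)
N, n = 1000, 50                           # population positions and sample size
density = 1.0 + np.linspace(0.0, 4.0, N)  # strongly trended density surface

sys_means, ran_means = [], []
for _ in range(2000):
    start = rng.integers(0, N // n)
    sys_idx = start + (N // n) * np.arange(n)   # systematic grid, random start
    ran_idx = rng.choice(N, n, replace=False)   # simple random sample
    sys_means.append(density[sys_idx].mean())
    ran_means.append(density[ran_idx].mean())

print(np.var(sys_means), np.var(ran_means))     # systematic variance is far lower
```

Treating the systematic sample as if it were random would report the much larger second variance, which is the over-reporting problem the abstract describes.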

Fewster, R M

2011-12-01

118

Effects of averaging over motion and the resulting systematic errors in radiation therapy.

The potential for systematic errors in radiotherapy of a breathing patient is considered using the statistical model of Bortfeld et al (2002 Phys. Med. Biol. 47 2203-20). It is shown that although averaging over 30 fractions does result in a narrow Gaussian distribution of errors, as predicted by the central limit theorem, the fact that one or a few samples of the breathing patient's motion distribution are used for treatment planning (in contrast to the many treatment fractions that are likely to be delivered) may result in a much larger error with a systematic component. The error distribution may be particularly large if a scan at breath-hold is used for planning. PMID:16357424
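The argument above can be reproduced with a toy Monte Carlo: averaging over 30 fractions narrows the delivered-motion distribution, but a plan built from a single snapshot of the motion carries a much larger, systematic offset. The Gaussian motion model and its parameters are illustrative, not those of Bortfeld et al.

```python
# Toy Monte Carlo: delivery error (mean motion over 30 fractions) vs. planning
# error (single planning-scan sample minus the delivered mean).
import numpy as np

rng = np.random.default_rng(2)
sigma = 5.0                    # illustrative breathing displacement std dev, mm
n_frac, n_trials = 30, 10000

fractions = rng.normal(0.0, sigma, (n_trials, n_frac))
delivery_err = fractions.mean(axis=1)          # mean motion over the treatment
snapshots = rng.normal(0.0, sigma, n_trials)   # one planning-scan sample each
planning_err = snapshots - delivery_err        # plan-vs-delivery offset

print(np.std(delivery_err))   # narrow, ~ sigma/sqrt(30), as the CLT predicts
print(np.std(planning_err))   # ~ sigma, dominated by the single planning sample
```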

Evans, Philip M; Coolens, Catherine; Nioutsikou, Elena

2006-01-01

119

Geodynamo model and error parameter estimation using geomagnetic data assimilation

NASA Astrophysics Data System (ADS)

We have developed a new geomagnetic data assimilation approach which uses the minimum variance' estimate for the analysis state, and which models both the forecast (or model output) and observation errors using an empirical approach and parameter tuning. This system is used in a series of assimilation experiments using Gauss coefficients (hereafter referred to as observational data) from the GUFM1 and CM4 field models for the years 1590-1990. We show that this assimilation system could be used to improve our knowledge of model parameters, model errors and the dynamical consistency of observation errors, by comparing forecasts of the magnetic field with the observations every 20 yr. Statistics of differences between observation and forecast (O - F) are used to determine how forecast accuracy depends on the Rayleigh number, forecast error correlation length scale and an observation error scale factor. Experiments have been carried out which demonstrate that a Rayleigh number of 30 times the critical Rayleigh number produces better geomagnetic forecasts than lower values, with an Ekman number of E = 1.25 × 10-6, which produces a modified magnetic Reynolds number within the parameter domain with an `Earth like' geodynamo. The optimal forecast error correlation length scale is found to be around 90 per cent of the thickness of the outer core, indicating a significant bias in the forecasts. Geomagnetic forecasts are also found to be highly sensitive to estimates of modelled observation errors: Errors that are too small do not lead to the gradual reduction in forecast error with time that is generally expected in a data assimilation system while observation errors that are too large lead to model divergence. Finally, we show that assimilation of L ? 3 (or large scale) gauss coefficients can help to improve forecasts of the L > 5 (smaller scale) coefficients, and that these improvements are the result of corrections to the velocity field in the geodynamo model.

Tangborn, Andrew; Kuang, Weijia

2015-01-01

120

Verification of unfold error estimates in the unfold operator code

NASA Astrophysics Data System (ADS)

Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
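For a linear toy problem, the comparison described above, error-matrix propagation versus Monte Carlo with Gaussian deviates, can be sketched as follows. The response matrix and spectrum are invented; this is not the UFO code.

```python
# Toy check of an unfold-uncertainty estimate: linear error propagation through
# the inverse response matrix vs. Monte Carlo over Gaussian-perturbed data.
import numpy as np

rng = np.random.default_rng(3)
R = np.array([[1.0, 0.5, 0.1],
              [0.2, 1.0, 0.5],
              [0.1, 0.2, 1.0]])        # toy overlapping response functions
s_true = np.array([10.0, 5.0, 2.0])    # "spectrum"
d0 = R @ s_true                        # noise-free data
sigma = 0.05 * d0                      # 5% imprecision, as in the test problem

# Error-matrix (linear propagation) estimate of the unfold covariance
Rinv = np.linalg.inv(R)
cov_analytic = Rinv @ np.diag(sigma**2) @ Rinv.T

# Monte Carlo estimate (many more than 100 sets here, so the comparison is not
# limited by sampling statistics)
unfolds = np.array([Rinv @ (d0 + rng.normal(0.0, sigma)) for _ in range(20000)])
cov_mc = np.cov(unfolds.T)

print(np.diag(cov_analytic))
print(np.diag(cov_mc))                 # agrees within sampling resolution
```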

Fehl, D. L.; Biggs, F.

1997-01-01

121

Verification of unfold error estimates in the unfold operator code

Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.

Fehl, D.L.; Biggs, F. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)]

1997-01-01

122

Error estimates for CCMP ocean surface wind data sets

NASA Astrophysics Data System (ADS)

The cross-calibrated, multi-platform (CCMP) ocean surface wind data sets are now available at the Physical Oceanography Distributed Active Archive Center from July 1987 through December 2010. These data support wide-ranging air-sea research and applications. The main Level 3.0 data set has global ocean coverage (within 78°S-78°N) with 25-kilometer resolution every 6 hours. An enhanced variational analysis method (VAM) quality controls and optimally combines multiple input data sources to create the Level 3.0 data set. Data included are all available RSS DISCOVER wind observations, in situ buoys and ships, and ECMWF analyses. The VAM is set up to use the ECMWF analyses to fill in areas of no data and to provide an initial estimate of wind direction. As described in an article in the Feb. 2011 BAMS, when compared to conventional analyses and reanalyses, the CCMP winds are significantly different in some synoptic cases, result in different storm statistics, and provide enhanced high-spatial resolution time averages of ocean surface wind. We plan enhancements to produce estimated uncertainties for the CCMP data. We will apply the method of Desroziers et al. for the diagnosis of error statistics in observation space to the VAM O-B, O-A, and B-A increments. To isolate particular error statistics we will stratify the results by which individual instruments were used to create the increments. Then we will use cross-validation studies to estimate other error statistics. For example, comparisons in regions of overlap for VAM analyses based on SSMI and QuikSCAT separately and together will enable estimating the VAM directional error when using SSMI alone. Level 3.0 error estimates will enable construction of error estimates for the time averaged data sets.
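The Desroziers et al. diagnostic mentioned above can be illustrated in its simplest scalar form: with correctly specified error statistics, the expected product of the observation-minus-analysis and observation-minus-background increments equals the observation-error variance. The variances below are toy values, not the VAM system's.

```python
# Scalar illustration of the Desroziers diagnostic: E[d_oa * d_ob] = R.
import numpy as np

rng = np.random.default_rng(4)
B, R = 4.0, 1.0                      # background / observation error variances
K = B / (B + R)                      # scalar Kalman gain
M = 200000                           # sample size

xb = rng.normal(0.0, np.sqrt(B), M)  # background errors (truth = 0)
y = rng.normal(0.0, np.sqrt(R), M)   # observations
xa = xb + K * (y - xb)               # analysis

d_ob = y - xb                        # observation-minus-background (O-B)
d_oa = y - xa                        # observation-minus-analysis (O-A)
est_R = np.mean(d_oa * d_ob)         # diagnosed observation-error variance
print(est_R)                         # ≈ R when the error statistics are correct
```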

Atlas, R. M.; Hoffman, R. N.; Ardizzone, J.; Leidner, S.; Jusem, J.; Smith, D. K.; Gombos, D.

2011-12-01

123

The impact of orbital errors on the estimation of satellite clock errors and PPP

NASA Astrophysics Data System (ADS)

Precise satellite orbits and clocks are essential for providing a high-accuracy real-time PPP (Precise Point Positioning) service. However, by treating the predicted orbits as fixed, the orbital errors may be partially assimilated by the estimated satellite clock and hence impact the positioning solutions. This paper presents an analysis of the impact of errors in the radial and tangential orbital components on the estimation of satellite clocks and PPP through theoretical study and experimental evaluation. The relationship between the compensation of the orbital errors by the satellite clocks and the satellite-station geometry is discussed in detail. Based on the satellite clocks estimated with regional station networks of different sizes (~100, ~300, ~500 and ~700 km in radius), results indicated that the orbital errors compensated by the satellite clock estimates reduce as the size of the network increases. An interesting regional PPP mode based on the broadcast ephemeris and the corresponding estimated satellite clocks is proposed and evaluated through the numerical study. The impact of orbital errors in the broadcast ephemeris is shown to be negligible for PPP users in a regional network of a radius of ~300 km, with positioning RMS of about 1.4, 1.4 and 3.7 cm for the east, north and up components in the post-mission kinematic mode, comparable with 1.3, 1.3 and 3.6 cm using the precise orbits and the corresponding estimated clocks. Compared with DGPS and RTK positioning, only the estimated satellite clocks need to be disseminated to PPP users in this approach. It can significantly alleviate the communication burden and therefore can be beneficial to real-time applications.

Lou, Yidong; Zhang, Weixing; Wang, Charles; Yao, Xiuguang; Shi, Chuang; Liu, Jingnan

2014-10-01

124

A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

NASA Technical Reports Server (NTRS)

A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

Simon, Donald L.; Garg, Sanjay

2010-01-01

125

Error propagation and scaling for tropical forest biomass estimates.

The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50-hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
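If the four error sources listed above are treated as independent, they combine in quadrature. A minimal sketch with illustrative percentages (not the paper's results):

```python
# Combining independent relative error sources in quadrature for a plot-level
# AGB estimate. The percentage values are illustrative assumptions.
import math

errors_pct = {
    "measurement": 5.0,   # (i) tree measurement error
    "allometry":   20.0,  # (ii) allometric-model choice (the dominant term)
    "sampling":    10.0,  # (iii) within-plot sampling uncertainty
    "landscape":   8.0,   # (iv) representativeness of the plot network
}

total_pct = math.sqrt(sum(e**2 for e in errors_pct.values()))
print(round(total_pct, 1))   # total is dominated by the allometric term
```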

Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando

2004-01-01

126

Real-Time Estimation Of Aiming Error Of Spinning Antenna

NASA Technical Reports Server (NTRS)

Spinning-spacecraft dynamics and amplitude variations in communications links studied from received-signal fluctuations. Mathematical model and associated analysis procedure provide real-time estimates of aiming error of remote rotating transmitting antenna radiating constant power in narrow, pencillike beam from spinning platform, and current amplitude of received signal. Estimates useful in analyzing and enhancing calibration of communication system, and in analyzing complicated dynamic effects in spinning platform and antenna-aiming mechanism.

Dolinsky, Shlomo

1992-01-01

127

Systematic error sources in a measurement of G using a cryogenic torsion pendulum

This dissertation attempts to explore and quantify systematic errors that arise in a measurement of G (the gravitational constant from Newton's Law of Gravitation) using a cryogenic torsion pendulum. It begins by exploring the techniques frequently used to measure G with a torsion pendulum, features of the particular method used at UC Irvine, and the motivations behind those features. It

William Daniel Cross

2009-01-01

128

The objective of this systematic review is to analyse the relative risk reduction in medication errors and adverse drug events (ADE) achieved by computerized physician order entry (CPOE) systems. We included controlled field studies and pretest-posttest studies, evaluating all types of CPOE systems, drugs and clinical settings. We present the results in evidence tables and calculate the risk ratio with 95% confidence intervals.

ELSKE AMMENWERTH; PETRA SCHNELL-INDERST; CHRISTOF MACHAN; UWE SIEBERT

2008-01-01

129

A systematic literature review to identify and classify software requirement errors

Most software quality research has focused on identifying faults (i.e., information is incorrectly recorded in an artifact). Because software still exhibits incorrect behavior, a different approach is needed. This paper presents a systematic literature review to develop taxonomy of errors (i.e., the sources of faults) that may occur during the requirements phase of software lifecycle. This taxonomy is designed to

Gursimran Singh Walia; Jeffrey C. Carver

2009-01-01

130

Barcode Medication Administration System (BCMA) Errors: A Systematic Review

Implementation of Barcode Medication Administration (BCMA) improves the accuracy of medication administration. These systems can improve medication safety by ensuring that correct medication…

Zhou, Yaoqi

131

Test models for improving filtering with model errors through stochastic parameter estimation

The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

Gershgorin, B. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States); Harlim, J. [Department of Mathematics, North Carolina State University, NC 27695 (United States)], E-mail: jharlim@ncsu.edu; Majda, A.J. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States)

2010-01-01

132

NASA Astrophysics Data System (ADS)

Measurements of column CO2 concentration from space are now being taken at a spatial and temporal density that permits regional CO2 sources and sinks to be estimated. Systematic errors in the satellite retrievals must be minimized for these estimates to be useful, however. CO2 retrievals from the TANSO instrument aboard the GOSAT satellite are compared to similar column retrievals from the Total Carbon Column Observing Network (TCCON) as the primary method of validation; while this is a powerful approach, it can only be done for overflights of 10-20 locations and has not, for example, permitted validation of GOSAT data over the oceans or deserts. Here we present a complementary approach that uses a global atmospheric transport model and flux inversion method to compare different types of CO2 measurements (GOSAT, TCCON, surface in situ, and aircraft) at different locations, at the cost of added transport error. The measurements from any single type of data are used in a variational carbon data assimilation method to optimize surface CO2 fluxes (with a CarbonTracker prior), then the corresponding optimized CO2 concentration fields are compared to those data types not inverted, using the appropriate vertical weighting. With this approach, we find that GOSAT column CO2 retrievals from the ACOS project (versions 2.9 and 2.10) contain systematic errors that make the modeled fit to the independent data worse. However, we find that the differences between the GOSAT data and our prior model are correlated with certain physical variables (aerosol amount, surface albedo, correction to total column mass) that are likely driving errors in the retrievals, independent of CO2 concentration. If we correct the GOSAT data using a fit to these variables, then we find the GOSAT data to improve the fit to independent CO2 data, which suggests that the useful information in the measurements outweighs the negative impact of the remaining systematic errors. 
With this assurance, we compare the flux estimates given by assimilating the ACOS GOSAT retrievals to similar ones given by NIES GOSAT column retrievals, bias-corrected in a similar manner. Finally, we have found systematic differences on the order of half a ppm between column CO2 integrals from 18 TCCON sites and those given by assimilating NOAA in situ data (both surface and aircraft profile) in this approach. We assess how these differences change in switching to a newer version of the TCCON retrieval software.

Baker, D. F.; Oda, T.; O'Dell, C.; Wunch, D.; Jacobson, A. R.; Yoshida, Y.; Partners, T.

2012-12-01

133

ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

NASA Technical Reports Server (NTRS)

The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. 
Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and range rate. The observation errors considered are bias, timing, transit time, tracking station location, polar motion, solid earth tidal displacement, ocean loading displacement, tropospheric and ionospheric refraction, and space plasma. The force model elements considered are the earth's potential, the gravitational constant, solid earth tides, solar radiation pressure, earth reflected radiation, atmospheric drag, and thrust errors. The errors are propagated along the satellite orbital path. The ORAN program is written in FORTRAN IV and ASSEMBLER for batch execution and has been implemented on an IBM 360 series computer with a central memory requirement of approximately 570K of 8-bit bytes. The ORAN program was developed in 1973 and was last updated in 1980.
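The decomposition described above, measurement noise versus errors in the assumed values of the unadjusted parameters, is the classic "consider covariance" analysis. A minimal linear-algebra sketch follows; the partials matrices and uncertainty values are random stand-ins, not ORAN's actual measurement or force models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearized measurement model: y = A @ x + C @ p + noise, where x holds the
# adjusted parameters and p the unadjusted ("consider") parameters.
A = rng.normal(size=(50, 3))      # partials w.r.t. adjusted parameters
C = rng.normal(size=(50, 2))      # partials w.r.t. unadjusted parameters
sigma_noise = 0.1                 # measurement noise standard deviation
Q = np.diag([0.05**2, 0.02**2])   # a priori covariance of unadjusted params

W = np.eye(50) / sigma_noise**2   # weight matrix (inverse noise covariance)
N_inv = np.linalg.inv(A.T @ W @ A)

# Component 1: covariance of the estimate due to measurement noise alone.
P_noise = N_inv

# Component 2: "consider" covariance due to errors in unadjusted parameters.
S = -N_inv @ A.T @ W @ C          # sensitivity of estimate to those errors
P_consider = S @ Q @ S.T

P_total = P_noise + P_consider
print(np.sqrt(np.diag(P_total)))  # 1-sigma errors of the adjusted parameters
```

The second term grows with the assumed covariance Q of the unadjusted parameters; evaluating it along the orbit is the statistic ORAN computes that an ordinary orbit determination program does not.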

Putney, B.

1994-01-01

134

Estimating Filtering Errors Using the Peano Kernel Theorem

The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

Jerome Blair

2008-03-01

135

Estimating Filtering Errors Using the Peano Kernel Theorem

The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

Jerome Blair

2009-02-20

136

Concise Formulas for the Standard Errors of Component Loading Estimates.

ERIC Educational Resources Information Center

Derived formulas for the asymptotic standard errors of component loading estimates to cover the cases of principal component analysis for unstandardized and standardized variables with orthogonal and oblique rotations. Used the formulas with a real correlation matrix of 355 subjects who took 12 psychological tests. (SLD)

Ogasawara, Haruhiko

2002-01-01

137

ON THE ACCURACY OF MULTIGRID TRUNCATION ERROR ESTIMATES

Abstract. In solving boundary-value problems, multigrid methods can provide computable truncation error estimates. Although stated for finite difference discretizations, this formulation could also describe finite element or other discretizations. An approach based on the work of Schaffer leads to accurate truncation error estimates without these restrictions.

138

Note: statistical errors estimation for Thomson scattering diagnostics.

A practical way of estimating statistical errors of a Thomson scattering diagnostic measuring plasma electron temperature and density is described. Analytically derived expressions are successfully tested with Monte Carlo simulations and implemented in an automatic data processing code of the JET LIDAR diagnostic. PMID:23025622
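The validation strategy described, testing analytically derived error expressions against Monte Carlo simulation, can be illustrated with a toy estimate. The two-channel signal ratio below is a hypothetical stand-in (a typical building block of a temperature estimate), not the JET LIDAR analysis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-channel signals and their noise standard deviations.
s1, s2 = 1000.0, 400.0
sig1, sig2 = 30.0, 20.0

# Analytic (first-order) error of the ratio R = s1/s2.
R = s1 / s2
sigma_R_analytic = R * np.sqrt((sig1 / s1) ** 2 + (sig2 / s2) ** 2)

# Monte Carlo check with Gaussian deviates on both channels.
n = 200_000
r = (s1 + sig1 * rng.normal(size=n)) / (s2 + sig2 * rng.normal(size=n))
sigma_R_mc = r.std()

print(sigma_R_analytic, sigma_R_mc)  # should agree within a few percent
```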

Maslov, M; Beurskens, M N A; Flanagan, J; Kempenaars, M

2012-09-01

139

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. Ralf Hartmann. Abstract. Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients. AMS subject classifications: 65N12, 65N15, 65N30.

Hartmann, Ralf

140

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS

MULTITARGET ERROR ESTIMATION AND ADAPTIVITY IN AERODYNAMIC FLOW SIMULATIONS. Ralf Hartmann. Abstract. Important quantities in aerodynamic flow simulations are the aerodynamic force coefficients, computed here for the compressible Navier-Stokes equations. AMS subject classifications: 65N12, 65N15, 65N30.

Hartmann, Ralf

141

Condition and Error Estimates in Numerical Matrix Computations

This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

Konstantinov, M. M. [University of Architecture, Civil Engineering and Geodesy, 1046 Sofia (Bulgaria); Petkov, P. H. [Technical University of Sofia, 1000 Sofia (Bulgaria)

2008-10-30

142

Background error covariance estimation for atmospheric CO2 data assimilation

NASA Astrophysics Data System (ADS)

In any data assimilation framework, the background error covariance statistics play the critical role of filtering the observed information and determining the quality of the analysis. For atmospheric CO2 data assimilation, however, the background errors cannot be prescribed via traditional forecast or ensemble-based techniques as these fail to account for the uncertainties in the carbon emissions and uptake, or for the errors associated with the CO2 transport model. We propose an approach where the differences between two modeled CO2 concentration fields, based on different but plausible CO2 flux distributions and atmospheric transport models, are used as a proxy for the statistics of the background errors. The resulting error statistics: (1) vary regionally and seasonally to better capture the uncertainty in the background CO2 field, and (2) have a positive impact on the analysis estimates by allowing observations to adjust predictions over large areas. A state-of-the-art four-dimensional variational (4D-VAR) system developed at the European Centre for Medium-Range Weather Forecasts (ECMWF) is used to illustrate the impact of the proposed approach for characterizing background error statistics on atmospheric CO2 concentration estimates. Observations from the Greenhouse gases Observing SATellite "IBUKI" (GOSAT) are assimilated into the ECMWF 4D-VAR system along with meteorological variables, using both the new error statistics and those based on a traditional forecast-based technique. Evaluation of the four-dimensional CO2 fields against independent CO2 observations confirms that the performance of the data assimilation system improves substantially in the summer, when significant variability and uncertainty in the fluxes are present.
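A minimal sketch of the proxy idea, with random arrays standing in for real modeled CO2 fields: the spread of the difference between two plausible model runs yields a regionally varying background error standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two plausible modeled CO2 fields (time, lat, lon) from different flux
# priors / transport models -- synthetic stand-ins here.
nt, nlat, nlon = 120, 18, 36
field_a = 395.0 + rng.normal(0.0, 1.0, size=(nt, nlat, nlon)).cumsum(axis=0) * 0.05
field_b = field_a + rng.normal(0.0, 0.8, size=(nt, nlat, nlon))

# Use the difference of the two fields as a proxy for background error,
# estimating a per-grid-point error standard deviation.
diff = field_a - field_b
bg_err_std = diff.std(axis=0)      # shape (nlat, nlon), varies regionally

print(bg_err_std.mean())           # close to 0.8 by construction of this toy
```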

Chatterjee, Abhishek; Engelen, Richard J.; Kawa, Stephan R.; Sweeney, Colm; Michalak, Anna M.

2013-09-01

143

Discretization error estimation and exact solution generation using the method of nearby problems.

The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
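For contrast, Richardson extrapolation, the baseline the study compares against, estimates discretization error from two systematically refined grids. A minimal sketch for a second-order finite difference (the function and step sizes are illustrative):

```python
import numpy as np

# Richardson error estimate for a central difference of f(x) = sin(x).
def dfdx_central(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

x0, p = 0.7, 2                       # p: formal order of accuracy
h = 0.1
f_h  = dfdx_central(np.sin, x0, h)   # coarse-grid solution
f_h2 = dfdx_central(np.sin, x0, h / 2)  # refined-grid solution

# Estimated discretization error of the fine-grid solution:
err_est = (f_h2 - f_h) / (2**p - 1)
err_true = np.cos(x0) - f_h2         # exact derivative is cos(x0)

print(err_est, err_true)  # estimate tracks the true discretization error
```

This is what requires the additional systematically refined grid; MNP/defect correction instead needs only one extra solve on the same grid.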

Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)

2011-10-01

144

Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
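The correction strategy (fit a model of the systematic parent-ion mass error from confident identifications, then subtract it) can be sketched as follows. The linear ppm-versus-m/z error model and the synthetic peptide masses are illustrative assumptions, not DtaRefinery's actual model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical confident peptide IDs: theoretical m/z plus a systematic ppm
# error that drifts linearly with m/z, plus random measurement noise.
mz_theo = rng.uniform(400, 2000, size=500)
true_ppm_error = 2.0 + 0.003 * mz_theo
mz_meas = mz_theo * (1 + (true_ppm_error + rng.normal(0, 0.5, 500)) * 1e-6)

# Fit the systematic error model (ppm error vs m/z) and remove it.
ppm_err = (mz_meas - mz_theo) / mz_theo * 1e6
coef = np.polyfit(mz_theo, ppm_err, 1)           # simple linear model
mz_corr = mz_meas / (1 + np.polyval(coef, mz_theo) * 1e-6)

resid_ppm = (mz_corr - mz_theo) / mz_theo * 1e6
print(ppm_err.std(), resid_ppm.std())  # residual spread shrinks after correction
```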

Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

2009-12-16

145

On the correspondence between short- and long-timescale systematic errors in the TAMIP and AMIP

NASA Astrophysics Data System (ADS)

The correspondence between short- and long-term systematic errors in climate models from the transpose-AMIP (TAMIP, short-term hindcasts) and AMIP (long-term free running) archives is systematically examined with a focus on precipitation, clouds, and radiation. The TAMIP data are based on 16 5-day hindcast ensembles from the tamip200907 experiment during YOTC, and the AMIP data are based on the July-August mean of 1979-2008. Our results suggest that most systematic errors apparent in the long-term climate runs, particularly those associated with moist processes, also appear in the hindcasts in all the climate models (CAM4, CAM5, CNRM5, HadGEM2-A, IPSL, and MIROC5). The errors, especially in CAM4/5 and MIROC5, grow with the hindcast lead time and typically saturate after a few days of hindcasts with amplitudes comparable to the climate errors. Examples are excessive precipitation in much of the tropics and an overestimate of net absorbed shortwave radiation in the stratocumulus cloud decks over the eastern subtropical ocean and the Southern Ocean at about 60°S. This suggests that these systematic errors likely resulted from model parameterizations since large-scale flows remain close to observations in the first few days of the hindcasts. We will also discuss possible issues of initial spin-up and ensemble members for hindcast experiments in this presentation. (This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.)

Ma, H.; Xie, S.; Boyle, J. S.; Klein, S. A.

2012-12-01

146

Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

NASA Technical Reports Server (NTRS)

Numerical simulation has now become an integral part of engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design processes. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

2002-01-01

147

MEAN SQUARED ERROR ESTIMATION FOR SMALL AREAS WHEN THE SMALL AREA VARIANCES ARE ESTIMATED

Louis-Paul Rivest and Nathalie Vandal, Université Laval, Département de mathématiques et de statistique. The approach uses the conditional mean squared error estimator of Rivest and Belmonte (2000) as an intermediate

Rivest, Louis-Paul

148

NASA Technical Reports Server (NTRS)

Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

2013-01-01

149

Frequency error estimation for frequency comb multiple access demodulation

NASA Astrophysics Data System (ADS)

Frequency comb multiple access (FCMA) has been suggested as an efficient means of multiple access for satellite communication networks. The authors suggest a new algorithm for rapid FCMA frequency error estimation with excellent performance at low Eb/N0 and heavy interchannel interference. This frequency correction must be achieved before acquisition of timing and phase. The performance of the algorithm is also given.

Reisenfeld, S.; Kumar, A.

1994-04-01

150

Systematic error sources in a measurement of G using a cryogenic torsion pendulum

NASA Astrophysics Data System (ADS)

This dissertation attempts to explore and quantify systematic errors that arise in a measurement of G (the gravitational constant from Newton's Law of Gravitation) using a cryogenic torsion pendulum. It begins by exploring the techniques frequently used to measure G with a torsion pendulum, features of the particular method used at UC Irvine, and the motivations behind those features. It proceeds to describe the particular apparatus used in the UCI G measurement, and the formalism involved in a gravitational torsion pendulum experiment. It then describes and quantifies the systematic errors that have arisen, particularly those that arise from the torsion fiber and from the influence of ambient background gravitational, electrostatic, and magnetic fields. The dissertation concludes by presenting the value of G that the lab has reported.

Cross, William Daniel

151

A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction from wide-angle sonar sensor data of navigation maps for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.

Beckerman, M.; Oblow, E.M.

1988-04-01

152

NASA Technical Reports Server (NTRS)

The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

Larson, T. J.; Ehernberger, L. J.

1985-01-01

153

We consider the errors introduced by speckle pattern statistics of a diffusing target in the measurement of large displacements made with a self-mixing interferometer (SMI), with sub-λ resolution and a range up to meters. As the source on the target side, we assume a diffuser with randomly distributed roughness. Two cases are considered: (i) a developing randomness in z-height profile, with standard deviation σz increasing from ≈0 to ≈λ and uncorrelated spatially (x,y), and (ii) a fully developed z-height randomness (σz ≫ λ) but spatially correlated with various correlation sizes ρ(x,y). We find that systematic and random errors of all types of diffusers converge to those of a uniformly illuminated diffuser, independent of the actual profile of radiant emittance and phase distribution, when the standard deviation σz is increased or the scale of correlation ρ(x,y) is decreased. This convergence is a sign of speckle statistics development, as all distributions end up with the same errors of the fully developed diffuser. Convergence is earlier for a Gaussian-distributed amplitude than for other spot distributions. As an application of the simulation results, we plot systematic and random errors of SMI measurements of displacement versus distance, for different source distributions, standard deviations, and correlations, both for intra- and inter-speckle displacements. PMID:25090316

Donati, Silvano; Martini, Giuseppe

2014-08-01

154

NASA Astrophysics Data System (ADS)

We present the results of recent work seeking to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we show that if the uncertainties on the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters will be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We show that template incompleteness, a major cause of inaccuracy in this process, is "flagged" by a large fraction of outliers in redshift and that it can be corrected by using more flexible stellar population models. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multidimensional probability distribution function in SED fitting + z parameter space, including all correlations.

Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric J.

2015-01-01

155

Estimating the coverage of mental health programmes: a systematic review

Background The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results We identified 15 735 unique records of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys. PMID:24760874
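The low-cost estimate the review recommends combines routine service-utilization counts with a survey-based prevalence for the denominator. With made-up numbers:

```python
# Contact coverage: people in contact with services / population in need.
population = 1_000_000
prevalence = 0.04                   # from a national survey (illustrative)
in_need = population * prevalence   # denominator: target population
in_contact = 6_000                  # numerator: routine service-utilization data

contact_coverage = in_contact / in_need
print(f"{contact_coverage:.1%}")    # -> 15.0%
```

Effective coverage would further restrict the numerator to those receiving appropriate and effective care, so it is never larger than contact coverage.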

De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram

2014-01-01

156

Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models

NASA Astrophysics Data System (ADS)

Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentation of physical processes, which can be amplified by feedbacks among climate components, especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention to the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and the location of strong SST gradients. Next, the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. 
The results suggest that improving the simulation of regional processes may not suffice for a more successful CGCM performance, as the effects of remote biases may override them. Therefore, efforts to reduce CGCM errors cannot be narrowly focused on particular regions.

Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin

2014-05-01

157

Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

ERIC Educational Resources Information Center

The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

Hoshino, Takahiro; Shigemasu, Kazuo

2008-01-01

158

NASA Astrophysics Data System (ADS)

Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.

Coakley, K. J.; Dewey, M. S.; Yue, A. T.; Laptev, A. B.

2009-12-01

159

Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

NASA Technical Reports Server (NTRS)

Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
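The qualitative finding, that statistical error shrinks as 1/SNR while a waveform-model bias stays fixed, implies a crossover SNR above which the bias dominates. A few lines locate it; the error magnitudes below are illustrative, not the paper's values:

```python
import numpy as np

# Statistical (noise) error scales as 1/SNR; a waveform-model systematic
# bias is independent of SNR. Find where the bias starts to dominate.
sigma_at_snr10 = 0.05    # hypothetical 1-sigma statistical error at SNR = 10
bias = 0.01              # hypothetical fixed systematic bias (same units)

snr = np.linspace(10, 1000, 1000)
stat_err = sigma_at_snr10 * 10.0 / snr

crossover = snr[np.argmax(bias > stat_err)]
print(crossover)         # bias exceeds statistical error above roughly SNR 50
```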

Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

2012-01-01

160

A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series

NASA Astrophysics Data System (ADS)

In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal, and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
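For Gaussian PDFs, the MAP co-trend fit has the closed form of a regularized least squares: the prior on the fit coefficients brakes runaway fitting. A minimal sketch with synthetic basis vectors, not the actual Kepler PDC basis or priors:

```python
import numpy as np

rng = np.random.default_rng(4)

# Co-trend a light curve against basis vectors B with a Gaussian prior on
# the coefficients (closed-form MAP) instead of unconstrained least squares.
n, k = 500, 4
B = rng.normal(size=(n, k))               # hypothetical co-trending basis
theta_true = np.array([1.0, -0.5, 0.2, 0.0])
flux = B @ theta_true + 0.1 * rng.normal(size=n)

sigma2 = 0.1**2                           # photometric noise variance
mu0 = np.zeros(k)                         # prior mean (from quiet stars)
P0 = np.eye(k) * 0.5**2                   # prior covariance (from quiet stars)

# MAP solution for Gaussian likelihood and Gaussian prior:
lhs = B.T @ B / sigma2 + np.linalg.inv(P0)
rhs = B.T @ flux / sigma2 + np.linalg.inv(P0) @ mu0
theta_map = np.linalg.solve(lhs, rhs)

corrected = flux - B @ theta_map          # systematics-removed light curve
print(theta_map)
```

When the prior covariance is large the fit reduces to ordinary LS; a tight prior keeps the coefficients in the "reasonable" range even for intrinsically variable stars.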

Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team

2011-01-01

161

Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

NASA Technical Reports Server (NTRS)

This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on the canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is made also using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area-factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season December-January-February precipitation, the spatial pattern correlations between the observed and predicted fields are positive in 46 of the 50 experiment years. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
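The dependence of the optimal ensemble weights on each member's mean square error can be illustrated for the simplest case of independent, unbiased forecasts, where the optimal weights are proportional to inverse MSE (a standard result, not the memorandum's full spectral scheme):

```python
import numpy as np

rng = np.random.default_rng(5)

# Three independent unbiased forecasts of the same truth, with known
# per-member error variances (their mean square errors).
truth = rng.normal(size=2000)
mse = np.array([0.4, 0.9, 1.6])
forecasts = truth + np.sqrt(mse)[:, None] * rng.normal(size=(3, 2000))

# Optimal linear combination: weights proportional to inverse MSE.
w = (1.0 / mse) / (1.0 / mse).sum()
ensemble = w @ forecasts

mse_members = ((forecasts - truth) ** 2).mean(axis=1)
mse_ensemble = ((ensemble - truth) ** 2).mean()
print(mse_members, mse_ensemble)  # the weighted ensemble beats every member
```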

Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

2001-01-01

162

Improved Soundings and Error Estimates using AIRS/AMSU Data

NASA Technical Reports Server (NTRS)

AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next-generation polar-orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice-daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud-related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm, which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud-cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear-column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

Susskind, Joel

2006-01-01

163

Verification of unfold error estimates in the UFO code

Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error), obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low-energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
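The comparison between the built-in (error-matrix) uncertainty and the Monte Carlo spread can be sketched for a toy linear unfold. The response matrix and spectrum below are illustrative stand-ins, not the UFO problem itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy response matrix (4 channels x 3 spectral bins) and true spectrum
R = np.array([[1.0, 0.5, 0.1],
              [0.3, 1.0, 0.4],
              [0.1, 0.6, 1.0],
              [0.2, 0.2, 0.8]])
s_true = np.array([3.0, 2.0, 1.0])
d0 = R @ s_true
sigma = 0.05 * d0            # 5% measurement imprecision, as in the abstract

# error-matrix estimate: cov = (R^T W R)^{-1} with W = diag(1/sigma^2)
W = np.diag(1.0 / sigma**2)
cov = np.linalg.inv(R.T @ W @ R)
analytic_std = np.sqrt(np.diag(cov))

# Monte Carlo estimate: unfold many perturbed data sets, take the spread
unfolds = []
for _ in range(2000):
    d = d0 + rng.normal(0.0, sigma)
    s, *_ = np.linalg.lstsq(R * (1.0 / sigma)[:, None], d / sigma, rcond=None)
    unfolds.append(s)
mc_std = np.std(unfolds, axis=0)

print(analytic_std, mc_std)   # the two estimates should agree closely
```

For this well-determined linear problem the two estimates must coincide up to sampling noise; the Monte Carlo route remains available when the error-matrix method does not apply.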

Fehl, D.L.; Biggs, F.

1996-07-01

164

Letter: A Reassessment of the Systematic Gravitational Error in the LARES Mission

NASA Astrophysics Data System (ADS)

In this letter we reexamine the evaluation of the error due to the even zonal harmonics of the geopotential in some proposed tests of relativistic gravitomagnetism with existing and proposed laser-ranged LAGEOS-like satellites in the gravitational field of the Earth. This is particularly important because the error due to the even zonal harmonics of the geopotential is one of the major sources of systematic error in this kind of measurement. A conservative, although maybe pessimistic, approach is followed by using only the diagonal part of the covariance matrix of the EGM96 Earth gravity model up to degree l = 20. It turns out that, within this context and according to the present level of knowledge of the terrestrial gravitational field, the best choice would be the use of a recently proposed combination which involves the nodes Ω of LAGEOS, LAGEOS II and LARES and the perigees ω of LAGEOS II and LARES. Indeed, it turns out that the unavoidable orbital injection errors in the inclination of LARES would not affect the gravitational error, which would also be insensitive to the correlations among the even zonal harmonics of the geopotential.

Iorio, Lorenzo

2003-07-01

165

Precision calibration and systematic error reduction in the long trace profiler

The long trace profiler (LTP) has become the instrument of choice for surface figure testing and slope error measurement of mirrors used for synchrotron radiation and x-ray astronomy optics. In order to achieve highly accurate measurements with the LTP, systematic errors need to be reduced by precise angle calibration and accurate focal plane position adjustment. A self-scanning method is presented to adjust the focal plane position of the detector with high precision by use of a pentaprism scanning technique. The focal plane position can be set to better than 0.25 mm for a 1250-mm-focal-length Fourier-transform lens using this technique. A 0.03-arcsec-resolution theodolite, combined with the sensitivity of the LTP detector system, can be used to calibrate the angular linearity error very precisely. Some suggestions are introduced for reducing the systematic error. With these precision calibration techniques, accuracy in the measurement of figure and slope error on meter-long mirrors is now at a level of about 1 µrad rms over the whole testing range of the LTP. (c) 2000 Society of Photo-Optical Instrumentation Engineers.

Qian, Shinan; Sostero, Giovanni [Sincrotrone Trieste, 34012 Basovizza, Trieste (Italy)]; Takacs, Peter Z. [Brookhaven National Laboratory, Building 535B, Upton, New York 11973 (United States)]

2000-01-01

166

Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

NASA Technical Reports Server (NTRS)

Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
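The equation-error approach itself reduces to linear regression of the measured state derivatives on the states and controls. A minimal sketch on simulated linear dynamics (a toy system, not the F-16 model) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# simulated "flight data": linear dynamics x_dot = a*x + b*u, truth known
a_true, b_true = -0.8, 2.0
t = np.linspace(0.0, 10.0, 500)
u = np.sin(t)
x = np.zeros_like(t)
dt = t[1] - t[0]
for k in range(len(t) - 1):              # simple Euler integration
    x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * u[k])

# equation-error setup: regress the (noisy) state derivative on [x, u]
xdot = np.gradient(x, dt) + rng.normal(0.0, 0.01, len(t))
A = np.column_stack([x, u])
theta, *_ = np.linalg.lstsq(A, xdot, rcond=None)
print(theta)    # estimates of (a, b), close to (-0.8, 2.0)
```

Differentiating the noisy time series (here with a simple central difference) is exactly the practical issue the paper examines: derivative noise and integration error bias the least-squares estimates slightly away from truth.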

Morelli, Eugene A.

2006-01-01

167

A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

NASA Technical Reports Server (NTRS)

A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
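The underlying selection problem can be sketched, for a static linear estimator, as a brute-force search for the sensor subset minimizing the trace of the error covariance (A-optimality). The matrix below is random and illustrative, not an engine model:

```python
import itertools
import numpy as np

def estimation_mse(H, noise_var=1.0):
    """Trace of the least-squares error covariance for sensor matrix H."""
    return noise_var * np.trace(np.linalg.inv(H.T @ H))

def select_sensors(H, k):
    """Brute-force A-optimal choice of k rows (sensors) of H."""
    best, best_mse = None, np.inf
    for rows in itertools.combinations(range(H.shape[0]), k):
        Hs = H[list(rows)]
        if np.linalg.matrix_rank(Hs) < H.shape[1]:
            continue                  # this subset cannot observe all states
        mse = estimation_mse(Hs)
        if mse < best_mse:
            best, best_mse = rows, mse
    return best, best_mse

rng = np.random.default_rng(2)
H = rng.normal(size=(6, 3))           # 6 candidate sensors, 3 health parameters
rows, mse = select_sensors(H, 4)
print(rows, mse)                      # best 4-sensor suite and its MSE
```

The paper's Kalman-filter formulation generalizes this static picture to dynamic estimation and to the underdetermined case, where the choice of tuning parameters matters as much as the choice of sensors.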

Simon, Donald L.; Garg, Sanjay

2009-01-01

168

Minimizing critical layer systematic alignment errors during non-dedicated processing

NASA Astrophysics Data System (ADS)

For 150 nm and smaller half-pitch geometries, many DRAM manufacturers frequently employ a dedicated exposure tool strategy for processing of most critical layers. Individual die tolerances of less than 40 nm are not uncommon for such compact geometries, and a method is needed to reduce systematic overlay errors. The dedication strategy relies on the premise that a component of the systematic error induced by the inefficiencies of the exposure tool encountered at a specific layer can be diminished by re-exposing subsequent layer(s) on the same tool, thus canceling out a large component of this error. In the past this strategy has, in general, resulted in better overall alignment performance, better exposure tool modeling and decreased residual modeling errors. Increased alignment performance due to dedication does not come without its price. In such a dedicated strategy, wafers are committed to processing on the same tool at subsequent lithographic layers, decreasing manufacturing flexibility and in turn affecting cost through increased processing cycle time. Tool down-events and equipment upgrades requiring significant downtime can also have a significant negative impact on the running of a factory. This paper presents volume results for the 140 nm and 110 nm half-pitch geometries, using state-of-the-art systems with 248 nm and 193 nm exposure wavelengths respectively, showing that dedicated processing still produces superior overlay and device performance results when compared blindly against non-dedicated processing. Results also show that, at a given time, an acceptable match may be found that produces near-equivalent results under non-dedicated processing. Changes in alignment capability are also observed after major equipment maintenance and component replacement.
A point-in-time predictor strategy utilizing residual modeling errors and a set of modified performance specifications is directly compared against measured overlay data after patterning, against within field AFOV measurements after etching of the pattern and to final device performance.
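The systematic overlay component that dedication cancels is conventionally captured by a linear model (translation, magnification, rotation) fitted to measured misregistration. A hedged sketch with synthetic data, assuming that standard linear model:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic overlay data: field positions (mm), misregistration (nm)
x = rng.uniform(-100, 100, 40)
y = rng.uniform(-100, 100, 40)
Tx, Ty = 5.0, -3.0           # translation terms, nm
M = 0.02                     # magnification, nm per mm
R = 0.01                     # rotation, nm per mm
dx = Tx + M * x - R * y + rng.normal(0, 1.0, 40)
dy = Ty + M * y + R * x + rng.normal(0, 1.0, 40)

# least-squares fit of the translation/magnification/rotation model
A = np.column_stack([np.ones_like(x), x, -y])
px, *_ = np.linalg.lstsq(A, dx, rcond=None)
B = np.column_stack([np.ones_like(y), y, x])
py, *_ = np.linalg.lstsq(B, dy, rcond=None)
residual = dx - A @ px
print(px, py, residual.std())   # recovered (Tx, M, R) and residual spread
```

The residuals after removing the modeled terms are what a point-in-time predictor strategy, as described above, would track against modified performance specifications.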

Jekauc, Igor; Roberts, William R.

2004-05-01

169

Systematic errors of mapping functions which are based on the VMF1 concept

NASA Astrophysics Data System (ADS)

Precise GNSS positioning requires an accurate Mapping Function (MF) to model the tropospheric delay. To date the most accurate MF is the Vienna Mapping Function 1 (VMF1). It utilizes data from a numerical weather model known for high predictive skill (the Integrated Forecast System of the European Centre for Medium-Range Weather Forecasts). Still, the VMF1, or any other MF based on the VMF1 concept, is a parameterized mapping approach, which means that it is tuned for specific elevation angles, station altitudes and orbital altitudes. In this study we analyse the systematic errors caused by such tuning on a global scale. We find that the parameterization of the station altitude dependency in particular is a major concern for airborne applications. For the moment we do not provide an improved parameterized mapping approach to mitigate the systematic errors; instead we propose a rapid, direct and therefore error-free mapping approach, the so-called Potsdam Mapping Factors (PMFs).
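Parameterized mapping functions of the VMF1 family use a continued-fraction form in the sine of the elevation angle, normalized to unity at zenith. A sketch of that form follows; the coefficients are illustrative placeholders, not actual VMF1 values:

```python
import math

def mapping_factor(elev_rad, a, b, c):
    """Continued-fraction mapping function (Herring form), normalized
    so that the factor equals 1 at zenith."""
    s = math.sin(elev_rad)
    top = 1.0 + a / (1.0 + b / (1.0 + c))
    bot = s + a / (s + b / (s + c))
    return top / bot

# illustrative hydrostatic-like coefficients (not real VMF1 coefficients)
a, b, c = 1.2e-3, 2.9e-3, 62.6e-3
print(mapping_factor(math.radians(90), a, b, c))   # 1.0 at zenith
print(mapping_factor(math.radians(5), a, b, c))    # roughly 10 at 5 degrees
```

The tuning the abstract criticizes lives in how a, b and c are fitted as functions of site coordinates, altitude and weather-model data; the direct PMF approach evaluates the delay ratio without this intermediate parameterization.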

Zus, Florian; Dick, Galina; Dousa, Jan; Wickert, Jens

2014-05-01

170

Estimation and sample size calculations for correlated binary error rates of biometric in FARs and FRRs is the need to determine the sample size necessary to estimate a given error rate to within a specified margin of error, e.g. Snedecor and Cochran (1995). Sample size calculations exist

Schuckers, Michael E.

171

A non-line-of-sight error mitigation algorithm in location estimation

The location estimation of mobile telephones is of great current interest. The two sources of range measurement errors in geolocation techniques are measurement error and non-line-of-sight (NLOS) error. NLOS errors, which arise from the blocking of direct paths, have been considered a killer issue in location estimation. In this paper we develop an algorithm to mitigate the NLOS

Pi-Chun Chen

1999-01-01

172

NASA Astrophysics Data System (ADS)

Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for its use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of the land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been made to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.

Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.

2011-12-01

173

Nonlocal treatment of systematic errors in the processing of sparse and incomplete sensor data

A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse and incomplete sensor data. We present a detailed application of this methodology to the construction of navigation maps from wide-angle sonar sensor data acquired by the HERMIES IIB mobile robot. Our uncertainty approach is explicitly nonlocal. We use a binary labelling scheme and a simple logic for the rule of combination. We then correct erroneous interpretations of the data by analyzing pixel patterns of conflict and by imposing consistent labelling conditions. 9 refs., 6 figs.

Beckerman, M.; Oblow, E.M.

1988-03-01

174

NASA Technical Reports Server (NTRS)

Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

1999-01-01

175

Distortions of the extinction coefficient profile caused by systematic errors in lidar data.

The influence of systematic errors in lidar data on the retrieved particulate extinction coefficient profile in clear atmospheres is investigated. In particular, two sources of extinction coefficient profile distortion are analyzed: (1) a zero-line offset remaining after subtraction of an inaccurately determined signal background component, and (2) far-end incomplete overlap due to poor adjustment of the lidar system optics. Inversion results for simulated lidar signals, obtained with the near- and far-end solutions, are presented that show the advantages of the near-end solution for clear atmospheres. PMID:15176212

Kovalev, Vladimir A

2004-05-20

176

A PRIORI ERROR ESTIMATES FOR NUMERICAL METHODS FOR SCALAR CONSERVATION LAWS.

This report is Part III of a series in which a general theory of a priori error estimates for numerical methods for scalar conservation laws is developed. Keywords: a priori error estimates, irregular grids, monotone schemes, conservation laws, supraconvergence.

177

High-quality hydrographic sections occupied during the World Ocean Circulation Experiment (WOCE) have allowed the first estimates to be made of property changes in the deep ocean on a decadal time-scale. The magnitude of the property variability on deep isothermal surfaces (below about 2–3°C) was found to be comparable with the magnitude of possible systematic errors in the data (except for

V. V Gouretski; K Jancke

2000-01-01

178

A-posteriori estimation and adaptive control of the error in the solution quantity of interest

[List-of-figures residue: the numbering of elements in windows A and B of the mesh; examples illustrating the local error indicators, showing distributions of the total error, the local error, and the estimated local error using ER2B3, ZZ-SPR and HR for windows A and B.]

Datta, Dibyendu Kumar

2012-06-07

179

NASA Astrophysics Data System (ADS)

Context. The wavefront aberrations due to optical surface errors in adaptive optics systems and science instruments can be a significant error source for high precision astrometry. Aims: This report derives formulas for evaluating these errors which may be useful in developing astrometry error budgets and optical surface quality specifications. Methods: A Fourier domain approach is used, and the errors on each optical surface are modeled as "phase screens" with stationary statistics at one or several conjugate ranges from the optical system pupil. Three classes of error are considered: (i) errors in initially calibrating the effects of static surface errors; (ii) the effects of beam translation, or "wander," across optical surfaces due to (for example) instrument boresighting error; and (iii) quasistatic surface errors which change from one observation to the next. Results: For each of these effects, we develop formulas describing the position estimation errors in a single observation of a science field, as well as the differential error between two separate observations. Sample numerical results are presented for the three classes of error, including some sample computations for the Thirty Meter Telescope and the NFIRAOS first-light adaptive optics system.

Ellerbroek, B.

2013-04-01

180

Limited resolution in chemistry transport models (CTMs) is necessarily associated with systematic errors in the calculated chemistry, due to the artificial mixing of species on the scale of the model grid (grid-averaging). Here, the errors in calculated hydroxyl radical (OH) concentrations and ozone production rates are investigated quantitatively using both direct observations and model results. Photochemical steady-state models of

J. G. Esler; G. J. Roelofs; M. O. Köhler; F. M. O'Connor

2004-01-01

181

Production models are used in fisheries when only a time series of catch and abundance indices are available. Observation-error estimators are commonly used to fit the models to the data with a least squares type of objective function. An assumption associated with observation-error estimators is that errors occur only in the observed abundance index but not in the dynamics of

Y. Chen; N. Andrew

1998-01-01

182

Error estimation for CFD aeroheating prediction under rarefied flow condition

NASA Astrophysics Data System (ADS)

Both direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) methods have become widely used for aerodynamic prediction as reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in the aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of this parameter, compared with two other parameters, Kn∞ and Ma∞Kn∞.

Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

2014-12-01

183

Anisotropic discretization and model-error estimation in solid mechanics by local Neumann problems

First, a survey of existing residuum-based error-estimators and error-indicators is given. Generally, residual error estimators (which have at least upper bound in contrast to indicators) can be locally computed from residua of equilibrium and stress-jumps at element interfaces using Dirichlet or Neumann conditions for element patches or individual elements (REM). Another equivalent method for error estimation can be derived from

E. Stein; S. Ohnimus

1999-01-01

184

Effects of measurement error on horizontal hydraulic gradient estimates.

During the design of a natural gradient tracer experiment, it was noticed that the hydraulic gradient was too small to measure reliably on an approximately 500 m^2 site. Additional wells were installed to increase the monitored area to 26,500 m^2, and wells were instrumented with pressure transducers. The resulting monitoring system was capable of measuring heads with a precision of ±1.3 × 10^-2 m. This measurement error was incorporated into Monte Carlo calculations, in which only hydraulic head values were varied between realizations. The standard deviation in the estimated gradient and the flow direction angle from the x-axis (east direction) were calculated. The data yielded an average hydraulic gradient of 4.5 × 10^-4 ± 25% with a flow direction of 56 degrees southeast ± 18 degrees, with the variations representing 1 standard deviation. Further Monte Carlo calculations investigated the effects of the number of wells, the aspect ratio of the monitored area, and the size of the monitored area on the previously mentioned uncertainties. The exercise showed that monitored areas must exceed a size determined by the magnitude of the measurement error if meaningful gradient estimates and flow directions are to be obtained. The aspect ratio of the monitored zone should be as close to 1 as possible, although departures as great as 0.5 to 2 did not degrade the quality of the data unduly. Numbers of wells beyond three to five provided little advantage. These conclusions were supported for the general case with a preliminary theoretical analysis. PMID:17257340
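The Monte Carlo procedure described, perturbing the measured heads and refitting a plane to obtain gradient and direction uncertainties, can be sketched as follows. The well layout and gradient values are hypothetical, chosen only to resemble the magnitudes quoted:

```python
import numpy as np

rng = np.random.default_rng(4)

# hypothetical well layout (m) over a ~26,500 m^2 area, true head plane
xy = np.array([[0, 0], [160, 0], [0, 165], [160, 165], [80, 82]], dtype=float)
grad_true = np.array([-3.0e-4, -3.35e-4])       # true head-gradient components
h_true = 10.0 + xy @ grad_true

A = np.column_stack([np.ones(len(xy)), xy])     # design matrix for a plane fit
sigma_h = 1.3e-2                                # transducer precision (m)

mags, angles = [], []
for _ in range(5000):
    h = h_true + rng.normal(0.0, sigma_h, len(h_true))
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    gx, gy = coef[1], coef[2]
    mags.append(np.hypot(gx, gy))
    # flow is down-gradient; angle measured from the x-axis (east)
    angles.append(np.degrees(np.arctan2(-gy, -gx)))

print(np.mean(mags), np.std(mags) / np.mean(mags))  # gradient, relative spread
print(np.mean(angles), np.std(angles))              # flow direction and spread
```

With head noise at the quoted 10^-2 m level and a gradient of a few times 10^-4, the relative spread comes out at tens of percent, consistent with the scale of the uncertainties reported above.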

Devlin, J F; McElwee, C D

2007-01-01

185

NASA Astrophysics Data System (ADS)

Scanning force microscopy (SFM) is capable of imaging surfaces with resolution on a nanometer scale. This method therefore plays an important role in characterizing radiation-induced defects in solids, complementing methods like transmission electron microscopy, small-angle X-ray scattering and optical spectroscopy, to name a few. In particular, the SFM inspection of ionic single crystals irradiated with energetic heavy ions revealed minute hillocks. The aim of determining the size and shape of these ion tracks as a function of parameters such as energy loss makes it necessary to critically analyze the interaction between the SFM probe tip and the sample, in order to recognize and take into account systematic errors. Such errors originate especially from the finite size of the sensor tip. This work presents both an uncomplicated model of the SFM imaging process and its experimental verification, allowing one to quantify the influence of the tip geometry on the recorded micrographs and correct the resulting data accordingly. For this purpose, a computer program was developed which is able, firstly, to determine the tip geometry by means of the known geometry of a calibration standard. Secondly, using this tip geometry, the program reproduces the original sample topography containing the radiation damage structures under study. This is illustrated representatively for artificially generated images and also for a sample micrograph recorded on the surface of U-irradiated CaF2 to prove the efficiency of the suggested procedures. Afterwards, an existing set of images showing the calibration standard 2D200 (NANOSENSORS) is used to classify the average tip shape. Because no large variations in this shape occur, the procedure of imaging the calibration standard for each measurement can be replaced by using this average tip for reconstruction.
The article concludes with the elimination of systematic errors in existing data sets of hillock diameters recorded on LiF, CaF2 and LaF3 after irradiation with swift heavy ions.
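Tip-geometry correction of this kind is conventionally formulated with grey-scale morphology: the recorded image is the dilation of the surface by the tip, and erosion by the same tip yields an upper-bound reconstruction. A 1-D sketch, assuming a symmetric tip profile (a generic illustration, not the authors' program):

```python
import numpy as np

def image_of(surface, tip):
    """Grey-scale dilation: the topograph a probe of profile `tip`
    (apex height 0, symmetric) records over `surface` (1-D sketch)."""
    n, m = len(surface), len(tip)
    pad = np.pad(surface.astype(float), m // 2, constant_values=-np.inf)
    return np.array([np.max(pad[i:i + m] + tip) for i in range(n)])

def reconstruct(image, tip):
    """Grey-scale erosion: an upper-bound reconstruction of the surface."""
    n, m = len(image), len(tip)
    pad = np.pad(image.astype(float), m // 2, constant_values=np.inf)
    return np.array([np.min(pad[i:i + m] - tip) for i in range(n)])

tip = -0.5 * np.arange(-2, 3) ** 2          # parabolic tip profile
surface = np.zeros(11); surface[5] = 1.0    # a single sharp hillock
img = image_of(surface, tip)                # hillock appears broadened
rec = reconstruct(img, tip)                 # rec >= surface everywhere
print(img)
print(rec)
```

The reconstruction recovers the hillock apex exactly but can only bound the flanks from above, which is precisely the tip-size systematic error the abstract describes for hillock diameters.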

Müller, C.; Voss, K.-O.; Lang, M.; Neumann, R.

2003-12-01

186

A Posteriori Error Estimation for a Nodal Method in Neutron Transport Calculations

An a posteriori error analysis of the spatial approximation is developed for the one-dimensional Arbitrarily High Order Transport-Nodal method. The error estimator preserves the order of convergence of the method when the mesh size tends to zero with respect to the L^2 norm. It is based on the difference between two discrete solutions that are available from the analysis. The proposed estimator is decomposed into error indicators to allow the quantification of local errors. Some test problems with isotropic scattering are solved to compare the behavior of the true error to that of the estimated error.
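The estimator's key idea, using the difference between two available discrete solutions, can be illustrated on a much simpler discretization; here a toy second-order quadrature stands in for the transport-nodal solver, with Richardson-style scaling assumed:

```python
import numpy as np

def solve(n):
    """Toy 'discrete solution': composite trapezoid rule for the
    integral of sin(x) on [0, pi], whose exact value is 2."""
    x = np.linspace(0.0, np.pi, n + 1)
    y = np.sin(x)
    h = np.pi / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# a posteriori estimate from the difference of two discrete solutions;
# the 1/3 factor assumes a second-order method (Richardson-style scaling)
coarse, fine = solve(50), solve(100)
est = abs(fine - coarse) / 3.0
true_err = abs(fine - 2.0)
print(est, true_err)   # the estimator closely tracks the true error
```

Because the two solutions differ by a known power of the mesh size, their difference carries the leading error term, which is the same mechanism the nodal estimator exploits before decomposition into local indicators.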

Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.

1999-11-03

187

On GPS Water Vapour estimation and related errors

NASA Astrophysics Data System (ADS)

Water vapour (WV) is one of the most important constituents of the atmosphere: it plays a crucial role in the earth's radiation budget through absorption of both the incoming shortwave and the outgoing longwave radiation, and it is one of the main greenhouse gases of the atmosphere, by far the one with the highest concentration. In addition, moisture and latent heat are transported through the WV phase, which is one of the driving factors of weather dynamics, feeding the evolution of cloud systems. Accurate, dense and frequent sampling of WV at different scales is consequently of great importance for climatology and meteorology research as well as operational weather forecasting. Since the development of satellite positioning systems, it has been clear that the troposphere and its WV content are a source of delay in the positioning signal: a source of error in the positioning process, or in turn a source of information for meteorology. The use of the GPS (Global Positioning System) signal for WV estimation has increased in recent years, starting from measurements collected by ground-fixed dual-frequency GPS geodetic stations. This technique for processing the GPS data is based on measuring the signal travel time along the satellite-receiver path and then processing the signal to filter out all delay contributions except the tropospheric one. Once the tropospheric delay is computed, the wet and dry parts are decoupled under some hypotheses on the tropospheric structure and/or through ancillary information on pressure and temperature. The processing chain normally aims at producing a vertical Integrated Water Vapour (IWV) value. The other, non-tropospheric delays are due to ionospheric free electrons, relativistic effects, multipath effects, transmitter and receiver instrumental biases, and signal bending. The total effect is a delay in the signal travel time with respect to the geometrical straight path.
The GPS signal has the advantage of being nearly costless and practically continuous (every second) with respect to atmospheric dynamics. The spatial resolution is correlated with the number and spacing (i.e. density) of ground-fixed stations and in principle can be very high (and is certainly increasing). The problem can reside in the errors made in decoupling the various delay components and in the approximations assumed when computing the IWV from the wet delay component. Such errors are often "masked" by the use of available software packages for GPS data processing; as a consequence, the errors associated with the final WV products are more easily obtained from a posteriori validation than derived from rigorous error propagation analyses. In this work we present a technique to compute the different components necessary to retrieve WV measurements from the GPS signal, with a critical analysis of all approximations and errors made in the processing procedure, also in view of the great opportunity that the European GALILEO system will bring to this field.
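The final step of the processing chain, converting the zenith wet delay to IWV, is commonly done with a Bevis-style mean-temperature formula. A sketch follows; the refractivity constants are standard literature values, and the mean temperature Tm is simply assumed here rather than derived from ancillary data:

```python
# Bevis-style conversion of zenith wet delay (ZWD) to integrated water
# vapour; constants are standard literature values, Tm is assumed here.
RV = 461.5           # specific gas constant of water vapour, J/(kg K)
K2_PRIME = 0.221     # refractivity constant k2', K/Pa
K3 = 3.739e3         # refractivity constant k3, K^2/Pa

def iwv_from_zwd(zwd_m, tm_kelvin):
    """Integrated water vapour (kg/m^2) from zenith wet delay (m)."""
    return zwd_m / (1e-6 * RV * (K2_PRIME + K3 / tm_kelvin))

# a 0.20 m wet delay with Tm = 270 K corresponds to roughly 31 kg/m^2
print(iwv_from_zwd(0.20, 270.0))
```

Errors in Tm propagate almost linearly into IWV, which is one of the approximation errors the abstract argues should be tracked by explicit propagation rather than a posteriori validation alone.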

Antonini, Andrea; Ortolani, Alberto; Rovai, Luca; Benedetti, Riccardo; Melani, Samantha

2010-05-01

188

Analysis of Modeling and Bias Errors in Discrete-Time State Estimation

This paper concerns the effects of modeling and bias errors in discrete-time state estimation. The newly derived algorithms include the effect of correlation between plant and measurement noise in the system. The effects of nonzero mean noise terms and bias errors are considered. With plant or measurement matrix errors, divergence can occur. The local or linear sensitivity approach to error

R. J. Brown; A. P. Sage

1971-01-01

189

NASA Astrophysics Data System (ADS)

Redshift-space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, fσ8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realizations of 3.4 × 10^8 comoving h^-3 Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z = 0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 × 10^11 to 2.0 × 10^13 h^-1 M⊙. We find that the systematic error of fσ8 is greatly reduced to the ~5 per cent level when a recently proposed analytical formula of RSD that takes into account the higher order coupling between the density and velocity fields is adopted, with a scale-dependent parametric bias model. Dependence of the systematic error on the halo mass, the redshift and the maximum wavenumber used in the analysis is discussed. We also find that the Wilson-Hilferty transformation is useful to improve the accuracy of likelihood analysis when only a small number of modes are available in power spectrum measurements.

Ishikawa, Takashi; Totani, Tomonori; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari

2014-10-01

190

FINITE DIFFERENCE METHODS AND SPATIAL A POSTERIORI ERROR ESTIMATES FOR SOLVING PARABOLIC EQUATIONS. In this paper, modified finite difference approximations are obtained, and a posteriori error estimates of the spatial error are presented for the finite difference method.

Moore, Peter K.

191

NASA Astrophysics Data System (ADS)

Numerical weather prediction (NWP) models have deficiencies in surface and boundary layer parameterizations, which may be particularly acute over complex terrain. Structural and physical model deficiencies are often poorly understood, and can be difficult to identify. Uncertain model parameters can lead to one class of model deficiencies when they are mis-specified. By augmenting the model state variables with parameters, data assimilation can be used to estimate the parameter distributions as long as the forecasts of the observed variables depend linearly on the parameters. Reduced forecast (background) error shows that the parameter is accounting for some component of model error. Ensemble data assimilation has the favorable characteristic of providing ensemble-mean parameter estimates, eliminating some noise in the estimates when additional constraints on the error dynamics are unknown. This study focuses on coupling the Weather Research and Forecasting (WRF) NWP model with the Data Assimilation Research Testbed (DART) to estimate the Zilitinkevich parameter (CZIL). CZIL controls the thermal 'roughness length' for a given momentum roughness, thereby controlling heat and moisture fluxes through the surface layer by specifying the (unobservable) aerodynamic surface temperature. Month-long data assimilation experiments with 96 ensemble members, and grid spacing down to 3.3 km, provide a data set for interpreting parametric model errors in complex terrain. Experiments are conducted during fall 2012 over the western U.S., and radiosonde, aircraft, satellite wind, surface, and mesonet observations are assimilated every 3 hours. One ensemble has a globally constant value of CZIL=0.1 (the WRF default value), while a second ensemble allows CZIL to vary over the range [0.01, 0.99], with distributions updated via the assimilation. Results show that the CZIL estimates do vary in time and space.
Most often, forecasts are more skillful with the updated parameter values, compared to the fixed default values, suggesting that the parameters account for some systematic errors. Because the parameters can account for multiple sources of errors, the importance of terrain in determining surface-layer errors can be deduced from parameter estimates in complex terrain; parameter estimates with spatial scales similar to the terrain indicate that terrain is responsible for surface-layer model errors. We will also comment on whether residual errors in the state estimates and predictions appear to suggest further parametric model error, or some other source of error that may arise from incorrect similarity functions in the surface-layer schemes.
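
The state-augmentation idea described above can be sketched with a toy scalar model (all numbers hypothetical; nothing here comes from WRF, DART, or the CZIL setup): the unknown parameter is appended to the state vector and updated through its ensemble covariance with the observed forecast.

```python
import numpy as np

rng = np.random.default_rng(42)
a_true, obs_err = 0.9, 0.1          # hypothetical "true" parameter and obs noise
n_ens, n_cycles = 50, 200

x_truth = 1.0
x_ens = np.full(n_ens, 1.0)
a_ens = rng.uniform(0.4, 1.2, n_ens)    # broad prior spread on the parameter

for _ in range(n_cycles):
    # forecast step: truth and members advance, each member with its own parameter
    x_truth = a_true * x_truth + 1.0        # constant forcing keeps the state O(10)
    x_ens = a_ens * x_ens + 1.0
    y_obs = x_truth + rng.normal(0.0, obs_err)
    # analysis step: Kalman update of the augmented vector (x, a) using
    # ensemble variances and the cross-covariance between a and the forecast x
    denom = np.var(x_ens) + obs_err**2
    k_x = np.var(x_ens) / denom
    k_a = np.cov(a_ens, x_ens)[0, 1] / denom
    innov = y_obs - x_ens
    x_ens = x_ens + k_x * innov
    a_ens = a_ens + k_a * innov
    # small additive inflation keeps the parameter spread from collapsing
    a_ens = a_ens + rng.normal(0.0, 0.005, n_ens)

a_est = float(np.mean(a_ens))       # ensemble-mean parameter estimate
```

The ensemble-mean estimate drifts toward the true parameter because members whose parameter explains the observations receive smaller innovations; the inflation term plays the role of the "additional constraints on the error dynamics" being unknown.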

Hacker, Joshua; Lee, Jared; Lei, Lili

2014-05-01

192

Managing Systematic Errors in a Polarimeter for the Storage Ring EDM Experiment

NASA Astrophysics Data System (ADS)

The EDDA plastic scintillator detector system at the Cooler Synchrotron (COSY) has been used to demonstrate that it is possible, using a thick target at the edge of the circulating beam, to meet the requirements for a polarimeter to be used in the search for an electric dipole moment on the proton or deuteron. Emphasizing elastic and low Q-value reactions leads to large analyzing powers and, along with thick targets, to efficiencies near 1%. Using only information obtained by comparing count rates for oppositely vector-polarized beam states and a calibration of the sensitivity of the polarimeter to rate and geometric changes, the contribution of systematic errors can be suppressed below the level of one part per million.

Stephenson, Edward J.; Storage Ring EDM Collaboration

2011-05-01

193

Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS.
Most of the errors were correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning.Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.
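
As a rough illustration of why the conventional criterion is insensitive, a simplified one-dimensional, global-normalization gamma computation (synthetic profiles, not the paper's data) shows a deliberate 2% output error plus a 1 mm shift passing 3%/3 mm essentially everywhere:

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dose_tol=0.03, dta=3.0):
    """Global-normalization 1D gamma index (default 3% / 3 mm)."""
    norm = dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, dri) in enumerate(zip(x, dose_ref)):
        dd = (dose_eval - dri) / (dose_tol * norm)   # dose-difference term
        dx = (x - xi) / dta                          # distance-to-agreement term
        gam[i] = np.sqrt(dd**2 + dx**2).min()        # min over all eval points
    return gam

x = np.linspace(0.0, 100.0, 101)                  # position (mm), 1 mm grid
ref = np.exp(-((x - 50.0) / 20.0) ** 2)           # synthetic reference profile
ev = 1.02 * np.exp(-((x - 51.0) / 20.0) ** 2)     # 2% output error + 1 mm shift
pass_33 = float(np.mean(gamma_1d(ref, ev, x) <= 1.0) * 100.0)
# the deliberately introduced error still sails past the 3%/3 mm criterion
```

Tightening the tolerances or switching to local normalization in `gamma_1d` makes the same injected error visible, which is the paper's central point.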

Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States)] [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States); Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States)] [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States); Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada)] [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada); Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States)] [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States); Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States)] [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States); Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)] [Moffitt Cancer Center, Tampa, Florida 33612 (United States)

2013-11-15

194

Systematic reduction of sign errors in many-body calculations of atoms and molecules

NASA Astrophysics Data System (ADS)

We apply the self-healing diffusion Monte Carlo algorithm (SHDMC) [Phys. Rev. B 79 195117 (2009); ibid. 80 125110 (2009)] to the calculation of ground states of atoms and molecules. By comparing with configuration interaction results, we show the method yields systematic convergence towards the exact ground-state wave function and reduction of the fixed-node DMC sign error. We present results for atoms and light molecules, obtaining, e.g., the binding of N2 to chemical accuracy. Moreover, we demonstrate that the algorithm is robust enough to be used for systems as large as the fullerene C20, starting from a set of random coefficients. SHDMC thus constitutes a practical method for systematically reducing the Fermion sign problem in electronic structure calculations. Research sponsored by the ORNL LDRD program (MB), U.S. DOE BES Divisions of Materials Sciences & Engineering (FAR, MLT) and Scientific User Facilities (PRCK). LLNL research was performed under U.S. DOE contract DE-AC52-07NA27344 (RQH).

Kent, P. R. C.; Bajdich, M.; Tiago, M. L.; Hood, R. Q.; Reboredo, F. A.

2010-03-01

195

Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal-poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several "high quality" systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra.
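
The machinery underneath is generic Metropolis sampling with an optional external prior; a toy one-parameter sketch (illustrative only, not the authors' multi-parameter He-abundance model, and all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(10.0, 2.0, 50)        # synthetic "observations"

def log_post(theta, prior_sigma=None):
    """Gaussian log-likelihood plus an optional conservative Gaussian prior,
    analogous to imposing the [O III]-derived temperature as a weak prior."""
    lp = -0.5 * np.sum((data - theta) ** 2) / 2.0**2
    if prior_sigma is not None:
        lp += -0.5 * (theta - 10.0) ** 2 / prior_sigma**2
    return lp

def metropolis(log_p, theta0, n_steps=20_000, step=0.5):
    """Random-walk Metropolis sampler; returns the post-burn-in chain."""
    chain = np.empty(n_steps)
    theta, lp = theta0, log_p(theta0)
    for i in range(n_steps):
        prop = theta + rng.normal(0.0, step)
        lp_prop = log_p(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain[n_steps // 2:]                   # discard burn-in

chain = metropolis(lambda t: log_post(t, prior_sigma=5.0), theta0=0.0)
estimate = float(np.mean(chain))
```

With a deliberately weak (conservative) prior the posterior is dominated by the data, mirroring the paper's strategy of suppressing the false optical-depth minimum without biasing the result.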

Aver, Erik [School of Physics and Astronomy, University of Minnesota, 116 Church St. SE, Minneapolis, MN 55455 (United States); Olive, Keith A. [William I. Fine Theoretical Physics Institute, University of Minnesota, 116 Church St. SE, Minneapolis, MN 55455 (United States); Skillman, Evan D., E-mail: aver@physics.umn.edu, E-mail: olive@umn.edu, E-mail: skillman@astro.umn.edu [Astronomy Department, University of Minnesota, 116 Church St. SE, Minneapolis, MN 55455 (United States)

2011-03-01

196

Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

NASA Astrophysics Data System (ADS)

We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ˜10 m s⁻¹ precision over the entire optical wavelength range on scales of both echelle orders (˜50-100 Å) and entire spectrograph arms (˜1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s⁻¹ per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.

Whitmore, Jonathan B.; Murphy, Michael T.

2015-02-01

197

Diffusion weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: on the one hand, the presence of signal noise can lead to artificial divergence of the diffusivities; on the other hand, the use of a simplified model for the interaction of the protons with the diffusion weighting and imaging field gradients (b-matrix calculation), common in the clinical setting, also leads to deviations in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state-of-the-art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging requires combining a correct calculation of the b-matrix and a sufficiently large signal-to-noise ratio. PMID:24761372
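
The fit that both error sources perturb is the standard log-linear tensor estimate, S = S0·exp(−b:D). A sketch with an isotropic synthetic "phantom" and six hypothetical gradient directions (the b-value, S0, and noise level are illustrative, not the paper's acquisition parameters):

```python
import numpy as np

rng = np.random.default_rng(3)
# isotropic tensor, unique elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz] in mm^2/s
D_true = np.array([1.0e-3, 1.0e-3, 1.0e-3, 0.0, 0.0, 0.0])

dirs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                 [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
b_val = 1000.0  # s/mm^2
# b-matrix rows [bxx, byy, bzz, 2bxy, 2bxz, 2byz] so that B @ D = b : D
B = np.array([[g[0]**2, g[1]**2, g[2]**2,
               2*g[0]*g[1], 2*g[0]*g[2], 2*g[1]*g[2]] for g in dirs]) * b_val

S0 = 1000.0
S = S0 * np.exp(-B @ D_true)
S_noisy = S + rng.normal(0.0, 2.0, S.shape)   # additive signal noise

# least-squares solve for the 6 unique tensor elements from the log signal
y = -np.log(S_noisy / S0)
D_fit, *_ = np.linalg.lstsq(B, y, rcond=None)
mean_diffusivity = float(np.mean(D_fit[:3]))
```

The paper's two error sources map directly onto this sketch: noise in `S_noisy` biases the log-linear fit, and a simplified `B` (ignoring imaging-gradient cross terms) distorts the recovered tensor even with noiseless data.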

Boujraf, Saïd

2014-04-01

198

Uncertainty modeling of random and systematic errors by means of Monte Carlo and fuzzy techniques

NASA Astrophysics Data System (ADS)

The standard reference in uncertainty modeling is the “Guide to the Expression of Uncertainty in Measurement (GUM)”. GUM groups the occurring uncertain quantities into “Type A” and “Type B”. Uncertainties of “Type A” are determined with the classical statistical methods, while “Type B” covers other uncertainties, which are evaluated from experience and knowledge of an instrument or a measurement process. Both types of uncertainty can have random and systematic error components. Our study focuses on a detailed comparison of probability and fuzzy-random approaches for handling and propagating the different uncertainties, especially those of “Type B”. Whereas a probabilistic approach treats all uncertainties as having a random nature, the fuzzy technique distinguishes between random and deterministic errors. In the fuzzy-random approach the random components are modeled in a stochastic framework, and the deterministic uncertainties are treated by means of a range-of-values search problem. The applied procedure is outlined showing both the theory and a numerical example for the evaluation of uncertainties in an application for terrestrial laser scanning (TLS).
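
The contrast between the two treatments can be sketched on a toy measurand (all numbers hypothetical): the random Type A scatter is propagated by Monte Carlo sampling, while the deterministic Type B component is carried as a worst-case interval, i.e. a trivial range-of-values search rather than a distribution.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

# Type A: repeated-measurement scatter of a length L, propagated by Monte Carlo
L = rng.normal(100.0, 0.05, N)

# Type B: a scale error known only to lie in +/-0.001 of unity; the
# fuzzy-random treatment assigns it no distribution, only an interval
scale_lo, scale_hi = 0.999, 1.001

# range-of-values search over the deterministic interval (trivial here,
# since d = L * scale is monotone in the scale factor)
d_lo = L * scale_lo
d_hi = L * scale_hi

random_std = float(np.std(L))                          # stochastic component
interval_halfwidth = float(np.mean(d_hi - d_lo) / 2)   # deterministic component
```

A purely probabilistic GUM treatment would instead draw the scale factor from an assumed distribution and fold both components into one standard uncertainty; the interval formulation keeps the systematic part separate, as the fuzzy-random approach advocates.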

Alkhatib, Hamza; Neumann, Ingo; Kutterer, Hansjörg

2009-06-01

199

X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance comes from the inadequate present level of optical and at-wavelength metrology and insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 μrad have become routine. We present recent results from the ALS of temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

2010-07-09


201

An estimate of asthma prevalence in Africa: a systematic analysis

Aim To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to provide information that will help inform the planning of the public health response to the disease. Methods We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year. Results Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers. Conclusion Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be under-estimated. 
There is a need for national governments in Africa to consider the implications of this increasing disease burden and to investigate the relative importance of underlying risk factors such as rising urbanization and population aging in their policy and health planning responses to this challenge. PMID:24382846

Adeloye, Davies; Chan, Kit Yee; Rudan, Igor; Campbell, Harry

2013-01-01

202

NASA Technical Reports Server (NTRS)

Residual errors in the Seldner et al. (SSGP) map which caused a break in both the correlation factor (CF) and the filamentary appearance of the Shane-Wirtanen map are examined. These errors, causing a residual rms fluctuation of 11 percent in the SSGP-corrected counts and a systematic rms offset of 8 percent in the mean count per plate, can be attributed to counting pattern and plate vignetting. Techniques for CF reconstruction in catalogs affected by plate-related systematic biases are examined, and it is concluded that accurate restoration may not be possible. Surveys designed to measure the CF at the depth of the SW counts on a scale of 2.5 deg must have systematic errors of less than or about 0.04 mag.

De Lapparent, V.; Kurtz, M. J.; Geller, M. J.

1986-01-01

203

Variable size block matching motion estimation with minimal error

NASA Astrophysics Data System (ADS)

We report two techniques for variable size block matching (VSBM) motion compensation. Firstly an algorithm is described which, based on a quad-tree structure, results in the optimal selection of variable-sized square blocks. It is applied in a VSBM scheme in which the total mean squared error is minimized. This provides the best-achievable performance for a quad-tree based VSBM technique. Although it is computationally demanding and hence impractical for real-time codecs, it does provide a yardstick by which the performance of other VSBM techniques can be measured. Secondly, a new VSBM algorithm which adopts a `bottom-up' approach is described. The technique starts by computing sets of `candidate' motion vectors for fixed-size small blocks. Blocks are then effectively merged in a quad-tree manner if they have similar motion vectors. The result is a computationally-efficient VSBM technique which attempts to estimate the `true' motion within the image. Both methods have been tested on a number of real image sequences. In all cases the new `bottom-up' technique was only marginally worse than the optimal VSBM method but significantly better than fixed-size block matching and other known VSBM implementations.
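
The `bottom-up' step can be sketched as a fixed-size SAD search followed by a merge test on quad-tree siblings (block size, search range, and frame content below are arbitrary choices, not the paper's parameters):

```python
import numpy as np

def block_mv(prev, cur, y, x, bs=8, search=4):
    """Best (dy, dx) for the bs x bs block of `cur` at (y, x), by exhaustive
    minimum-SAD search over `prev` within +/-search pixels."""
    best, mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and 0 <= xx and yy + bs <= prev.shape[0] and xx + bs <= prev.shape[1]:
                sad = np.abs(cur[y:y+bs, x:x+bs].astype(int)
                             - prev[yy:yy+bs, xx:xx+bs].astype(int)).sum()
                if sad < best:
                    best, mv = sad, (dy, dx)
    return mv

# synthetic frames: the current frame is the previous one shifted 2 px right
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32))
cur = np.roll(prev, 2, axis=1)

# motion vectors for four quad-tree sibling blocks
mvs = {(y, x): block_mv(prev, cur, y, x) for y in (8, 16) for x in (8, 16)}
# all four siblings agree, so a bottom-up VSBM scheme would merge them
mergeable = len(set(mvs.values())) == 1
```

In a full codec the merge test would use a similarity threshold rather than exact equality, and merging would recurse up the quad-tree until vectors disagree.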

Martin, Graham R.; Packwood, Roger A.; Rhee, Injong

1996-03-01

204

Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation

We construct a prediction rule on the basis of some data, and then wish to estimate the error rate of this rule in classifying future observations. Cross-validation provides a nearly unbiased estimate, using only the original data. Cross-validation turns out to be related closely to the bootstrap estimate of the error rate. This article has two purposes: to understand better
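
The nearly unbiased estimate discussed here is easy to sketch: below, leave-one-out cross-validation of a simple nearest-class-mean rule is computed alongside the optimistic resubstitution (apparent) error on synthetic data (an illustration of the general idea, not Efron's experiments):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100
X = np.concatenate([rng.normal(-1.0, 1.0, n), rng.normal(1.0, 1.0, n)])
y = np.concatenate([np.zeros(n), np.ones(n)])

def nearest_mean_predict(X_tr, y_tr, x):
    """Classify x to whichever class mean is closer."""
    m0, m1 = X_tr[y_tr == 0].mean(), X_tr[y_tr == 1].mean()
    return 0 if abs(x - m0) < abs(x - m1) else 1

# apparent (resubstitution) error: train and test on the same data -> optimistic
apparent = float(np.mean(
    [nearest_mean_predict(X, y, xi) != yi for xi, yi in zip(X, y)]))

# leave-one-out CV: each point is predicted by a rule trained without it
loo = float(np.mean([
    nearest_mean_predict(np.delete(X, i), np.delete(y, i), X[i]) != y[i]
    for i in range(len(X))
]))
```

For this nearly-stable rule the two estimates are close; for highly adaptive rules the apparent error can be far too small, which is the gap the cross-validation and bootstrap estimates are designed to close.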

Bradley Efron

1983-01-01

205

Power control for the additive white Gaussian noise channel under channel estimation errors

We investigate the time-varying additive white Gaussian noise channel with imperfect side-information. In practical systems, the channel gain may be estimated from a probing signal and estimation errors cannot be avoided. The goal of this paper is to determine a power allocation that a priori incorporates statistical knowledge of the estimation error. This is in contrast to prior work which

Thierry E. Klein; Robert G. Gallager

2001-01-01

206

ENERGY NORM A POSTERIORI ERROR ESTIMATION OF HP-ADAPTIVE DISCONTINUOUS GALERKIN METHODS ... of the proposed estimators within an automatic hp-adaptive refinement procedure. Key words. Discontinuous Galerkin

207

Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors

NASA Astrophysics Data System (ADS)

GNSS Radio Occultation (RO) data are very well suited for climate applications, since they require no external calibration and only short-term measurement stability over the occultation event duration (1-2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05 % and 0.05 K in almost any case (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already allows us to see statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, like the influence of the quality control and the high altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity with atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are not negligible anymore. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor).
This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the night time bending angle bias stays constant over the whole period of 11 years, while the day time bias increases from low to high solar activity. As a result, the difference between night and day time bias increases from -0.05 μrad to -0.4 μrad. This behavior paves the way to correct the (small) solar cycle dependent bias of large ensembles of day time RO profiles.
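
The first-order correction whose residual is studied here is the standard dual-frequency linear combination of bending angles. A sketch with a synthetic, purely 1/f² ionospheric term (illustrative numbers, not CHAMP/COSMIC data) shows the first-order term cancelling exactly; real data retain the higher-order residual the abstract discusses.

```python
# GPS L1 and L2 carrier frequencies (Hz)
f1, f2 = 1.57542e9, 1.22760e9

def iono_corrected_bending(alpha1, alpha2):
    """First-order dual-frequency correction: the linear combination that
    removes a 1/f^2 ionospheric contribution from the bending angle."""
    return (f1**2 * alpha1 - f2**2 * alpha2) / (f1**2 - f2**2)

# synthetic bending angles: neutral atmosphere plus a pure 1/f^2 iono term
alpha_neutral = 0.01          # rad, neutral-atmosphere bending
iono_coeff = 1.0e15           # rad*Hz^2, scales with the ionization level
a1 = alpha_neutral + iono_coeff / f1**2
a2 = alpha_neutral + iono_coeff / f2**2

corrected = iono_corrected_bending(a1, a2)
# the 1/f^2 term cancels exactly in this idealized construction
```

Because the real ionospheric contribution also has higher-order (1/f³, 1/f⁴) terms, the combination leaves a small residual that grows with ionization, which is the day/night, solar-cycle-dependent bias described above.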

Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.

2013-05-01

208

Simple error estimators for the Galerkin BEM for some hypersingular integral equation in 2D

A posteriori error estimation is an important tool for reliable and efficient Galerkin boundary element computations. For hypersingular integral equations in 2D with a positive-order Sobolev space, we analyse the mathematical relation between the (h-h/2)-error estimator from [S. Ferraz-Leite and D. Praetorius, Simple a posteriori error estimators for the h-version of the boundary element method, Computing 83 (2008), pp. 135–162],

C. Erath; S. Funken; P. Goldenits; D. Praetorius

2012-01-01

209

In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

Ju, Lili [University of South Carolina; Tian, Li [University of South Carolina; Wang, Desheng [Nanyang Technological University

2009-01-01

210

A Causal Model for Software Cost Estimating Error

Software cost estimation is an important concern for software managers and other software professionals. The hypothesized model in this research suggests that an organization's use of an estimate influences its estimating practices which influence both the basis of the estimating process and the accuracy of the estimate. The model also suggests that the estimating basis directly influences the accuracy of

Albert L. Lederer; Jayesh Prasad

1998-01-01

211

Limited resolution in chemistry transport models (CTMs) is necessarily associated with systematic errors in the calculated chemistry, due to the artificial mixing of species on the scale of the model grid (grid-averaging). Here, the errors in calculated hydroxyl radical (OH) concentrations and ozone production rates P(O3) are investigated quantitatively using both direct observations and model results. Photochemical

J. G. Esler; G. J. Roelofs; M. O. Kohler; F. M. O’Connor

2004-01-01

212

NASA Technical Reports Server (NTRS)

This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

2002-01-01

213

GPS satellite clock error estimation for real time PPP and the assessment of position quality

NASA Astrophysics Data System (ADS)

The real-time PPP method requires the availability of precise orbits and satellite clock corrections in real time. Currently, it is possible to use the predicted IGU ephemerides made available by the IGS centers. However, the satellite clock corrections available in the IGU products are not accurate enough (3 ns, or 0.9 m) to accomplish real-time PPP with centimeter accuracy. Therefore, it is necessary to develop appropriate methodologies for estimating the satellite clock corrections in real time with better quality. The estimation of satellite clock corrections can be carried out based on a GNSS reference network, performing the adjustment in a combined PPP mode. Thus, all systematic effects involved with the GNSS satellite signals must be modeled appropriately for each station of the network. Once the satellite clock corrections are estimated in real time, they should be sent to the users, who apply them in the GNSS data processing of a particular station, also in real-time PPP mode. To achieve this aim, a system composed of two software packages was developed: one for estimating the satellite clock corrections based on data from a GNSS network, and the other for carrying out the real-time PPP. The results were generated in real-time and post-processed mode (simulating real time). The satellite clock corrections were estimated based on pseudorange measurements smoothed by carrier phase, and also using the original pseudorange and carrier phase with ambiguity estimation for each satellite available at each station. The daily accuracy of the estimated satellite clock corrections reached the order of 0.15 ns (0.05 m), and their application in GNSS positioning shows that it is now possible to accomplish real-time kinematic PPP with an accuracy on the order of 10 to 20 cm.
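The "pseudorange smoothed by carrier phase" mentioned in this abstract is commonly implemented as a Hatch filter. A minimal sketch of that general technique (not the authors' software; the function name, window length, and noise levels in the example are illustrative assumptions):

```python
import numpy as np

def hatch_filter(pseudorange, carrier_phase, window=100):
    """Carrier-smoothed pseudorange (Hatch filter).

    Blends the noisy pseudorange with the far less noisy epoch-to-epoch
    carrier-phase differences; `window` caps the averaging length so that
    slowly varying biases (e.g. ionospheric divergence) do not accumulate
    indefinitely.
    """
    smoothed = np.empty_like(pseudorange, dtype=float)
    smoothed[0] = pseudorange[0]
    for k in range(1, len(pseudorange)):
        n = min(k + 1, window)
        # Propagate the previous estimate with the carrier-phase delta,
        # then blend in the new pseudorange measurement.
        predicted = smoothed[k - 1] + (carrier_phase[k] - carrier_phase[k - 1])
        smoothed[k] = pseudorange[k] / n + predicted * (n - 1.0) / n
    return smoothed
```

On simulated data with meter-level pseudorange noise and centimeter-level carrier noise, the smoothed series tracks the true range much more closely than the raw pseudorange.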

Galera Monico, J. F.; Marques, H. A.

2012-12-01

214

Gas hydrate estimation error associated with uncertainties of measurements and parameters

Downhole log measurements such as acoustic or electrical resistivity logs are often used to estimate in situ gas hydrate concentrations in sediment pore space. Estimation errors owing to uncertainties associated with downhole measurements and the parameters for estimation equations (weight in the acoustic method and Archie's parameters in the resistivity method) are analyzed in order to assess the accuracy of estimation of gas hydrate concentration. Accurate downhole measurements are essential for accurate estimation of the gas hydrate concentrations in sediments, particularly at low gas hydrate concentrations and when using acoustic data. Estimation errors owing to measurement errors, except the slowness error, decrease as the gas hydrate concentration increases and as porosity increases. Estimation errors owing to uncertainty in the input parameters are small in the acoustic method and may be significant in the resistivity method at low gas hydrate concentrations.

Lee, Myung W.; Collett, Timothy S.

2001-01-01

215

NASA Astrophysics Data System (ADS)

Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave (SNS) energy fluxes are detected in simulations of RegCM at 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and the other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of albedo errors is primarily confined to north Africa, where e.g. the underestimation of albedo in JJA is consistent with the associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of the PBL scheme and to various parameters in the PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs with mixed success over Europe when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over most of the domain. Nevertheless, improvements in T2m can be found in e.g. north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of the mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work on customizing the PBL scheme are suggested.

Güttler, I.

2012-04-01

216

NASA Astrophysics Data System (ADS)

We measure the long-term systematic component of the astrometric error in the GeMS MCAO system as a function of field radius and Ks magnitude. The experiment uses two epochs of observations of NGC 1851 separated by one month. The systematic component is estimated for each of three field of view cases (15'' radius, 30'' radius, and full field) and each of three distortion correction schemes: 8 DOF/chip + local distortion correction (LDC), 8 DOF/chip with no LDC, and 4 DOF/chip with no LDC. For bright, unsaturated stars with 13 < Ks < 16, the systematic component is < 0.2, 0.3, and 0.4 mas, respectively, for the 15'' radius, 30'' radius, and full field cases, provided that an 8 DOF/chip distortion correction with LDC (for the full-field case) is used to correct distortions. An 8 DOF/chip distortion-correction model always outperforms a 4 DOF/chip model, at all field positions and magnitudes and for all field-of-view cases, indicating the presence of high-order distortion changes. Given the order of the models needed to correct these distortions (~8 DOF/chip or 32 degrees of freedom total), it is expected that at least 25 stars per square arcminute would be needed to keep systematic errors at less than 0.3 milliarcseconds for multi-year programs. We also estimate the short-term astrometric precision of the newly upgraded Shane AO system with undithered M92 observations. Using a 6-parameter linear transformation to register images, the system delivers ~0.3 mas astrometric error over short-term observations of 2-3 minutes.

Ammons, S. M.; Neichel, Benoit; Lu, Jessica; Gavel, Donald T.; Srinath, Srikar; McGurk, Rosalie; Rudy, Alex; Rockosi, Connie; Marois, Christian; Macintosh, Bruce; Savransky, Dmitry; Galicher, Raphael; Bendek, Eduardo; Guyon, Olivier; Marin, Eduardo; Garrel, Vincent; Sivo, Gaetano

2014-08-01

217

NASA Astrophysics Data System (ADS)

Radio Occultation (RO) sensing is used to probe the Earth's atmosphere in order to obtain information about its physical properties. With the main interest in the parameters of the neutral atmosphere, there is the need to perform a correction of the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25 km to 30 km) it is even an important error source. In climate applications this could lead to a time-dependent bias which induces spurious trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. In order to tackle this problem we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001 to 2011. We observed that the nighttime bending angle bias stays constant over the whole period of 11 years, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from about -0.05 µrad to -0.4 µrad. This behavior paves the way to correcting the solar cycle dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. Also in the simulated data we observed a similar increase in the bias from times of low to times of high solar activity. 
In this model world we performed the climatological ionospheric correction of the bending angle data, by using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.
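The climatological correction step described above can be sketched schematically: the solar-cycle dependent daytime residual is estimated as the day-minus-night high-altitude bending-angle bias of a large profile ensemble, then subtracted from daytime profiles. The function name and the numbers in the example are illustrative assumptions, not the authors' processing code:

```python
import numpy as np

def correct_daytime_profiles(profiles, day_bias, night_bias):
    """Climatological ionospheric correction sketch.

    profiles: array of daytime bending angles (rad).
    day_bias, night_bias: ensemble-mean high-altitude bending-angle
    biases (rad) for day and night profiles from the same epoch; their
    difference isolates the solar-cycle dependent ionospheric residual
    (per the abstract, roughly -0.05 to -0.4 µrad over a solar cycle).
    """
    residual_bias = day_bias - night_bias
    return profiles - residual_bias
```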

Danzer, Julia; Scherllin-Pirscher, Barbara; Foelsche, Ulrich

2013-04-01

218

NASA Astrophysics Data System (ADS)

Radio Occultation (RO) sensing is used to probe the Earth's atmosphere in order to obtain information about its physical properties. With the main interest in the parameters of the neutral atmosphere, there is the need to perform a correction of the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25 km to 30 km) it is even an important error source. In climate applications this could lead to a time-dependent bias which induces spurious trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. In order to tackle this problem we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001 to 2011. We observed that the nighttime bending angle bias stays constant over the whole period of 11 yr, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from about -0.05 µrad to -0.4 µrad. This behavior paves the way to correcting the solar cycle dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. Also in the simulated data we observed a similar increase in the bias from times of low to times of high solar activity. 
In this model world we performed the climatological ionospheric correction of the bending angle data, by using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.

Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.

2013-02-01

219

NASA Astrophysics Data System (ADS)

Radio occultation (RO) sensing is used to probe the earth's atmosphere in order to obtain information about its physical properties. With the main interest in the parameters of the neutral atmosphere, there is the need to perform a correction of the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25-30 km) it is even an important error source. In climate applications this could lead to a time-dependent bias which induces spurious trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. In order to tackle this problem, we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001-2011. We observed that the nighttime bending angle bias stays constant over the whole period of 11 yr, while the daytime bias increases from low to high solar activity. As a result, the difference between nighttime and daytime bias increases from about -0.05 µrad to -0.4 µrad. This behavior paves the way to correcting the solar cycle dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. Also in the simulated data we observed a similar increase in the bias from times of low to times of high solar activity. 
In this simulation we performed the climatological ionospheric correction of the bending angle data, by using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.

Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.

2013-08-01

220

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada (2000) on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
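The flavor of a multiplicative error model can be illustrated with a small weighted least squares sketch in which the measurement standard deviation is proportional to the true signal, and the weights are refreshed from the current fit. This is a simplified illustration in the spirit of the paper, not its derivations; the function name and values are assumptions:

```python
import numpy as np

def fit_line_multiplicative(x, y, sigma_rel, n_iter=5):
    """Iterated weighted LS fit of y = a + b*x under multiplicative noise:
    y_i = (a + b*x_i) * (1 + e_i) with sd(e_i) = sigma_rel, so that
    Var(y_i) is proportional to the squared signal.  Weights
    1/(sigma_rel * y_hat_i)**2 are rebuilt from the current fit."""
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # ordinary LS start
    for _ in range(n_iter):
        w = 1.0 / (sigma_rel * (A @ coef)) ** 2    # variance grows with signal
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
    return coef
```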

Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

2013-01-01

221

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada (2000) on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

2014-01-01

222

A-posteriori error estimation and adaptivity for elastoplasticity using the reciprocal theorem

We present a-posteriori error estimators and adaptive methods for the finite element approximation of elastoplasticity problems. A key ingredient of the method is the introduction of duality techniques, or in other notions the reciprocal theorem.

Cirak, Fehmi

223

Algorithms for Discrete Sequential Maximum Likelihood Bias Estimation and Associated Error Analysis

Optimization theory and discrete invariant imbedding are used to derive computationally efficient sequential algorithms for the maximum likelihood estimation of bias errors in linear discrete recursive filtering with noise-corrupted input observations and correlated plant and measurement noise. Error analysis algorithms are derived for adaptive and nonadaptive systems with bias and modeling errors. Examples demonstrate the efficiency of

Jin L. Lin; Andrew P. Sage

1971-01-01

224

Cleaning up systematic error in eye-tracking data by using required fixation locations.

In the course of running an eye-tracking experiment, one computer system or subsystem typically presents the stimuli to the participant and records manual responses, and another collects the eye movement data, with little interaction between the two during the course of the experiment. This article demonstrates how the two systems can interact with each other to facilitate a richer set of experimental designs and applications and to produce more accurate eye-tracking data. In an eye-tracking study, a participant is periodically instructed to look at specific screen locations, or explicit required fixation locations (RFLs), in order to calibrate the eye tracker to the participant. The design of an experimental procedure will also often produce a number of implicit RFLs--screen locations that the participant must look at within a certain window of time or at a certain moment in order to successfully and correctly accomplish a task, but without explicit instructions to fixate those locations. In these windows of time or at these moments, the disparity between the fixations recorded by the eye tracker and the screen locations corresponding to implicit RFLs can be examined, and the results of the comparison can be used for a variety of purposes. This article shows how the disparity can be used to monitor the deterioration in the accuracy of the eye tracker calibration and to automatically invoke a recalibration procedure when necessary. This article also demonstrates how the disparity will vary across screen regions and participants and how each participant's unique error signature can be used to reduce the systematic error in the eye movement data collected for that participant. PMID:12564562
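A minimal sketch of the disparity-based monitoring and correction described above. The function names are hypothetical, and where the article varies the error signature by screen region, this sketch uses a single participant-wide offset for brevity:

```python
import numpy as np

def error_signature(fixations, rfls):
    """Participant-specific systematic offset: mean gaze-minus-target
    disparity over moments when the participant must have been looking
    at an implicit required fixation location (RFL).  Both inputs are
    (N, 2) arrays of screen coordinates in pixels."""
    return np.mean(fixations - rfls, axis=0)

def needs_recalibration(signature, threshold_px=30.0):
    """Flag calibration deterioration when drift exceeds a threshold."""
    return float(np.hypot(*signature)) > threshold_px

def correct(fixations, signature):
    """Subtract the error signature to reduce systematic error."""
    return fixations - signature
```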

Hornof, Anthony J; Halverson, Tim

2002-11-01

225

NASA Technical Reports Server (NTRS)

One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed; rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case, only temperature is updated using a Gaussian covariance function; in the MvOI case, salinity, zonal and meridional velocities, as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI, an estimation of the model error statistics is made by Monte Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of the different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme, two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. 
While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when multivariate correction is used, as evident from the analyses of the rms differences of these fields and independent observations. The MvOI assimilation is found to improve upon the control run in generating the water masses with properties close to the observed, while the UOI failed to maintain the temperature and salinity structure.
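The univariate OI update with a Gaussian covariance function, as in the UOI case above, can be sketched on a one-dimensional grid. This is a schematic illustration with assumed names and parameters, not the study's assimilation code:

```python
import numpy as np

def oi_update(xb, obs, obs_idx, grid, L, sigma_b, sigma_o):
    """One analysis step of univariate optimal interpolation on a 1-D
    grid, with Gaussian background-error covariance
    B_ij = sigma_b**2 * exp(-0.5 * (r_ij / L)**2)."""
    r = grid[:, None] - grid[None, :]
    B = sigma_b ** 2 * np.exp(-0.5 * (r / L) ** 2)
    H = np.zeros((len(obs_idx), len(grid)))        # observation operator:
    H[np.arange(len(obs_idx)), obs_idx] = 1.0      # point observations
    R = sigma_o ** 2 * np.eye(len(obs_idx))        # obs-error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # OI (Kalman) gain
    return xb + K @ (obs - H @ xb)                 # analysis state
```

A single accurate observation pulls the analysis toward the observed value at its location and spreads the increment over roughly one correlation length L.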

Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

2004-01-01

226

Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors

NASA Astrophysics Data System (ADS)

Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate basic climatological features of these variables reasonably, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the west part of Africa than for the east part, and for the tropics than for northern Sahara. Interannual variation in the wet season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperatures. For all variables, multi-model ensemble (ENS) generally outperforms individual models included in ENS. An overarching conclusion in this study is that some model biases vary systematically for regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analysis and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.

Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice

2014-03-01

227

A methodology has been developed for the treatment of systematic errors that arise in the processing of sparse sensor data. A detailed application of this methodology to the construction, from wide-angle sonar sensor data, of navigation maps for use in autonomous robotic navigation is presented. In the methodology, a four-valued labeling scheme and a simple logic for label combination are

M. Beckerman; E. M. Oblow

1990-01-01

228

A POSTERIORI ERROR ESTIMATES FOR THE CRANK-NICOLSON METHOD

Crank-Nicolson-Galerkin reconstruction; a posteriori error analysis. The first author was partially supported by a 'Pythagoras' grant, the Development Host Site HPMD-CT-2001-00121, and the program Pythagoras of EPEAEK II. The third author

Akrivis, Georgios

229

Reliable random error estimation in the measurement of line-strength indices

We present a new set of accurate formulae for the computation of random errors in the measurement of atomic and molecular indices. The new expressions are in excellent agreement with numerical simulations. We have found that, in some cases, the use of approximated equations can give misleading line-strength index errors. It is important to note that accurate errors can only be achieved after a full control of the error propagation throughout the data reduction with a parallel processing of data and error frames. Finally, simple recipes for the estimation of the required signal-to-noise ratio to achieve a fixed index error are presented.

N. Cardiel; J. Gorgas; J. Cenarro; J. J. Gonzalez

1997-06-12

230

Areal measurement error with a dot planimeter: Some experimental estimates

NASA Technical Reports Server (NTRS)

A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured provides the entire correlation with accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.

Yuill, R. S.

1971-01-01

231

Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

NASA Astrophysics Data System (ADS)

The Storage Ring EDM Collaboration used the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a 17 mm thick carbon block placed close to the beam, so that white noise applied to upstream electrostatic plates increased the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance covering laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.

Imig, Astrid; Stephenson, Edward

2009-10-01

232

Goal-oriented error estimation based on equilibrated-flux reconstruction for finite element

We propose an approach for goal-oriented error estimation in finite elements based on equilibrated-flux reconstruction, covering the cases where a conforming finite element method, a dG method, or a mixed Raviart-Thomas method is used.

Paris-Sud XI, UniversitÃ© de

233

A posteriori error estimate for the symmetric coupling of finite elements and boundary elements

In this note we study a posteriori error estimates for a model problem in the symmetric coupling of boundary element and finite element methods. Emphasis is on the use of the Poincaré-Steklov operator and its discretization, which are analyzed in general for both a priori and a posteriori error estimates. Combining arguments from [6] and [9, 10] we refine the

C. Carstensen

1996-01-01

234

Error Analysis for Silhouette--Based 3D Shape Estimation from Multiple Views

This paper presents an error analysis for 3D shape estimation techniques using silhouettes from multiple views ("shape-from-silhouette"). The results of this analysis are useful for the integration

Wolfgang Niem

235

Correcting for Measurement Error in Individual Ancestry Estimates in Structured Association Tests

We present theoretical explanations and show through simulation that the individual admixture proportion estimates obtained by using ancestry informative markers should be seen as an error-contaminated measurement of the underlying individual ancestry proportion. These estimates can be used in structured association tests as a control variable to limit type I error inflation or reduce loss of power due to

Jasmin Divers; Laura K. Vaughan; Miguel Padilla; José R. Fernandez; David B. Allison; David T. Redden

2007-01-01

236

On the structure of error estimates for finite-difference methods

In this paper we study in an abstract setting the structure of estimates for the global (accumulated) error in semilinear finite-difference methods. We derive error estimates, which are the most refined ones (in a sense specified precisely in this paper) that are possible for the difference methods considered. Applications and (numerical) examples are presented in the following fields: 1. Numerical

M. N. Spijker

1971-01-01

237

Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
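A toy version of the approach described above: fit y = theta*c*x, where the experiment-wide constant c was produced by a prior experiment as c0 with uncertainty sigma_c, and is estimated jointly with theta rather than treated as exactly known. The alternating update scheme and all names here are illustrative assumptions, not the author's algorithm:

```python
import numpy as np

def ml_with_constant(x, y, c0, sigma_c, sigma_y, n_iter=20):
    """Maximum-likelihood fit of y = theta * c * x where the
    experiment-wide constant c carries its own uncertainty,
    c ~ N(c0, sigma_c**2), entering the likelihood as a prior term.
    Alternates closed-form updates of theta (LS given c) and c
    (quadratic minimum given theta)."""
    Sxx, Sxy = float((x * x).sum()), float((x * y).sum())
    c = c0
    for _ in range(n_iter):
        theta = Sxy / (c * Sxx)                            # LS update of theta | c
        c = (theta * Sxy / sigma_y ** 2 + c0 / sigma_c ** 2) / (
            theta ** 2 * Sxx / sigma_y ** 2 + 1.0 / sigma_c ** 2)  # update of c | theta
    return theta, c
```

Note that the data constrain only the product theta*c; the prior term is what keeps c identifiable, pulling it toward c0 in proportion to 1/sigma_c**2. Shrinking sigma_c recovers the usual "known constant" fit.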

Anderson, K.K.

1994-05-01

238

An hp-adaptivity and error estimation for hyperbolic conservation laws

NASA Technical Reports Server (NTRS)

This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

Bey, Kim S.

1995-01-01

239

NASA Technical Reports Server (NTRS)

Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving space-borne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measuring Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground level. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth-orbiting satellites, such as microwave imagers and dual-wavelength radars, as with the Global Precipitation Measurement (GPM) mission.
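The separation of systematic biases and random errors mentioned above can be sketched in its simplest form: given matched estimate/reference pairs, the mean difference is the systematic component and the residual scatter the random component. This is a minimal decomposition, not the study's full framework, and the synthetic numbers below are hypothetical.

```python
import math

def error_decomposition(estimates, reference):
    """Split estimate-minus-reference differences into a systematic bias
    and a residual random component (standard deviation)."""
    diffs = [e - r for e, r in zip(estimates, reference)]
    n = len(diffs)
    bias = sum(diffs) / n
    rand = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, rand

# synthetic matched pairs: estimates run 2 mm/h high with ~0.35 mm/h scatter
ref = [float(k % 10) for k in range(1000)]
est = [r + 2.0 + 0.5 * math.sin(7.0 * k) for k, r in enumerate(ref)]
bias, rand = error_decomposition(est, ref)
```

In practice the pairs would first be filtered by a radar quality index, as the abstract describes, so the decomposition is computed only over trustworthy reference pixels.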

Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

2011-01-01

240

The accuracy of the diagnosis obtained from a nuclear power plant fault-diagnostic advisor using neural networks is addressed in this paper in order to ensure the credibility of the diagnosis. A new error estimation scheme called error estimation by series association provides a measure of the accuracy associated with the advisor's diagnoses. This error estimation is performed by a secondary neural network that is fed both the input features for and the outputs of the advisor. The error estimation by series association outperforms previous error estimation techniques in providing more accurate confidence information with considerably reduced computational requirements. The authors demonstrate the extensive usability of their method by applying it to a complicated transient recognition problem of 33 transient scenarios. The simulated transient data at different severities consists of 25 distinct transients for the Duane Arnold Energy Center nuclear power station ranging from a main steam line break to anticipated transient without scram (ATWS) conditions. The fault-diagnostic advisor system with the secondary error prediction network is tested on the transients at various severity levels and degraded noise conditions. The results show that the error estimation scheme provides a useful measure of the validity of the advisor's output or diagnosis with considerable reduction in computational requirements over previous error estimation schemes.

Kim, K. [Korea Electric Power Research Inst., Taejon (Korea, Republic of). Automatic Control Group/Nuclear Control]; Bartlett, E.B. [Iowa State Univ., Ames, IA (United States)]

1996-08-01

241

Estimation of finite population parameters with auxiliary information and response error.

We use a finite population mixed model that accommodates response error in the survey variable of interest and auxiliary information to obtain optimal estimators of population parameters from data collected via simple random sampling. We illustrate the method with the estimation of a regression coefficient and conduct a simulation study to compare the performance of the empirical version of the proposed estimator (obtained by replacing variance components with estimates) with that of the least squares estimator usually employed in such settings. The results suggest that when the auxiliary variable distribution is skewed, the proposed estimator has a smaller mean squared error. PMID:25089123
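The paper's optimal mixed-model estimator is not reproduced here, but the problem it addresses is easy to simulate: response error in the survey variable leaves the ordinary least-squares slope unbiased while inflating its variance, which is what motivates estimators that exploit auxiliary information. A minimal Monte Carlo sketch, with a skewed (exponential) auxiliary variable as in the paper's setting:

```python
import random

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def sd(v):
    m = sum(v) / len(v)
    return (sum((u - m) ** 2 for u in v) / (len(v) - 1)) ** 0.5

random.seed(42)
beta, n, reps = 2.0, 200, 300
clean, noisy = [], []
for _ in range(reps):
    x = [random.expovariate(1.0) for _ in range(n)]        # skewed auxiliary variable
    y = [beta * a + random.gauss(0.0, 0.5) for a in x]     # true survey variable
    y_obs = [b + random.gauss(0.0, 1.5) for b in y]        # with response error
    clean.append(ols_slope(x, y))
    noisy.append(ols_slope(x, y_obs))
```

Both sampling distributions center on the true slope, but the response-error version is several times more variable, so an estimator with smaller mean squared error has real room to help.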

González, L M; Singer, J M; Stanek, E J

2014-10-01

242

DETECTABILITY AND ERROR ESTIMATION IN ORBITAL FITS OF RESONANT EXTRASOLAR PLANETS

We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than approximately 4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist, but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and number of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.

Giuppone, C. A.; Beauge, C. [Observatorio Astronomico, Universidad Nacional de Cordoba, Cordoba (Argentina); Tadeu dos Santos, M.; Ferraz-Mello, S.; Michtchenko, T. A. [Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Sao Paulo (Brazil)

2009-07-10

243

Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches

In corporate finance and asset pricing empirical work, researchers are often confronted with panel data. In these data sets, the residuals may be correlated across firms or across time, and OLS standard errors can be biased. Historically, researchers in the two literatures have used different solutions to this problem. This paper examines the different methods used in the literature and
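The core of the problem can be shown numerically: when both the regressor and the residual are correlated within firms, classical OLS standard errors understate the true sampling variability, and the firm-clustered "sandwich" standard error corrects this. The simulation below is a minimal sketch of that comparison (simple regression through the origin, one-way firm clustering), not a reproduction of the paper's full survey of methods.

```python
import random

random.seed(7)
G, T = 60, 10                       # firms x years
x, e, cluster = [], [], []
for g in range(G):
    cg = random.gauss(0.0, 1.0)     # firm effect in the regressor
    fg = random.gauss(0.0, 1.0)     # firm effect in the residual
    for _ in range(T):
        x.append(cg + random.gauss(0.0, 0.5))
        e.append(fg + random.gauss(0.0, 0.5))
        cluster.append(g)

beta = 1.0
y = [beta * xi + ei for xi, ei in zip(x, e)]

sxx = sum(xi * xi for xi in x)
bhat = sum(xi * yi for xi, yi in zip(x, y)) / sxx     # OLS through the origin
resid = [yi - bhat * xi for xi, yi in zip(x, y)]

n = len(x)
s2 = sum(r * r for r in resid) / (n - 1)
se_ols = (s2 / sxx) ** 0.5                            # classical OLS standard error

# cluster-robust (firm-clustered) sandwich standard error
scores = [0.0] * G
for xi, ri, g in zip(x, resid, cluster):
    scores[g] += xi * ri
se_cluster = (sum(s * s for s in scores) ** 0.5) / sxx
```

With strong firm components in both x and the residual, the clustered standard error comes out a multiple of the OLS one, which is exactly the bias the abstract warns about. Production work would typically use a library routine (e.g. a cluster-robust covariance option in a regression package) rather than this hand-rolled version.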

Mitchell A. Petersen

2009-01-01

244

Mesoscale predictability and background error covariance estimation through ensemble forecasting

Table-of-contents fragment: 3.3 The "Fake-Dry" Experiment; 3.4 Summary; IV Background Error Covariance: 4.1 Introduction; 4.2 Cross Covariance; 4.3 Correlation; 4.4 Spatial Covariance; 4.5 Cross-Spatial Covariance; 4.6 Summary ...

Ham, Joy L

2012-06-07

245

Error Estimates Derived from the Data for Least-Squares Spline Fitting

The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
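Cubic-spline machinery aside, the core idea (estimate the signal-dependent error from the difference between two fits at different mesh sizes) can be sketched with a simpler local-mean smoother standing in for the spline fit. The scale factor below is specific to that stand-in, derived from the second-order bias of a discrete moving average, and is not the paper's formula.

```python
import math

def smooth(y, w):
    """Local mean over a window of half-width w (truncated at the ends)."""
    n = len(y)
    return [sum(y[max(0, i - w):min(n, i + w + 1)]) / (min(n, i + w + 1) - max(0, i - w))
            for i in range(n)]

n = 801
sig = [math.sin(4 * math.pi * i / (n - 1)) for i in range(n)]  # noiseless test signal
w1, w2 = 16, 8
f1, f2 = smooth(sig, w1), smooth(sig, w2)                      # coarse and fine "mesh"

# signal-distortion (F-) error of the coarse fit, estimated from the two fits;
# the scale comes from the moving average's bias being proportional to w(w+1)
scale = w1 * (w1 + 1) / (w1 * (w1 + 1) - w2 * (w2 + 1))
est = [scale * abs(a - b) for a, b in zip(f1, f2)]
true = [abs(a - s) for a, s in zip(f1, sig)]
```

Away from the boundaries the estimate tracks the true signal-dependent error closely, mirroring the paper's point that the two-fit difference reveals the F-error without prior knowledge of the signal.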

Jerome Blair

2007-06-25

246

Efficient Small Area Estimation in the Presence of Measurement Error in Covariates

List-of-figures fragment: absolute value of the bias for the four estimators yi, eYiS, bYiME, bYiSIMEX when the number of small areas is 100 (measurement error variance Ci = 3, sigma^2_v = 4) and when the number of small areas is 50 (Ci = 2, sigma^2_v = 4); k is the percentage of areas having auxiliary information measured with error ...

Singh, Trijya

2012-10-19

247

Space-Time Error Representation and Estimation in Navier-Stokes Calculations

NASA Technical Reports Server (NTRS)

The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
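The error representation underlying this framework can be stated compactly for the linear case. This is the standard dual-weighted-residual identity in the tradition of Becker and Rannacher, written here under the simplifying assumptions of a linear variational form a(·,·) with data F, target functional J, and Galerkin solution u_h:

```latex
J(u) - J(u_h) \;=\; F(z) - a(u_h, z) \;=\; R(u_h; z),
\qquad \text{where } z \text{ solves } a(v, z) = J(v) \quad \forall v .
```

By Galerkin orthogonality z may be replaced by z - z_h for any discrete z_h, which is what makes weighted-residual approximations of this exact formula computable in practice.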

Barth, Timothy J.

2006-01-01

248

NASA Astrophysics Data System (ADS)

In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_theta, which is based on the spatial derivative of velocity direction fields, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate-tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables with the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator has a better performance than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_theta can achieve the best matching outcome with the true theta field, and that it is asymptotic to the true theta variation field, with a promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element class and node class can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. 
The adaptive scheme is applied to viscous incompressible flow at different Reynolds numbers. It is found that the velocity angle error estimator can detect most flow characteristics and produce dense grids in the regions where flow velocity directions have abrupt changes. In addition, the e_theta estimator makes the derivative error dilutely distribute in the whole computational domain and also allows the refinement to be conducted at regions of high error. Through comparison of the velocity angle error across the interface with neighbouring cells, it is verified that the adaptive scheme in using e_theta provides an optimum mesh which can clearly resolve local flow features in a precise way. The adaptive results justify the applicability of the e_theta estimator and prove that this error estimator is a valuable adaptive indicator for the automatic refinement of unstructured grids.

Wu, Heng

2000-10-01

249

Estimation of dynamic alignment errors in shipboard firecontrol systems

A problem in fire control systems is the estimation of the relative alignment between remotely located target sensor and weapon coordinate frames. This paper describes and analyzes a shipboard system concept for estimating initial misalignments and compensating for ship dynamic bending and flexure. The concept uses miniature strapdown inertial sensor assemblies at remote sensor/weapon stations to monitor instantaneous differences in rotational

B. H. Browne; D. H. Lackowski

1976-01-01

250

A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

NASA Technical Reports Server (NTRS)

This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

Barth, Timothy

2004-01-01

251

Error estimation and adaptive mesh refinement for parallel analysis of shell structures

NASA Technical Reports Server (NTRS)

The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

1994-01-01

252

On the accuracy of estimating pest insect abundance from data with random error

... of the estimate in order to have confidence about the management decision. Knowledge of the accuracy ... as the estimate becomes more accurate. Evaluation is based on the results of sampling and its accuracy depends ...

Petrovskaya, Natalia B.

253

Multiclass Bayes error estimation by a feature space sampling technique

NASA Technical Reports Server (NTRS)

A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported, and to 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
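The paper's algorithm performs an analytic/numerical integration for M classes and N features; the sketch below checks only the simplest case it must reduce to, two equal-prior 1-D Gaussian classes, where the minimum probability of error has a closed form. A Monte Carlo estimate against that closed form is the kind of baseline one might use to validate such an algorithm.

```python
import math, random

def gauss_2class_bayes_error(mu0, mu1, sigma):
    """Closed-form Bayes error for two equal-prior 1-D Gaussians with a
    common standard deviation."""
    d = abs(mu1 - mu0) / (2.0 * sigma)
    return 0.5 * (1.0 - math.erf(d / math.sqrt(2.0)))

random.seed(0)
mu0, mu1, sigma, n = 0.0, 2.0, 1.0, 100000
thresh = 0.5 * (mu0 + mu1)          # minimum-error boundary for equal priors
errors = 0
for _ in range(n):
    if random.random() < 0.5:
        errors += random.gauss(mu0, sigma) > thresh
    else:
        errors += random.gauss(mu1, sigma) <= thresh
mc = errors / n
exact = gauss_2class_bayes_error(mu0, mu1, sigma)
```

For means two standard deviations apart, both routes give a Bayes error near 0.159; the paper's contribution is doing the equivalent computation for general M-class, N-feature Gaussian problems without sampling.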

Mobasseri, B. G.; Mcgillem, C. D.

1979-01-01

254

The Laser Atmospheric Wind Sounder (LAWS) Preliminary Error Budget and Performance Estimate

NASA Technical Reports Server (NTRS)

The Laser Atmospheric Wind Sounder (LAWS) study phase has resulted in a preliminary error budget and an estimate of the instrument performance. This paper will present the line-of-sight (LOS) Velocity Measurement Error Budget, the instrument Boresight Error Budget, and the predicted signal-to-noise ratio (SNR) performance. The measurement requirements and a preliminary design for the LAWS instrument are presented in a companion paper.

Kenyon, David L.; Anderson, Kent

1992-01-01

255

A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

NASA Technical Reports Server (NTRS)

This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

Larson, Mats G.; Barth, Timothy J.

1999-01-01

256

Improved estimates of coordinate error for molecular replacement.

The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21,000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates. PMID:24189232

Oeffner, Robert D; Bunkóczi, Gábor; McCoy, Airlie J; Read, Randy J

2013-11-01

257

Error estimated driven anisotropic mesh refinement for three-dimensional diffusion simulation

We present a computational method for locally adapted conformal anisotropic tetrahedral mesh refinement. The element size is determined by an anisotropy function which is governed by an error estimation driven ruler according to an adjustable maximum error. Anisotropic structures are taken into account to reduce the amount of elements compared to strict isotropic refinement. The spatial resolution in three-dimensional unstructured

Wilfried Wessner; Clemens Heitzinger; A. Hossinger; S. Selberherr

2003-01-01

258

Efficient Interpolation-based Error Estimation for 1D Time-Dependent PDE Collocation Codes

based on B-spline collocation. BACOL generates the spatial error estimate by computing two global collocation solutions to the PDEs, one based on B-splines of degree p and the other on B-splines of degree p+1; the difference between the two solutions yields an estimate equivalent to the error of the collocation solution. We implement these new schemes within a modified version ...

259

Errors in estimating tree age: implications for studies of stand dynamics

Errors in estimates of tree ages from increment cores can influence age-class distributions, affecting inferences about forest dynamics. We compare methods of height correction of increment cores taken above ground level by examining how resulting errors affect age-class distributions of ponderosa pine (Pinus ponderosa Dougl. ex P. & C. Laws.) and Douglas-fir (Pseudotsuga menziesii var. glauca (Beissn.)
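One common height-correction method adds the estimated years the seedling took to reach coring height. A minimal sketch, with a constant early height-growth rate as a deliberately simple (hypothetical) assumption; the paper's point is precisely that the errors introduced by such corrections propagate into age-class distributions.

```python
def corrected_age(rings_at_core_height, core_height_m, early_growth_m_per_yr):
    """Ring count at coring height plus the estimated years to reach that
    height; early_growth_m_per_yr is a hypothetical species/site-specific rate."""
    years_to_core_height = core_height_m / early_growth_m_per_yr
    return rings_at_core_height + round(years_to_core_height)

# a breast-height (1.3 m) core with 187 rings, assuming 0.1 m/yr early growth
age = corrected_age(187, 1.3, 0.1)
```

An error of a few years in the assumed growth rate shifts every corrected age by the same order, which is how correction errors reshape whole age-class distributions.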

Carmen M. Wong; Ken P. Lertzman

2001-01-01

260

Errors and parameter estimation in precipitation-runoff modeling 2. Case study.

A case study is presented which illustrates some of the error analysis, sensitivity analysis, and parameter estimation procedures reviewed in the first part of this paper. It is shown that those procedures, most of which come from statistical nonlinear regression theory, are invaluable in interpreting errors in precipitation-runoff modeling and in identifying appropriate calibration strategies. -Author

Troutman, B.M.

1985-01-01

261

We propose a novel statistical method for estimating gene networks based on microarray gene expression data together with information from biological knowledge databases. Although a large amount of gene regulation information has already been stored in some biological databases, there are still errors and missing facts due to experimental problems and human errors. Therefore, we cannot blindly use them for

Seiya Imoto; Tomoyuki Higuchi; Takao Goto; Satoru Miyano

2006-01-01

262

Estimation of the Mutation Rate during Error-prone Polymerase Chain Reaction

Error-prone polymerase chain reaction (PCR) is widely used to introduce point mutations during in vitro evolution ... step of in vitro evolution is mutagenesis. Error-prone polymerase chain reaction (PCR) (Leung et al. ...)

Sun, Fengzhu

263

SUMMARY In this paper, we first present a consistent procedure to establish influence functions for the finite element analysis of shell structures, where the influence function can be for any linear quantity of engineering interest. We then design some goal-oriented error measures that take into account the cancellation effect of errors over the domain to overcome the issue of over-estimation.

Thomas Grätsch; Klaus-Jürgen Bathe

2005-01-01

264

A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...

Locatelli, R.

265

Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

ERIC Educational Resources Information Center

Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

Olejnik, Stephen F.; Algina, James

1987-01-01

266

Fading MIMO Relay Channels with Channel Estimation Error

Fading MIMO Relay Channels with Channel Estimation Error. Bengi Aygün, Alkan Soysal. Department ... aygun@bahcesehir.edu.tr, alkan.soysal@bahcesehir.edu.tr. Abstract -- In this paper, we consider a full-duplex, decode- and

Soysal, Alkan

267

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

2006-10-01

268

The implicit, element residual method for a posteriori error estimation in FE-BI analysis

Hybrid finite element-boundary integral (FE-BI) formulations are widely used for electromagnetic analysis. In such analysis, estimation of the mesh-wide error distribution and global solution quality may be used to drive adaptive refinements. In this paper, an a posteriori error estimator based on the implicit, element residual method is constructed for the cavity FE-BI formulation, with dominant-mode coaxial port excitation. It

Matthys M. Botha; David B. Davidson

2006-01-01

269

Error Estimates in Horocycle Averages Asymptotics: Challenges from String Theory

There is an intriguing connection between the dynamics of the horocycle flow in the modular surface $SL_{2}(\pmb{Z}) \backslash SL_{2}(\pmb{R})$ and the Riemann hypothesis. It appears in the error term for the asymptotic of the horocycle average of a modular function of rapid decay. We study whether similar results occur for a broader class of modular functions, including functions of polynomial growth, and of exponential growth at the cusp. Hints on their long horocycle average are derived by translating the horocycle flow dynamical problem in string theory language. Results are then proved by designing an unfolding trick involving a Theta series, related to the spectral Eisenstein series by Mellin integral transform. We discuss how the string theory point of view leads to an interesting open question, regarding the behavior of long horocycle averages of a certain class of automorphic forms of exponential growth at the cusp.

Matteo A. Cardella

2010-12-13

270

Gap filling strategies and error in estimating annual soil respiration

Technology Transfer Automated Retrieval System (TEKTRAN)

Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...

271

The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error

Objective To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908
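The direction of the bias the study reports can be reproduced schematically: if some adverse events are miscoded as complication-free, the "no adverse event" comparison group is contaminated with expensive admissions and the estimated attributable cost shrinks. The simulation below is a bare-bones group-mean comparison with hypothetical cost numbers, not the study's propensity-matched design.

```python
import random

random.seed(3)
n = 20000
true_ae = [random.random() < 0.1 for _ in range(n)]      # 10% adverse events
cost = [random.gauss(30000 if ae else 15000, 3000) for ae in true_ae]

def attributable_cost(flags):
    """Mean cost difference between flagged and unflagged admissions."""
    ae = [c for c, f in zip(cost, flags) if f]
    ok = [c for c, f in zip(cost, flags) if not f]
    return sum(ae) / len(ae) - sum(ok) / len(ok)

perfect = attributable_cost(true_ae)
# miscode 20% of true adverse events as complication-free
coded = [f and random.random() > 0.2 for f in true_ae]
biased = attributable_cost(coded)
```

The coded estimate comes in below the error-free one, mirroring the study's finding that ignoring the likelihood of coding error leads to underestimated adverse event costs.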

Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G

2012-01-01

272

This powerpoint presentation to be presented at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses systematic review and harmonization of life cycle GHG emission estimates for electricity generation technologies.

Heath, G.

2012-06-01

273

NASA Astrophysics Data System (ADS)

Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network and four statistical techniques: the Nonlinear Autoregressive with eXogenous input (NARX) network, Gamma test, cross-validation, Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purpose. The results demonstrate that the NARX network (Root mean square error (RMSE): 95.11 µg l-1 for training; 106.13 µg l-1 for validation) outperforms the BPNN (RMSE: 121.54 µg l-1 for training; 143.37 µg l-1 for validation). The constructed SDM can provide reliable estimation (R2 > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 µg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. 
The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the estimation of missing, hazardous or costly data to facilitate water resources management.

Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min

2013-08-01

274

Statistical Error in a Chord Estimator of Correlation Dimension: the ``RULE of Five''

NASA Astrophysics Data System (ADS)

The statistical precision of a chord method for estimating fractal dimension from a correlation integral is derived. The optimal chord length is determined, and a comparison is made to other estimators. These calculations use the approximation that all pairwise distances between the points are statistically independent; the adequacy of this approximation is assessed numerically. The chord method provides a very quick and easy dimension estimate which is only slightly less precise than the optimal estimator. Keywords: correlation dimension, statistical error
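The chord method fits a straight line through two points of the log correlation integral. A minimal sketch on a set of known dimension (uniform points in the unit square, so the true correlation dimension is 2), with the chord's radii about a factor of five apart; the exact optimal chord length is the subject of the paper, and the slight underestimate below comes from edge effects of the finite square, an artifact of this toy setup rather than of the method.

```python
import math, random

random.seed(5)
pts = [(random.random(), random.random()) for _ in range(500)]  # dimension-2 set

def corr_integrals(radii):
    """C(r) = fraction of point pairs closer than r, for several radii at once."""
    counts = [0] * len(radii)
    total = 0
    for i in range(len(pts)):
        xi, yi = pts[i]
        for j in range(i + 1, len(pts)):
            dx, dy = xi - pts[j][0], yi - pts[j][1]
            d2 = dx * dx + dy * dy
            total += 1
            for k, r in enumerate(radii):
                counts[k] += d2 < r * r
    return [c / total for c in counts]

r1, r2 = 0.05, 0.25                 # chord endpoints a factor of five apart
c1, c2 = corr_integrals([r1, r2])
d_est = math.log(c2 / c1) / math.log(r2 / r1)   # slope of the chord
```

The estimate lands near 2, and the paper's contribution is quantifying the statistical precision of exactly this kind of two-point slope and choosing the chord length that optimizes it.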

Theiler, James; Lookman, Turab

1993-06-01

275

When calculating numerical solutions of the neutron transport equation it is important to have a measure of the accuracy of the solution. As the true solution is generally not known, a suitable estimation of the error must be made. The steady state transport equation possesses discretization errors in all its independent variables: angle, energy and space. In this work only spatial discretization errors are considered. An exact transport solution, in which the degree of regularity of the exact flux across the singular characteristic is controlled, is manufactured to determine the numerical solution's true discretization error. This solution is then projected onto a Legendre polynomial space to form an exact solution on the same basis space as the numerical Discontinuous Galerkin Finite Element Method (DGFEM) solution, enabling computation of the true error. Over a series of test problems the true error is compared to the error estimated by the Ragusa and Wang (RW), residual source (LER) and cell discontinuity (JD) estimators. The validity and accuracy of the considered estimators are primarily assessed by considering the effectivity index and global L2 norm of the error. In general, RW excels at approximating the true error distribution but usually under-estimates its magnitude; the LER estimator emulates the true error distribution but frequently over-estimates the magnitude of the true error; the JD estimator poorly captures the true error distribution and generally under-estimates the error about singular characteristics but over-estimates it elsewhere. (authors)
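The effectivity index used to judge the estimators is simply the ratio of the estimated to the true error norm. A minimal sketch with made-up per-cell errors (not from the study):

```python
import math

def l2_norm(values):
    return math.sqrt(sum(v * v for v in values))

def effectivity_index(estimated_error, true_error):
    """Ratio of estimated to true error norms; 1.0 means a perfect estimator,
    below 1.0 means the estimator under-estimates the error magnitude."""
    return l2_norm(estimated_error) / l2_norm(true_error)

# Hypothetical per-cell discretization errors, illustration only.
true_err = [0.02, -0.01, 0.03, 0.005]
est_err = [0.015, -0.012, 0.025, 0.004]
print(round(effectivity_index(est_err, true_err), 3))  # -> 0.842
```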

O'Brien, S.; Azmy, Y. Y. [North Carolina State University, Raleigh, NC 27695 (United States)

2013-07-01

276

Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution (CPD) method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, CPD-computed probabilities for that small no-alibi interval increase with increasing input deviation; otherwise, the CPD-computed probabilities decrease. We therefore advise against using the CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimates still overlap the true death time interval. PMID:24662512
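The CPD idea can be sketched as a death-time estimate truncated to the externally known interval. The numbers below are hypothetical, and a simple normal model stands in for the Biermann-Potente formulation:

```python
import math

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cpd_probability(a, b, t_min, t_max, mu, sigma):
    """P(death in [a, b] | death in [t_min, t_max]) for a normal death-time
    estimate with mean mu and SD sigma, truncated to the true interval."""
    num = norm_cdf(b, mu, sigma) - norm_cdf(a, mu, sigma)
    den = norm_cdf(t_max, mu, sigma) - norm_cdf(t_min, mu, sigma)
    return num / den

# Hours relative to discovery (hypothetical): victim last seen at -20 h,
# found at 0 h; method estimate mu = -10 h, sigma = 2 h; no-alibi window [-12, -8].
p = cpd_probability(-12.0, -8.0, -20.0, 0.0, -10.0, 2.0)
print(round(p, 3))  # -> 0.683
```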

Hubig, Michael; Muggenthaler, Holger; Mall, Gita

2014-05-01

277

Estimating biases and error variances through the comparison of coincident satellite measurements

NASA Astrophysics Data System (ADS)

A framework for the statistical comparison of sets of coincident remote sounding measurements is presented, which distinguishes between additive and multiplicative biases. The relationship between multiplicative bias and error variance is explored, and three methods are proposed for producing sets of values for three comparison variables: the multiplicative bias, and the error variance for each of two instruments. We illustrate and compare the three methods through the comparison of coincident measurements of the relatively long-lived stratospheric species O3, N2O, and HNO3 from two independent measurement sets: version 2.2 retrievals (with updated O3) from the Atmospheric Chemistry Experiment-Fourier transform spectrometer onboard SCISAT-1, and version 1.51 retrievals from the Earth Observing System Microwave Limb Sounder onboard Aura. We find that multiplicative bias between the two measurement sets, compared on a common vertical grid, is significant at some heights for O3 and N2O, and for all heights tested for HNO3. The most realistic estimates of measurement error are produced by a method which incorporates a third correlative data set into the analysis. Using this method, estimated error standard deviations (SDs) are comparable between the two instruments for O3 measurements, and are less than 10% of the mean measurement value between approximately 100 and 1 hPa. ACE N2O measurements are consistent with a 10% error SD at all heights tested, although the uncertainty of the estimates is large at heights above 5 hPa. Estimated MLS N2O error SDs are comparable with those for ACE in the lower stratosphere, but increase steeply with height. For HNO3, estimated error SDs are approximately 10% between 70 and 10 hPa for both instruments. At heights above 10 hPa and below 100 hPa, estimated ACE errors are significantly smaller than those for MLS.
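The third-correlative-data-set method is in the spirit of the classic "three-cornered hat": with three co-located measurement sets whose errors are assumed independent, each instrument's error variance follows from the pairwise difference variances. A minimal sketch with synthetic data, not the paper's exact estimator (which also separates multiplicative bias):

```python
import math
import random

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def three_cornered_hat(a, b, c):
    """Error variances from pairwise difference variances, assuming
    independent errors: var_a = (V_ab + V_ac - V_bc) / 2, etc."""
    v_ab = variance([x - y for x, y in zip(a, b)])
    v_ac = variance([x - y for x, y in zip(a, c)])
    v_bc = variance([x - y for x, y in zip(b, c)])
    return ((v_ab + v_ac - v_bc) / 2.0,
            (v_ab + v_bc - v_ac) / 2.0,
            (v_ac + v_bc - v_ab) / 2.0)

random.seed(1)
truth = [300.0 + 10.0 * math.sin(i / 5.0) for i in range(2000)]  # common signal
a = [t + random.gauss(0.0, 1.0) for t in truth]  # instrument A: error SD 1
b = [t + random.gauss(0.0, 2.0) for t in truth]  # instrument B: error SD 2
c = [t + random.gauss(0.0, 3.0) for t in truth]  # instrument C: error SD 3
print(three_cornered_hat(a, b, c))  # approximately (1, 4, 9)
```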

Toohey, M.; Strong, K.

2007-07-01

278

The objective of this study was to evaluate and understand the systematic error between the planned three-dimensional (3D) dose and the delivered dose to the patient in scanning beam proton therapy for lung tumors. Single-field and multi-field optimized scanning beam proton therapy plans were generated for 10 patients with stage II–III lung cancer with a mix of tumor motion and size. 3D doses in CT data sets for different respiratory phases and the time-weighted average CT, as well as the four-dimensional (4D) doses, were computed for both plans. The 3D and 4D dose differences for the targets and different organs at risk were compared using dose volume histogram (DVH) and voxel-based techniques, and correlated with the extent of tumor motion. The gross tumor volume (GTV) dose was maintained in all 3D and 4D doses using the internal GTV override technique. The DVH and voxel-based techniques are highly correlated. The mean dose error and the standard deviation of dose error for all target volumes were both less than 1.5% for all but one patient. However, the point dose difference between the 3D and 4D doses was up to 6% for the GTV and greater than 10% for the clinical and planning target volumes. Changes in the 4D and 3D doses were not correlated with tumor motion. The planning technique (single-field or multi-field optimized) did not affect the observed systematic error. In conclusion, the dose error in 3D dose calculation varies from patient to patient and does not correlate with lung tumor motion. Therefore, patient-specific evaluation of the 4D dose is important for scanning beam proton therapy for lung tumors. PMID:25207565

Li, Heng; Liu, Wei; Park, Peter; Matney, Jason; Liao, Zhongxing; Chang, Joe; Zhang, Xiaodong; Li, Yupeng; Zhu, Ronald X

2014-01-01


280

NASA Astrophysics Data System (ADS)

Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application in geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation error, which is why multitemporal interferometric techniques using series of interferograms have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS and SB (Persistent Scatterers and Small Baselines, respectively), provides an estimate of its precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). The method uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. The deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
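The Monte Carlo propagation step can be sketched as resampling the observations with their estimated noise and re-solving for the unknowns. The example below propagates per-epoch errors into a linear deformation rate, a toy stand-in for the SB time-series inversion:

```python
import random

def fit_rate(times, values):
    """Ordinary least-squares slope (deformation rate) of a time series."""
    n = len(times)
    tm = sum(times) / n
    vm = sum(values) / n
    num = sum((t - tm) * (v - vm) for t, v in zip(times, values))
    den = sum((t - tm) ** 2 for t in times)
    return num / den

def monte_carlo_rate_sd(times, values, sigma, n_draws=500):
    """Propagate per-epoch measurement errors (SD sigma) into the estimated
    rate by re-solving on noise-perturbed copies of the observations."""
    rates = []
    for _ in range(n_draws):
        noisy = [v + random.gauss(0.0, sigma) for v in values]
        rates.append(fit_rate(times, noisy))
    m = sum(rates) / len(rates)
    return (sum((r - m) ** 2 for r in rates) / (len(rates) - 1)) ** 0.5

random.seed(2)
times = [float(t) for t in range(10)]   # years
los = [-5.0 * t for t in times]         # mm, a -5 mm/yr subsidence signal
print(monte_carlo_rate_sd(times, los, sigma=2.0))  # analytic value ~0.22 mm/yr
```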

GonzáLez, Pablo J.; FernáNdez, José

2011-10-01

281

NASA Astrophysics Data System (ADS)

Estimates of effective parameters for unsaturated flow models are typically based on observations taken on length scales smaller than the modeling scale. This complicates parameter estimation for heterogeneous soil structures. In this paper we attempt to account for soil structure not present in the flow model by using so-called external error models, which correct for bias in the likelihood function of a parameter estimation algorithm. The performance of external error models is investigated using data from three virtual reality experiments and one real world experiment. All experiments are multistep outflow and inflow experiments in columns packed with two sand types with different structures. First, effective parameters for equivalent homogeneous models for the different columns were estimated using soil moisture measurements taken at a few locations. This resulted in parameters that had a low predictive power for the averaged states of the soil moisture if the measurements did not adequately capture a representative elementary volume of the heterogeneous soil column. Second, parameter estimation was performed using error models that attempted to correct for bias introduced by soil structure not taken into account in the first estimation. Three different error models that required different amounts of prior knowledge about the heterogeneous structure were considered. The results showed that the introduction of an error model can help to obtain effective parameters with more predictive power with respect to the average soil water content in the system. This was especially true when the dynamic behavior of the flow process was analyzed.

Erdal, D.; Neuweiler, I.; Huisman, J. A.

2012-06-01

282

NASA Astrophysics Data System (ADS)

A visually servoed paired structured light system (ViSP) has been found useful in estimating 6-DOF relative displacement. The system is composed of two screens facing each other, each with one or two lasers, a 2-DOF manipulator and a camera. The displacement between the two sides is estimated by observing the positions of the projected laser beams and the rotation angles of the manipulators. To apply the system to massive structures, the whole area is partitioned and a ViSP module is placed in each partition in a cascaded manner. The estimated displacement between adjoining ViSPs is combined with the next partition so that the entire movement of the structure can be estimated. Cascaded ViSPs, however, have a major problem: the error propagates through the partitions. Therefore, a displacement estimation error back-propagation (DEEP) method is proposed, which uses a Newton-Raphson or gradient-descent formulation inspired by the error back-propagation algorithm. In this method, the estimated displacement from each ViSP is updated using the error back-propagated from a fixed position. To validate the performance of the proposed method, various simulations and experiments have been performed. The results show that the proposed method significantly reduces the propagation error throughout the multiple modules.
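The error back-propagation idea can be sketched with the gradient-descent variant: the mismatch between the cascaded sum and a known fixed position is driven to zero while being distributed over the modules. A toy scalar sketch (the actual DEEP updates full 6-DOF displacements):

```python
def deep_correct(displacements, fixed_total, lr=0.2, n_iter=200):
    """Gradient descent on J = (sum(d) - fixed_total)**2, distributing the
    propagated error evenly over the cascaded module estimates."""
    d = list(displacements)
    for _ in range(n_iter):
        err = sum(d) - fixed_total   # residual at the fixed reference
        grad = 2.0 * err             # dJ/dd_i is the same for every module
        d = [di - lr * grad / len(d) for di in d]
    return d

# Three cascaded modules whose estimates sum to 3.2 against a known total of 3.0.
corrected = deep_correct([1.2, 0.9, 1.1], fixed_total=3.0)
print([round(x, 4) for x in corrected])  # -> [1.1333, 0.8333, 1.0333]
```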

Jeon, H.; Shin, J. U.; Myung, H.

2013-04-01

283

In order to estimate the possible errors introduced by deviations from the spherical particle shape in Stokes' law estimates of dust particle sizes, a model study was made with objects of various shapes falling in oil of high viscosity. It was found that all shapes fall more slowly than the sphere of the same mass and volume. The true size
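Stokes' law size estimates of the kind referred to above invert the terminal-velocity relation for an equivalent sphere. A minimal sketch with illustrative values for quartz dust settling in air (not the model-study fluids):

```python
import math

def stokes_diameter(v, mu, rho_p, rho_f, g=9.81):
    """Equivalent Stokes diameter (m) of a sphere settling at terminal
    velocity v (m/s) in a fluid of viscosity mu (Pa*s):
    v = (rho_p - rho_f) * g * d**2 / (18 * mu)
    =>  d = sqrt(18 * mu * v / ((rho_p - rho_f) * g))."""
    return math.sqrt(18.0 * mu * v / ((rho_p - rho_f) * g))

# Quartz dust sphere settling in air (illustrative values).
d = stokes_diameter(v=0.003, mu=1.8e-5, rho_p=2650.0, rho_f=1.2)
print(round(d * 1e6, 1))  # diameter in micrometres -> 6.1
```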

Wulf B. Kunkel

1948-01-01

284

Further comments on the estimation of error with the Gustafson dental age estimation method

Lucy, D.; Pollard, A.M., Department of Archaeological Sciences, University of Bradford. Journal of Forensic Sciences, 1995, 40(2). Abstract: Many researchers in the field of forensic odontology have questioned the error

Lucy, David

285

TEA: Transmission Error Approximation for Distance Estimation between Two Zigbee Devices

This paper proposes a simple and cost-effective method named Transmission Error Approximation (TEA) for estimating the distance between two Zigbee devices. The idea is to measure and statistically analyze packet loss rates for approximate distance estimation. We have implemented an experimental prototype of TEA using the Zigbee protocol. Measurement results show that TEA is a cost-effective way of distance
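The TEA idea of mapping a measured packet loss rate to a distance can be sketched as inverting a per-environment calibration curve. The table below is entirely hypothetical; real loss-vs-distance curves must be measured for the actual deployment:

```python
def estimate_distance(loss_rate, table):
    """Piecewise-linear inversion of a (distance_m, loss_rate) calibration
    table: find the distance whose calibrated loss matches the measurement."""
    pts = sorted(table, key=lambda p: p[1])  # sort by loss rate
    if loss_rate <= pts[0][1]:
        return pts[0][0]
    for (d0, l0), (d1, l1) in zip(pts, pts[1:]):
        if loss_rate <= l1:
            return d0 + (d1 - d0) * (loss_rate - l0) / (l1 - l0)
    return pts[-1][0]

# Hypothetical calibration (distance m, packet loss rate), illustration only.
cal = [(10, 0.01), (20, 0.05), (30, 0.15), (40, 0.40)]
print(estimate_distance(0.10, cal))  # midway between 20 m and 30 m -> 25.0
```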

Weijun Xiao; Yan Sun; Yinan Liu; Qing Yang

2006-01-01

286

Estimation of Structural Error in the Community Land Model Using Latent Heat Observations

Slide excerpt: heat and mass transfer on and under the ground are modeled as a collection of 1D PDEs (in depth), coupled by algebraic equations. Observations of latent heat (LH) at 2 sites are used to estimate 3 hydrological parameters to which LH is sensitive.

Ray, Jaideep

287

Estimating the Error Distribution of a Tap Sequence Without Ground Truth

Abstract excerpt: after a human taps along with the music, some simple statistics can describe the tap times. Detecting beats, estimating tempo, aligning scores to audio, and detecting onsets

Dannenberg, Roger B.

288

Toward Understanding and Reducing Errors in Real-Time Estimation of Travel Times

Many states as well as private contractors are providing real-time traveler information to the public, including travel time estimates intended to help commuters improve the quality and efficiency of their trips. Accuracy

Bertini, Robert L.

289

Development of error estimation method for phase detection in phase shift method

NASA Astrophysics Data System (ADS)

In this report, an error estimation method for phase detection in the phase shift method is proposed. A phase detection algorithm extracts the phase of a modulated signal from several interferograms acquired during phase shifting. The Fourier-domain expressions of phase detection algorithms show the frequency responses for the sine and cosine components, and they show the behavior of phase detection when a phase shifting error exists. However, these two response functions, one for the sine component and one for the cosine component, do not directly show the frequency response of the phase detection itself. In contrast, the newly developed frequency response function, derived from these two functions, directly shows the frequency response of phase detection and clearly shows the behavior of a phase detection algorithm when a phase tuning error exists. The newly developed frequency response function is similar to the Bode plot: the magnitude plot shows the sensitivity to frequency components, and the phase plot can be used for error estimation of phase detection. There is good agreement between the developed frequency response function and the calculated error values; the results of this comparison are shown in this report.
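This kind of frequency-response analysis can be illustrated on the standard four-step algorithm: the Fourier transforms of its sine (numerator) and cosine (denominator) sampling coefficients have equal magnitude and quadrature phase at the design frequency, which is why the arctangent recovers the phase exactly there. This is a generic illustration, not the report's newly developed combined response function:

```python
import cmath
import math

def response(coeffs, step, nu):
    """F(nu) = sum_k c_k * exp(-1j * nu * k * step): Fourier-domain response
    of a sampling-coefficient set with phase step `step` (radians)."""
    return sum(c * cmath.exp(-1j * nu * k * step) for k, c in enumerate(coeffs))

# Standard four-step algorithm: phase = atan2(I3 - I1, I0 - I2), step pi/2.
num = [0.0, -1.0, 0.0, 1.0]   # sine (numerator) sampling coefficients
den = [1.0, 0.0, -1.0, 0.0]   # cosine (denominator) sampling coefficients
step = math.pi / 2.0

s = response(num, step, 1.0)  # sine response at the design frequency nu = 1
c = response(den, step, 1.0)  # cosine response at the design frequency
print(abs(s), abs(c), cmath.phase(s) - cmath.phase(c))  # 2.0, 2.0, ~pi/2
```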

Hanayama, Ryohei; Hibino, Kenichi

2011-05-01

290

Error estimation of phase detection algorithms and comparison of window functions

NASA Astrophysics Data System (ADS)

An error estimation method for phase detection in the phase shift method is proposed. Phase detection algorithms extract the phase of fringes from several interferograms that are acquired during phase shifting. The Fourier-domain expressions of phase detection algorithms show the frequency responses for the sine and cosine components, and they show the behavior of the detected phase when a phase shifting error exists. However, these two response functions do not directly show the frequency response of phase detection itself. In contrast, the newly proposed frequency response function directly shows the frequency response of phase detection, and it clearly shows the behavior of a phase detection algorithm when a phase tuning error exists. The proposed method is inspired by the Bode plot, so a magnitude plot can be defined in addition to the phase plot. The magnitude plot can be used to predict the sensitivity to signal and noise, and the phase plot can be used for error estimation of phase detection in the presence of a phase tuning error. Our investigations found good agreement between the developed frequency response function and the calculated error values, so it can be used as an error estimation method for phase detection algorithms. A window function modifies the characteristics of a phase detection algorithm; comparisons of several window functions on the phase detection method were demonstrated using the proposed method. Additionally, we discuss a window function that makes the phase detection algorithm insensitive to phase detuning.

Hanayama, Ryohei; Hibino, Kenichi

2012-09-01

291

Solving large tomographic linear systems: size reduction and error estimation

NASA Astrophysics Data System (ADS)

We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
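The compressed sparse-row (CSR) storage that the authors tailor to finite-frequency kernels keeps, for each row, only the nonzero values and their column indices plus a row-pointer array. A generic baseline sketch of plain CSR (the paper's modified format is not reproduced here):

```python
def to_csr(dense):
    """Build compressed sparse-row arrays (values, column indices, row pointers)."""
    vals, cols, ptr = [], [], [0]
    for row in dense:
        for j, v in enumerate(row):
            if v != 0.0:
                vals.append(v)
                cols.append(j)
        ptr.append(len(vals))  # one pointer entry per row end
    return vals, cols, ptr

def csr_matvec(vals, cols, ptr, x):
    """y = A @ x using only the CSR arrays (no dense storage)."""
    y = []
    for r in range(len(ptr) - 1):
        y.append(sum(vals[k] * x[cols[k]] for k in range(ptr[r], ptr[r + 1])))
    return y

A = [[0.0, 2.0, 0.0],
     [1.0, 0.0, 3.0]]
vals, cols, ptr = to_csr(A)
print(csr_matvec(vals, cols, ptr, [1.0, 1.0, 1.0]))  # -> [2.0, 4.0]
```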

Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust

2014-10-01

292

Analysis of systematic errors in lateral shearing interferometry for EUV optical testing

Lateral shearing interferometry (LSI) provides a simple means for characterizing the aberrations in optical systems at EUV wavelengths. In LSI, the test wavefront is incident on a low-frequency grating which causes the resulting diffracted orders to interfere on the CCD. Due to its simple experimental setup and high photon efficiency, LSI is an attractive alternative to point diffraction interferometry and other methods that require spatially filtering the wavefront through small pinholes which notoriously suffer from low contrast fringes and improper alignment. In order to demonstrate that LSI can be accurate and robust enough to meet industry standards, analytic models are presented to study the effects of unwanted grating and detector tilt on the system aberrations, and a method for identifying and correcting for these errors in alignment is proposed. The models are subsequently verified by numerical simulation. Finally, an analysis is performed of how errors in the identification and correction of grating and detector misalignment propagate to errors in fringe analysis.

Miyakawa, Ryan; Naulleau, Patrick; Goldberg, Kenneth A.

2009-02-24

293

NASA Astrophysics Data System (ADS)

Monitoring global climate change requires measuring atmospheric parameters with sufficient coverage on the surface, but also in the free atmosphere. GPS Radio Occultation (RO) provides accurate and precise measurements in the Upper Troposphere-Lower Stratosphere region with global coverage and long-term stability thanks to a calibration inherent to the technique. These properties allow for the calculation of climatological variables of high quality to track small changes of these variables. High accuracy requires keeping systematic errors low. The purpose of this study is to examine the impact of the Quality Control (QC) mechanism applied in the retrieval system of the Wegener Center for Climate and Global Change, Karl-Franzens-University Graz (WEGC), on systematic errors of climatologies calculated from RO data. The current RO retrieval OPSv5.4 at the WEGC uses phase delay profiles and precise orbit information provided by other data centers, mostly by UCAR/CDAAC, Boulder, CO, USA for various receiver satellites. The satellites analyzed in this study are CHAMP, GRACE-A and FORMOSAT-3/COSMIC. Profiles of bending angles, refractivity and atmospheric parameters are retrieved and these are used to calculate climatologies. The OPSv5.4 QC rejects measurements if they do not fulfill certain quality criteria. If these criteria cause a biased rejection with regard to the spatial or temporal distribution of measurements it can increase the systematic component of the so-called Sampling Error (SE) in climatologies. The SE is a consequence of the discrete and finite number of RO measurements that do not completely resemble the total variability of atmospheric parameters. The results of the calculations conducted show that the QC of the retrieval system indeed has a strong influence on geographical sampling patterns, causing a large number of rejections at high latitudes in the respective winter hemisphere. 
During winter, a monthly average of up to 60 % of all measurements are discarded at high latitudes. The QC also influences temporal sampling patterns systematically, more measurements are rejected during nighttime. The systematic rejections by the QC also have a strong effect on the SE, causing it to increase fourfold in some cases and regions. Measurements of cold temperatures are particularly affected, in these cases derived climatologies are biased towards higher temperatures. The results and new insight gained are used to improve the QC of following processing system versions.
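The systematic component of the sampling error described above can be illustrated directly from its definition: the mean over QC-accepted measurements minus the mean over all measurements. A toy sketch with made-up temperatures:

```python
def sampling_error(all_values, accepted_mask):
    """SE = mean over QC-accepted measurements minus mean over all
    measurements; a biased QC rejection produces a systematic SE."""
    kept = [v for v, keep in zip(all_values, accepted_mask) if keep]
    return sum(kept) / len(kept) - sum(all_values) / len(all_values)

# Illustrative only: a QC that preferentially rejects cold profiles
# biases the derived mean temperature warm.
temps = [-60.0, -55.0, -50.0, -45.0, -40.0]   # degrees C, hypothetical
mask = [False, False, True, True, True]        # cold profiles rejected
print(sampling_error(temps, mask))  # -> 5.0 (warm bias)
```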

Schwarz, Jakob; Scherllin-Pirscher, Barbara; Foelsche, Ulrich; Kirchengast, Gottfried

2013-04-01

294

A study including eight microsatellite loci for 1,014 trees from seven mapped stands of the partially clonal Populus euphratica was used to demonstrate how genotyping errors influence estimates of clonality. With a threshold of 0 (identical multilocus genotypes constitute one clone) we identified 602 genotypes. A threshold of 1 (compensating for an error in one allele) lowered this number to
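The threshold idea can be sketched as grouping multilocus genotypes that differ in at most the allowed number of alleles (merging transitively). A toy sketch with hypothetical three-locus genotypes, not the study's data:

```python
def allele_mismatches(g1, g2):
    """Number of loci at which two multilocus genotypes differ."""
    return sum(1 for a, b in zip(g1, g2) if a != b)

def count_clones(genotypes, threshold):
    """Union-find grouping: genotypes differing in at most `threshold`
    alleles are assigned to the same clone (transitive merging)."""
    parent = list(range(len(genotypes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i
    for i in range(len(genotypes)):
        for j in range(i + 1, len(genotypes)):
            if allele_mismatches(genotypes[i], genotypes[j]) <= threshold:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(genotypes))})

# Toy genotypes; the second differs from the first by a single allele.
gts = [(1, 2, 3), (1, 2, 4), (5, 6, 7)]
print(count_clones(gts, 0), count_clones(gts, 1))  # -> 3 2
```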

M. Schnittler; P. Eusemann

2010-01-01

295

Estimated global incidence of Japanese encephalitis: a systematic review

Abstract Objective To update the estimated global incidence of Japanese encephalitis (JE) using recent data for the purpose of guiding prevention and control efforts. Methods Thirty-two areas endemic for JE in 24 Asian and Western Pacific countries were sorted into 10 incidence groups on the basis of published data and expert opinion. Population-based surveillance studies using laboratory-confirmed cases were sought for each incidence group by a computerized search of the scientific literature. When no eligible studies existed for a particular incidence group, incidence data were extrapolated from related groups. Findings A total of 12 eligible studies representing 7 of 10 incidence groups in 24 JE-endemic countries were identified. Approximately 67,900 JE cases typically occur annually (overall incidence: 1.8 per 100,000), of which only about 10% are reported to the World Health Organization. Approximately 33,900 (50%) of these cases occur in China (excluding Taiwan) and approximately 51,000 (75%) occur in children aged 0–14 years (incidence: 5.4 per 100,000). Approximately 55,000 (81%) cases occur in areas with well established or developing JE vaccination programmes, while approximately 12,900 (19%) occur in areas with minimal or no JE vaccination programmes. Conclusion Recent data allowed us to refine the estimate of the global incidence of JE, which remains substantial despite improvements in vaccination coverage. More and better incidence studies in selected countries, particularly China and India, are needed to further refine these estimates. PMID:22084515

Campbell, Grant L; Hills, Susan L; Fischer, Marc; Jacobson, Julie A; Hoke, Charles H; Hombach, Joachim M; Marfin, Anthony A; Solomon, Tom; Tsai, Theodore F; Tsu, Vivien D

2011-01-01

296

Systematic Errors in Stereo PIV When Imaging through a Glass Window

NASA Technical Reports Server (NTRS)

This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window and where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent performing a lengthier volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.
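The refraction through a thick window displaces each ray laterally; for a flat window, the standard plane-parallel-plate shift gives a feel for the size of the effect. The values below are illustrative, not the test-section geometry:

```python
import math

def plate_shift(t, theta, n):
    """Lateral ray displacement through a plane-parallel plate of thickness t,
    refractive index n, at incidence angle theta (radians):
    d = t * sin(theta) * (1 - cos(theta) / sqrt(n**2 - sin(theta)**2))."""
    s = math.sin(theta)
    return t * s * (1.0 - math.cos(theta) / math.sqrt(n * n - s * s))

# 10 mm glass window (n = 1.5), camera viewing at 30 degrees incidence.
d = plate_shift(10.0, math.radians(30.0), 1.5)
print(round(d, 2))  # -> 1.94 (mm)
```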

Green, Richard; McAlister, Kenneth W.

2004-01-01

297

Policy Gradient Based Semi-Markov Decision Problems: Approximation and Estimation Errors

NASA Astrophysics Data System (ADS)

In [1] and [2] we presented a simulation-based algorithm for optimizing the average reward in a parameterized continuous-time, finite-state semi-Markov Decision Process (SMDP). We approximated the gradient of the average reward; a simulation-based algorithm (called GSMDP) was then proposed to estimate this approximate gradient using only a single sample path of the underlying Markov chain, and GSMDP was proved to converge with probability 1. In this paper, we give bounds on the approximation and estimation errors of the GSMDP algorithm. The approximation error is the size of the difference between the true gradient and the approximate gradient. The estimation error, the size of the difference between the output of the algorithm and its asymptotic output, arises because the algorithm sees only a finite data sequence.

Vien, Ngo Anh; Lee, Seunggwan; Chung, Taechoong

298

Second-order moment estimation of pointing errors in laser pointing system using return photons

NASA Astrophysics Data System (ADS)

Boresight and jitter, which cause energy loss and decline of system performance, are the two fundamental pointing errors in a laser pointing system. Based on the statistics of the return photons reflected from the target and estimation of the pointing errors, a second-order moment estimation algorithm is proposed. This algorithm, which extends the Key-rate method, can estimate boresight and jitter simultaneously. In this paper, a laser pointing system model based on a Gaussian far-field irradiance profile and a Gaussian beam jitter is set up, a laboratory experiment is performed, and the simulation and experimental data are processed by this estimator. The results demonstrate that the performance of the second-order moment estimation is excellent and improves with an increasing number of shots. Furthermore, the experimental results agree well with the simulation results.

Zhou, Lei; Ren, Ge; Tan, Yi

2012-10-01

299

NASA Astrophysics Data System (ADS)

Understanding the sources of systematic errors in climate models is challenging because of coupled feedbacks and error compensation. The developing seamless approach proposes that the identification and correction of short-term climate model errors have the potential to improve the modeled climate on longer time scales. In previous studies, initialised atmospheric simulations of a few days have been used to compare fast physics processes (convection, cloud processes) among models. The present study explores how initialised seasonal to decadal hindcasts (re-forecasts) relate transient week-to-month errors of the ocean and atmospheric components to the coupled model's long-term pervasive SST errors. A protocol is designed to attribute the SST biases to their source processes. It includes five steps: (1) identify and describe biases in a coupled stabilized simulation, (2) determine the time scale of the advent of the bias and its propagation, (3) find the geographical origin of the bias, (4) evaluate the degree of coupling in the development of the bias, (5) find the field responsible for the bias. This strategy has been implemented with a set of experiments based on the initial adjustment of initialised simulations and exploring various degrees of coupling. In particular, hindcasts give the time scale of bias advent, regionally restored experiments show the geographical origin, and ocean-only simulations isolate the field responsible for the bias and evaluate the degree of coupling in the bias development. This strategy is applied to four prominent SST biases of the IPSLCM5A-LR coupled model in the tropical Pacific, which are largely shared by other coupled models, including the Southeast Pacific warm bias and the equatorial cold tongue bias. Using the proposed protocol, we demonstrate that the East Pacific warm bias appears in a few months and is caused by a lack of upwelling due to too-weak meridional coastal winds off Peru.
The cold equatorial bias, which surprisingly takes 30 years to develop, is the result of equatorward advection of midlatitude cold SST errors. Despite large development efforts, the current generation of coupled models shows only little improvement. The strategy proposed in this study is a further step toward moving from the current random, ad hoc approach to a bias-targeted, priority-setting, systematic model development approach.

Vannière, Benoît; Guilyardi, Eric; Toniazzo, Thomas; Madec, Gurvan; Woolnough, Steve

2014-10-01

300

The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259

Stenroos, Matti; Hauk, Olaf

2013-01-01

301

Systematic Entomology (2012), 37, 287-304. Divergence estimates and early evolutionary history

Systematic Entomology (2012), 37, 287-304. Divergence estimates and early evolutionary history. ELAHI. MORITA 2,3 and SIMON VAN NOORT 4,5. 1 Systematic Entomology Lab, USDA, Washington, DC, U.S.A., 2 Department of Entomology, National Museum of Natural History, Smithsonian

Hammerton, James

302

Estimation of the uncertainty propagation in verification operator of cylindricity errors

NASA Astrophysics Data System (ADS)

According to the operation and operator theory in the new generation geometrical product specification (GPS), the verification operator of cylindricity errors is an ordered set of several feature operations, including partition, extraction, filtration, and association. Each feature operation contains some uncertainty due to the variability of the measurement process and the incompleteness of the ISO specification, and the uncertainty in a previous operation can be transferred to the subsequent operation, resulting in cylindricity errors with high uncertainty. To ensure accurate evaluation of cylindricity errors, a method is proposed for estimating the uncertainty propagation in verification operators of cylindricity errors based on the new generation GPS. By investigating the propagation model of the uncertainty of cylindricity errors, calculation formulas are derived for the uncertainty propagation of the key feature operations in the operator, such as the association operation and the filtration operation. An uncertainty calculation expression is developed for the choice of verification operators of cylindricity errors. The effects of the filtration operation and the association operation on the uncertainty of cylindricity errors are further verified through a case study. Test results indicate that the proposed method can not only improve the accuracy of the evaluation, but also provide helpful guidance for the reasonable choice of verification operators of cylindricity errors.

Zhao, Fengxia; Zhang, Linna; Zheng, Peng

2010-08-01

303

A note on bias and mean squared error in steady-state quantile estimation

NASA Astrophysics Data System (ADS)

When using a batch means methodology for estimation of a nonlinear function of a steady-state mean from the output of simulation experiments, it has been shown that a jackknife estimator may reduce the bias and mean squared error (mse) compared to the classical estimator, whereas the average of the classical estimators from the batches (the batch means estimator) has a worse performance from the point of view of bias and mse. In this paper we show that, under reasonable assumptions, the performance of the jackknife, classical and batch means estimators for the estimation of quantiles of the steady-state distribution exhibit similar properties as in the case of the estimation of a nonlinear function of a steady-state mean. We present some experimental results from the simulation of the waiting time in queue for an M/M/1 system under heavy traffic.
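The three estimators compared above can be sketched in a few lines (an illustrative Python sketch, not the authors' code; the batch count `b`, the quantile interpolation rule, and the test data are arbitrary choices):

```python
def quantile(xs, q):
    """Empirical quantile with linear interpolation between order statistics."""
    s = sorted(xs)
    pos = q * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def batch_means_quantile(output, q, b):
    """Batch means estimator: average of the per-batch quantile estimates."""
    n = len(output) // b
    return sum(quantile(output[i * n:(i + 1) * n], q) for i in range(b)) / b

def jackknife_quantile(output, q, b):
    """Jackknife of the classical estimator over b leave-one-batch-out replicates."""
    n = len(output) // b
    theta = quantile(output, q)          # classical estimator on the full run
    loo = []
    for i in range(b):
        rest = output[:i * n] + output[(i + 1) * n:b * n]
        loo.append(quantile(rest, q))    # leave batch i out
    return b * theta - (b - 1) * sum(loo) / len(loo)
```

On symmetric data all three estimators agree; the bias and mse differences studied in the paper emerge for skewed steady-state distributions such as M/M/1 waiting times under heavy traffic.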

Muñoz, David F.; Ramírez-López, Adán

2013-10-01

304

ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve.

In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

2014-01-01

305

In this article, we consider flexible seasonal time series models which consist of a common trend function over periods and additive individual trend (seasonal effect) functions. The consistency and asymptotic normality of the local linear estimators were obtained under α-mixing conditions and without specifying the error distribution. We extend these results to the consistency and asymptotic normality of local linear estimates, using central limit theorems, for flexible seasonal time series models whose error terms are k-weakly dependent and λ-weakly dependent random variables.

Kyong-Hui Kim; Hak-Myong Pak

2014-03-10

306

Inexpensive devices to measure solar UV irradiance are available to monitor atmospheric ozone, for example, total ozone portable spectroradiometers (TOPS instruments). A procedure to convert these measurements into ozone estimates is examined. For well-characterized filters with 7-nm FWHM bandpasses, the method provides ozone values (from 304- and 310-nm channels) with less than 0.4% error attributable to inversion of the theoretical model. Analysis of sensitivity to model assumptions and parameters yields estimates of ±3% bias in total ozone results with dependence on total ozone and path length. Unmodeled effects of atmospheric constituents and instrument components can result in additional ±2% errors. PMID:21127623

Flynn, L E; Labow, G J; Beach, R A; Rawlins, M A; Flittner, D E

1996-10-20

307

ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve.

In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

2014-01-01

308

Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter

NASA Astrophysics Data System (ADS)

The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
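The first-order cancellation provided by the cross ratio of Ohlsen and Keaton [1] can be demonstrated directly (a hedged sketch; the acceptance, luminosity, and asymmetry values below are invented):

```python
import math

def cross_ratio_asymmetry(l_up, r_up, l_dn, r_dn):
    """Asymmetry epsilon = (r - 1) / (r + 1) from the cross ratio
    r = sqrt[(L_up * R_dn) / (L_dn * R_up)]; detector acceptances and
    spin-state luminosities cancel to first order."""
    r = math.sqrt((l_up * r_dn) / (l_dn * r_up))
    return (r - 1.0) / (r + 1.0)

# Synthetic yields: unequal acceptances (acc_l, acc_r) and luminosities
# (lum_up, lum_dn) multiply the physics asymmetry pA = 0.3.
pA, acc_l, acc_r, lum_up, lum_dn = 0.3, 1.2, 0.8, 1.5, 0.9
yields = (acc_l * lum_up * (1 + pA),   # left detector, spin up
          acc_r * lum_up * (1 - pA),   # right detector, spin up
          acc_l * lum_dn * (1 - pA),   # left detector, spin down
          acc_r * lum_dn * (1 + pA))   # right detector, spin down
eps = cross_ratio_asymmetry(*yields)   # recovers pA = 0.3 exactly
```

With these idealized yields the physics asymmetry pA is recovered exactly; the paper's subject is precisely the second-order effects (geometry shifts, rate-dependent detector response) that spoil this cancellation.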

Stephenson, Edward; Imig, Astrid

2009-10-01

309

An underlying assumption of satellite data assimilation systems is that the radiative transfer model used to simulate observed satellite radiances has no errors. For practical reasons a fast-forward radiative transfer model is used instead of a highly accurate line-by-line model. The fast model usually replaces the spectral integration of spectral quantities with their monochromatic equivalents, and the errors due to these approximations are assumed to be negligible. The reflected downward flux term contains many approximations of this nature, which are shown to introduce systematic errors. In addition, many fast-forward radiative transfer models simulate the downward flux as the downward radiance along a path defined by the secant of the mean emergent angle, the diffusivity factor. The diffusivity factor is commonly set to 1.66 or to the secant of the satellite zenith angle. Neither case takes into account that the diffusivity factor varies with optical depth, which introduces further errors. I review the two most commonly used methods for simulating reflected downward flux by fast-forward radiative transfer models and point out their inadequacies and limitations. An alternate method of simulating the reflected downward flux is proposed. This method transforms the surface-to-satellite transmittance profile to a transmittance profile suitable for simulating the reflected downward flux by raising the former transmittance to the power of kappa, where kappa itself is a function of channel, surface pressure, and satellite zenith angle. It is demonstrated that this method reduces the fast-forward model error for low to moderate reflectivities. PMID:15098841
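The proposed transmittance transformation can be sketched as follows (the profile and the fixed kappa below are invented for illustration; in the paper kappa is a function of channel, surface pressure, and satellite zenith angle):

```python
def reflected_flux_transmittance(tau_to_satellite, kappa):
    """Transform a surface-to-satellite transmittance profile into one
    suitable for simulating the reflected downward flux term by raising
    each layer transmittance to the power kappa (the proposed method).
    The common alternative instead evaluates the downward radiance along
    a path defined by a fixed diffusivity factor such as 1.66."""
    return [t ** kappa for t in tau_to_satellite]

profile = [0.95, 0.90, 0.80, 0.60]   # made-up layer transmittances
flux_profile = reflected_flux_transmittance(profile, kappa=1.8)
```

Since each transmittance is at most 1 and kappa exceeds 1, the transformed profile is everywhere more opaque, as expected for flux along slant paths.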

Turner, David S

2004-04-10

310

NASA Astrophysics Data System (ADS)

An underlying assumption of satellite data assimilation systems is that the radiative transfer model used to simulate observed satellite radiances has no errors. For practical reasons a fast-forward radiative transfer model is used instead of a highly accurate line-by-line model. The fast model usually replaces the spectral integration of spectral quantities with their monochromatic equivalents, and the errors due to these approximations are assumed to be negligible. The reflected downward flux term contains many approximations of this nature, which are shown to introduce systematic errors. In addition, many fast-forward radiative transfer models simulate the downward flux as the downward radiance along a path defined by the secant of the mean emergent angle, the diffusivity factor. The diffusivity factor is commonly set to 1.66 or to the secant of the satellite zenith angle. Neither case takes into account that the diffusivity factor varies with optical depth, which introduces further errors. I review the two most commonly used methods for simulating reflected downward flux by fast-forward radiative transfer models and point out their inadequacies and limitations. An alternate method of simulating the reflected downward flux is proposed. This method transforms the surface-to-satellite transmittance profile to a transmittance profile suitable for simulating the reflected downward flux by raising the former transmittance to the power of kappa, where kappa itself is a function of channel, surface pressure, and satellite zenith angle. It is demonstrated that this method reduces the fast-forward model error for low to moderate reflectivities.

Turner, David S.

2004-04-01

311

NASA Technical Reports Server (NTRS)

Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities in an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors of up to 40 kt at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

2009-01-01

312

Multiscale Systematic Error Correction via Wavelet-Based Bandsplitting in Kepler Data

NASA Astrophysics Data System (ADS)

The previous presearch data conditioning algorithm, PDC-MAP, for the Kepler data processing pipeline performs very well for the majority of targets in the Kepler field of view. However, for an appreciable minority, PDC-MAP has its limitations. To further minimize the number of targets for which PDC-MAP fails to perform admirably, we have developed a new method, called multiscale MAP, or msMAP. Utilizing an overcomplete discrete wavelet transform, the new method divides each light curve into multiple channels, or bands. The light curves in each band are then corrected separately, thereby allowing for a better separation of characteristic signals and improved removal of the systematics.
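The bandsplitting idea can be illustrated with a simple telescoping decomposition (a stand-in for the overcomplete discrete wavelet transform actually used by msMAP; the moving-average windows and level count here are arbitrary):

```python
def band_split(x, levels=3, width=4):
    """Split a light curve into `levels` detail bands plus a residual
    smooth band via repeated moving-average smoothing with doubling
    window width. Each band isolates a range of time scales and can be
    corrected separately before recombination."""
    def smooth(y, w):
        n = len(y)
        return [sum(y[max(0, i - w):min(n, i + w + 1)])
                / (min(n, i + w + 1) - max(0, i - w)) for i in range(n)]
    bands, current, w = [], list(x), width
    for _ in range(levels):
        s = smooth(current, w)
        bands.append([c - v for c, v in zip(current, s)])  # detail band
        current, w = s, w * 2
    bands.append(current)  # residual smooth band
    return bands
```

Because the bands telescope, summing them reconstructs the input exactly, so each band can be corrected separately and the results recombined.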

Stumpe, Martin C.; Smith, Jeffrey C.; Catanzarite, Joseph H.; Van Cleve, Jeffrey E.; Jenkins, Jon M.; Twicken, Joseph D.; Girouard, Forrest R.

2014-01-01

313

Background: As the error rate is high and the distribution of errors across sites is non-uniform in next generation sequencing (NGS) data, it has been a challenge to estimate DNA polymorphism (θ) accurately from NGS data. Results: By computer simulations, we compare the two methods of data acquisition: sequencing each diploid individual separately and sequencing the pooled sample. Under the current NGS error rate, sequencing each individual separately offers little advantage unless the coverage per individual is high (>20X). We hence propose a new method for estimating θ from pooled samples that have been subjected to two separate rounds of DNA sequencing. Since errors from the two sequencing applications are usually non-overlapping, it is possible to separate low frequency polymorphisms from sequencing errors. Simulation results show that the dual applications method is reliable even when the error rate is high and θ is low. Conclusions: In studies of natural populations where the sequencing coverage is usually modest (~2X per individual), the dual applications method on pooled samples should be a reasonable choice. PMID:23919637
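The dual-applications idea reduces, in caricature, to accepting only variants seen in both independent sequencing rounds (an illustrative sketch, not the authors' estimator of θ; the sites and alleles are made up):

```python
def dual_call(run1, run2):
    """Keep a site only if the same non-reference allele appears in both
    independent sequencing rounds; run-specific calls are treated as
    sequencing error, since errors rarely overlap between rounds."""
    return {site: allele for site, allele in run1.items()
            if run2.get(site) == allele}

# site -> non-reference allele observed in each round (made-up data)
round1 = {101: "A", 250: "C", 390: "G", 512: "T"}
round2 = {101: "A", 390: "C", 512: "T", 777: "G"}
shared = dual_call(round1, round2)   # {101: "A", 512: "T"}
```

Low-frequency true polymorphisms survive this intersection, while the independent per-round errors (sites 250, 390, 777 here) are discarded.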

2013-01-01

314

Two novel approaches are developed for direction-of-arrival (DOA) estimation and functional brain imaging estimation, which are denoted as ReIterative Super-Resolution (RISR) and Source AFFine Image REconstruction (SAFFIRE), ...

Chan, Tsz Ping

2008-07-25

315

NASA Astrophysics Data System (ADS)

In this paper we derive a posteriori error estimates for the compositional model of multiphase Darcy flow in porous media, consisting of a system of strongly coupled nonlinear unsteady partial differential and algebraic equations. We show how to control the dual norm of the residual augmented by a nonconformity evaluation term by fully computable estimators. We then decompose the estimators into the space, time, linearization, and algebraic error components. This allows us to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver when the corresponding error components do not affect the overall error significantly. Moreover, the spatial and temporal error components can be balanced by time step and space mesh adaptation. Our analysis applies to a broad class of standard numerical methods and is independent of the linearization and of the iterative algebraic solvers employed. We exemplify it for the two-point finite volume method with fully implicit Euler time stepping, the Newton linearization, and the GMRes algebraic solver. Numerical results on two real-life reservoir engineering examples confirm that significant computational gains can be achieved thanks to our adaptive stopping criteria, already on fixed meshes, without any noticeable loss of precision.

Di Pietro, Daniele A.; Flauraud, Eric; Vohralík, Martin; Yousef, Soleiman

2014-11-01

316

NASA Astrophysics Data System (ADS)

Characterisation of the error structure of radar quantitative precipitation estimation (QPE) is a major issue for applications of radar technology in hydrological modelling. Due to the variety of sources of error in radar QPE and the impact of correction algorithms, the problem can only be addressed practically by comparison of radar QPEs with reference values derived from ground-based measurements. Using the radar and raingauge datasets of the Bollène-2002 experiment, a preliminary investigation of this subject has been carried out within a geostatistical framework in the context of the Cévennes-Vivarais Mediterranean Hydrometeorological Observatory. First, raingauge measurements were critically analysed using variograms to detect erroneous squared differences between pairs of raingauge values. The anisotropic block kriging technique was then used to compute and select the most reliable reference values. The statistical distribution and the spatial and temporal structure of the residuals between radar and reference values was established and analysed. The error variance separation concept was also tested to estimate the variance of the residuals between the radar estimates and the true unknown rainfall. A preliminary version of the radar QPE error model was established for 1-km2 domains and time steps ranging from 1 to 12 h using the limited data sample of the Bollène-2002 experiment. The error model is dependent on the time scales considered and needs to be conditioned on the rain rate and rainfall type as well as on the radar range.

Kirstetter, Pierre-Emmanuel; Delrieu, Guy; Boudevillain, Brice; Obled, Charles

2010-11-01

317

A Comprehensive Aerological Reference Data Set (CARDS): Rough and Systematic Errors.

NASA Astrophysics Data System (ADS)

The possibility of anthropogenic climate change and the possible problems associated with it are of great interest. However, one cannot study climate change without climate data. The Comprehensive Aerological Reference Data Set (CARDS) project will produce high-quality, daily upper-air data for the research community and for policy makers. CARDS intends to produce a dataset consisting of radiosonde and pibal data that is easy to use, as complete as possible, and as free of errors as possible. An attempt will be made to identify and correct biases in upper-air data whenever possible. This paper presents the progress made to date in achieving this goal. An advanced quality control procedure has been tested and implemented. It is capable of detecting and often correcting errors in geopotential height, temperature, humidity, and wind. This unique quality control method uses simultaneous vertical and horizontal checks of several meteorological variables. It can detect errors that other methods cannot. Research is being supported in the statistical detection of sudden changes in time series data. The resulting statistical technique has detected a known humidity bias in the U.S. data. The methods should detect unknown changes in instrumentation, station location, and data-reduction techniques. Software has been developed that corrects radiosonde temperatures, using a physical model of the temperature sensor and its changing environment. An algorithm for determining cloud cover for this physical model has been developed. A numerical check for station elevation based on the hydrostatic equations has been developed, which has identified documented and undocumented station moves. Considerable progress has been made toward the development of algorithms to eliminate a known bias in the U.S. humidity data.

Eskridge, Robert E.; Alduchov, Oleg A.; Chernykh, Irina V.; Panmao, Zhai; Polansky, Arthur C.; Doty, Stephen R.

1995-10-01

318

Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

NASA Astrophysics Data System (ADS)

Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use, thus efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
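The effect of temporal correlation on aggregated emission errors can be illustrated with an AR(1) error model (a sketch of the general idea, not the authors' error structure; the sigma and rho values are invented):

```python
def sigma_of_cumulative_error(sigma_annual, n_years, rho):
    """Standard deviation of the sum of n annual errors whose pairwise
    correlation decays as rho**|i-j| (AR(1)). rho=0 reproduces the
    independent case sigma*sqrt(n); rho=1 the fully systematic sigma*n."""
    var = sum(sigma_annual ** 2 * rho ** abs(i - j)
              for i in range(n_years) for j in range(n_years))
    return var ** 0.5

indep = sigma_of_cumulative_error(0.3, 10, 0.0)       # sigma * sqrt(10)
partial = sigma_of_cumulative_error(0.3, 10, 0.95)    # in between
systematic = sigma_of_cumulative_error(0.3, 10, 1.0)  # sigma * 10
```

The 2σ values quoted in the abstract would be twice the returned value; correlation moves the cumulative error from the sqrt(n) independent regime toward the fully systematic n-fold regime.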

Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

2014-10-01

319

NASA Technical Reports Server (NTRS)

Aggregation formulas are given for production estimation of a crop type for a zone, a region, and a country, and methods for estimating yield prediction errors for the three areas are described. A procedure is included for obtaining a combined yield prediction and its mean-squared error estimate for a mixed wheat pseudozone.

Chhikara, R. S.; Feiveson, A. H. (principal investigator)

1979-01-01

320

Estimation of Smoothing Error in SBUV Profile and Total Ozone Retrieval

NASA Technical Reports Server (NTRS)

Data from the Nimbus-4, Nimbus-7 Solar Backscatter Ultraviolet (SBUV) and seven of the NOAA series of SBUV/2 instruments spanning 41 years are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profile and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of an ensemble of ozone profiles measured with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which in the general case, for a single-profile retrieval, might be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season, and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors occur in the troposphere, where they can be as large as 15-20%, and that they decrease rapidly with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5%, and between 10 and 1 hPa the smoothing errors are on the order of 1%.
We validate our estimated smoothing errors by comparing the SBUV ozone profiles with other ozone profiling sensors.
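The smoothing-error computation described above follows the standard averaging-kernel formalism, S_s = (A - I) S_a (A - I)^T, with A the averaging-kernel matrix and S_a the covariance of the fine vertical ozone structure; a toy two-layer sketch (kernel and covariance values invented):

```python
def smoothing_error_cov(A, S):
    """Smoothing-error covariance (A - I) S (A - I)^T for averaging-kernel
    matrix A and fine-structure covariance S (here the role played by the
    merged MLS/sonde statistics). Plain nested-list matrices."""
    n = len(A)
    AmI = [[A[i][j] - (i == j) for j in range(n)] for i in range(n)]
    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    AmI_T = [[AmI[j][i] for j in range(n)] for i in range(n)]
    return mul(mul(AmI, S), AmI_T)

# Broad (smoothing) kernel vs. perfect kernel, with unit fine-structure
# covariance; a perfect kernel yields zero smoothing error.
broad = smoothing_error_cov([[0.5, 0.5], [0.5, 0.5]],
                            [[1.0, 0.0], [0.0, 1.0]])
perfect = smoothing_error_cov([[1.0, 0.0], [0.0, 1.0]],
                              [[1.0, 0.0], [0.0, 1.0]])
```

The diagonal of the result gives the smoothing-error variance per layer, which is what the abstract quotes as percentages by altitude.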

Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

2011-01-01

321

Nuclear power plant fault-diagnosis using neural networks with error estimation

The assurance of the diagnosis obtained from a nuclear power plant (NPP) fault-diagnostic advisor based on artificial neural networks (ANNs) is essential for the practical implementation of the advisor for fault detection and identification. The objectives of this study are to develop an error estimation technique (EET) for diagnosis validation and to apply it to the NPP fault-diagnostic advisor. Diagnosis validation is realized by estimating error bounds on the advisor's diagnoses. The 22 transients obtained from the Duane Arnold Energy Center (DAEC) training simulator are used for this research. The results show that the NPP fault-diagnostic advisor is effective at producing proper diagnoses, on which errors are assessed for validation and verification purposes.

Kim, K.; Bartlett, E.B.

1994-12-31

322

NASA Astrophysics Data System (ADS)

We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
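Stripped of the paper's replica-ensemble machinery and error bounds, the core Green-Kubo estimate is the time integral of the flux autocorrelation function (a minimal sketch; physical prefactors and units are omitted, and the flux series is synthetic):

```python
def green_kubo_coefficient(flux, dt, n_lags):
    """Trapezoidal integral of the flux autocorrelation function up to
    n_lags * dt; prefactors such as V/(kB*T^2) are omitted. The paper's
    contributions (stationarity checks, ensemble averaging, and tight
    error bounds on this integral) are not reproduced here."""
    n = len(flux)
    acf = [sum(flux[i] * flux[i + lag] for i in range(n - lag)) / (n - lag)
           for lag in range(n_lags + 1)]
    return dt * (acf[0] / 2.0 + sum(acf[1:-1]) + acf[-1] / 2.0)

flux = [1.0] * 100   # trivially correlated synthetic "flux" record
coeff = green_kubo_coefficient(flux, dt=0.1, n_lags=10)
# equals n_lags * dt = 1.0 for this constant series
```

In practice the choice of upper integration limit relative to the ACF decay time is exactly where the variance blows up for stiff materials, which is the regime the paper's error analysis targets.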

Jones, Reese E.; Mandadapu, Kranthi K.

2012-04-01

323

The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

Nash, Ulrik W.

2014-01-01

324

Error estimation using hypersingular integrals in boundary element methods for linear elasticity

Error estimation using hypersingular integrals in boundary element methods for linear elasticity is discussed, based on the standard boundary integral equation (BIE) and the hypersingular BIE (HBIE), in a so-called 'two-level' … © 2001 Elsevier Science Ltd. All rights reserved. Keywords: hypersingular integrals.

Paulino, Glaucio H.

325

Report no. 04/17 Sharp error estimates for a discretisation of the 1D convection/diffusion equation

An approximation of the constant-coefficient 1D convection/diffusion equation with Dirac initial data is analysed. Report no. 04/17: Sharp error estimates for a discretisation of the 1D convection/diffusion equation. Analysis Group, Wolfson Building, Parks Road, Oxford, England OX1 3QD, August 2004.

Giles, Mike

326

We have developed a fast method that can capture piecewise smooth functions in high dimensions with high order and low computational cost. This method can be used for both approximation and error estimation of stochastic simulations, where the computations can either be guided or come from a legacy database.

Archibald, Richard K. [ORNL]; Deiterding, Ralf [ORNL]; Hauck, Cory D. [ORNL]; Jakeman, John D. [ORNL]; Xiu, Dongbin [ORNL]

2012-01-01

327

The equilibrated residual method is now accepted as the best residual-type a posteriori error estimator. Nevertheless, there remains a gap in the theory and practice of the method. The present work tackles the problem of existence, construction and stability of equilibrated fluxes for hp-finite element approximation on hybrid meshes consisting of quadrilateral and triangular elements, with hanging nodes.

Mark Ainsworth; Leszek Demkowicz; Chang-Wan Kim

2007-01-01

328

A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

ERIC Educational Resources Information Center

Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
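For readers unfamiliar with the sandwich form itself, here is a generic heteroskedasticity-robust sandwich covariance for ordinary least squares; the article's estimator for SEM with dependent time-series data modifies the "meat" term, which this sketch does not attempt.

```python
import numpy as np

# Bread-meat-bread sandwich covariance for OLS (heteroskedasticity-robust,
# independence assumed).  Illustrative only; not the SEM/time-series version.
def sandwich_se(X, y):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * (resid ** 2)[:, None])   # sum_i e_i^2 * x_i x_i'
    cov = bread @ meat @ bread
    return beta, np.sqrt(np.diag(cov))
```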

Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

2011-01-01

329

Galerkin approximation with Proper Orthogonal Decomposition: new error estimates and illustrative examples. Abstract: We propose a numerical analysis of Proper Orthogonal Decomposition (POD) model reduction. The Reduced Basis [20, 23, 21, 27, 24] and the Proper Orthogonal Decomposition (POD) [19, 15, 17] …

Paris-Sud XI, Université de

330

[Thesis table of contents, excerpt] Chapter I, Estimation of the Misclassification Error Rate: A. Classification Problem (1. Complete Knowledge of Underlying Distributions; 2. Parametric Models; 3. Non-parametric Models); B. Linear Discriminant Analysis …

Zollanvari, Amin

2012-02-14

331

Stability and error analysis of the polarization estimation inverse problem for solid oxide fuel cells. Describing the performance of a solid oxide fuel cell (SOFC) requires the solution of an inverse problem. Polarization at the electrode-electrolyte interfaces of solid oxide fuel cells is investigated physically using Electrochemical …

Renaut, Rosemary

332

Error estimations for source inversions in seismology and geodesy

Error estimations for source inversions in seismology and geodesy. Rivera, L. (1); Duputel, Z. (1); … (Seismological Laboratory, Caltech, Pasadena, U.S.A.; email hiroo@gps.caltech.edu). Source inversion is a powerful and widely used technique, whose application varies with the data type (e.g., seismological, geodetic), the observation scale (e.g., regional, teleseismic), and the time at which it is performed after …

Duputel, Zacharie

333

A Posteriori Error Estimate for Front-Tracking for Nonlinear Systems of Conservation Laws

A Posteriori Error Estimate for Front-Tracking for Nonlinear Systems of Conservation Laws. A posteriori error estimates are derived for front-tracking approximate solutions to hyperbolic systems of nonlinear conservation laws, u_t + f(u)_x = 0, extending the L^1-stability theory of front-tracking approximations. (Supported by the Fonds pour la …)

334

Estimation and control of multiple testing error rates for microarray studies

The analysis of microarray data often involves performing a large number of statistical tests, usually at least one test per queried gene. Each test has a certain probability of reaching an incorrect inference; therefore, it is crucial to estimate or control error rates that measure the occurrence of erroneous conclusions in reporting and interpreting the results of a microarray study.
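A standard example of controlling one such multiple-testing error rate, the false discovery rate, is the Benjamini-Hochberg step-up procedure; a minimal sketch (the paper surveys a broader family of error rates):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean rejection list controlling the FDR at level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # largest rank k with p_(k) <= alpha * k / m
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k:
            reject[i] = True
    return reject
```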

Stan Pounds

2006-01-01

335

Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator

This paper focuses on the class of speech enhancement systems which capitalize on the major importance of the short-time spectral amplitude (STSA) of the speech signal in its perception. A system which utilizes a minimum mean-square error (MMSE) STSA estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the …
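A compact sketch of the Wiener-filtering baseline that MMSE-STSA systems are compared against; the Ephraim-Malah MMSE-STSA gain itself involves Bessel-function expressions and is not reproduced here, and the single-frame SNR estimate below is a deliberate simplification.

```python
import numpy as np

# Per-bin Wiener gain applied to one analysis frame.  noise_power is the
# (assumed known) noise power per FFT bin; the crude max(.,0) SNR estimate
# stands in for a proper a priori SNR tracker such as decision-directed.
def enhance_frame(noisy_frame, noise_power):
    spec = np.fft.rfft(noisy_frame)
    power = np.abs(spec) ** 2
    snr = np.maximum(power / noise_power - 1.0, 0.0)  # crude a priori SNR
    gain = snr / (1.0 + snr)                          # Wiener gain
    return np.fft.irfft(gain * spec, n=len(noisy_frame))
```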

Y. Ephraim; D. Malah

1984-01-01

336

ERIC Educational Resources Information Center

This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…
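The nonparametric bootstrap standard error at the heart of the comparison can be sketched generically; in the equating setting the statistic would be the equipercentile equating function at a score point, whereas here `stat` is an arbitrary callable.

```python
import random
import statistics

# Nonparametric bootstrap SE: resample the data with replacement, recompute
# the statistic, and take the standard deviation of the replicates.
def bootstrap_se(sample, stat, n_boot=2000, seed=0):
    random.seed(seed)
    n = len(sample)
    replicates = [stat([random.choice(sample) for _ in range(n)])
                  for _ in range(n_boot)]
    return statistics.stdev(replicates)
```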

Cui, Zhongmin; Kolen, Michael J.

2008-01-01

337

ERIC Educational Resources Information Center

In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…

Bond, William Glenn

2012-01-01

338

Error estimation and adaptive order nodal method for solving multidimensional transport problems

The authors propose a modification of the Arbitrarily High Order Transport Nodal method whereby they solve each node and each direction using different expansion order. With this feature and a previously proposed a posteriori error estimator they develop an adaptive order scheme to automatically improve the accuracy of the solution of the transport equation. They implemented the modified nodal method,

O. M. Zamonsky; C. J. Gho; Y. Y. Azmy

1998-01-01

339

Cross versus Within-Company Cost Estimation Studies: A Systematic Review

OBJECTIVE - The objective of this paper is to determine under what circumstances individual organisations would be able to rely on cross-company-based estimation models. METHOD - We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. RESULTS - Ten papers compared cross- and within-company estimation models,

Barbara A. Kitchenham; Emilia Mendes; Guilherme Horta Travassos

2007-01-01

340

NASA Astrophysics Data System (ADS)

A model for the statistical distribution of radar rainfall estimate errors has been developed empirically from WSR 88D and rain gauge data. The model expresses the expected value of actual rainfall and the mean and standard deviation of the multiplicative error in the radar estimate as functions of the radar estimate itself and several power law parameters derived from an historic sample of rain gauge/radar pairs. The model enables an end user of the radar estimates to determine the expected value of point rainfall, the probability that rainfall is less than or greater than a given value, and the probability that the true rainfall is within a given interval. Experiments with data from several WSR 88D umbrellas indicate that the basic form of the model is valid at most or all locations within the conterminous United States, though parameter adjustments for long-term radar bias and rainfall climatology must be made. The error model has several potential applications in radar hydrology, such as determining the probability that rainfall exceeds flash flood-producing thresholds, quality control of real-time rain gauge estimates, and construction of ensembles of rainfall fields. The development methodology and examples of operational applications will be presented.
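A sketch of how an end user could query such a multiplicative error model; the lognormal form and the parameter values a, b, sigma below are illustrative assumptions, not the fitted WSR-88D parameters.

```python
import math

# True rain modeled as a * radar**b times a lognormal multiplicative error
# with mean one; a, b, sigma are hypothetical placeholders.
def rain_exceedance_prob(radar_mm, threshold_mm, a=1.1, b=0.95, sigma=0.4):
    """P(true rainfall > threshold) given the radar estimate."""
    mu = math.log(a * radar_mm ** b) - 0.5 * sigma ** 2
    z = (math.log(threshold_mm) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))  # standard normal tail
```

Interval probabilities, e.g. for flash-flood thresholds, follow as differences of two such exceedance probabilities.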

Kitzmiller, D.; Fulton, R.; Guan, S.; Ding, F.; Krajewski, W. F.; Villarini, G.; Ciach, G. J.

2006-05-01

341

There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first "n" trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (The Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93-0.98) and coefficients of variation (3.4-7.0%). The Wilcoxon signed-rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9-10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors. PMID:25414753

Pekünlü, Ekim; Ozsu, Ilbilge

2014-09-29

342

Two large-scale environmental surveys, the National Stream Survey (NSS) and the Environmental Protection Agency's proposed Environmental Monitoring and Assessment Program (EMAP), motivated investigation of estimators of the variance of the Horvitz-Thompson estimator under variabl...

343

Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
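Step one, the coarse least-squares grid search, can be sketched for a hypothetical single-pole response H(s) = g/(s - p); the published procedure then refines this with a full nonlinear fit and per-band error estimation, which this sketch omits.

```python
import numpy as np

# Coarse grid search: evaluate a single-pole model H(s) = g / (s - p)
# on (pole, gain) grids and keep the least-squares best fit.  This is a
# didactic stand-in for the first step of the three-step procedure.
def fit_one_pole(freqs_hz, observed, pole_grid, gain_grid):
    s = 2j * np.pi * freqs_hz
    best, best_err = None, np.inf
    for p in pole_grid:
        for g in gain_grid:
            err = np.sum(np.abs(observed - g / (s - p)) ** 2)
            if err < best_err:
                best, best_err = (p, g), err
    return best
```

Seeding the subsequent nonlinear fit with this grid-search solution is what reduces the risk of landing in a local minimum when the calibration records are noisy.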

Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

2012-01-01

344

This paper considers the problem of estimation in a general semiparametric regression model when error-prone covariates are modeled parametrically while covariates measured without error are modeled nonparametrically. To account for the effects of measurement error, we apply a correction to a criterion function. The specific form of the correction proposed allows Monte Carlo simulations in problems for which the direct calculation of a corrected criterion is difficult. Therefore, in contrast to methods that require solving integral equations of possibly multiple dimensions, as in the case of multiple error-prone covariates, we propose methodology which offers a simple implementation. The resulting methods are functional; they make no assumptions about the distribution of the mismeasured covariates. We utilize profile kernel and backfitting estimation methods and derive the asymptotic distribution of the resulting estimators. Through numerical studies we demonstrate the applicability of the proposed methods to Poisson, logistic and multivariate Gaussian partially linear models. We show that the performance of our methods is similar to that of a computationally demanding alternative. Finally, we demonstrate the practical value of our methods when applied to Nevada Test Site (NTS) Thyroid Disease Study data. PMID:22773940

Maity, Arnab; Apanasovich, Tatiyana V.

2011-01-01

345

Background In epidemiological studies, it is often not possible to measure accurately the exposures of participants, even if their response variable can be measured without error. When there are several groups of subjects, occupational epidemiologists employ a group-based strategy (GBS) for exposure assessment to reduce bias due to measurement errors: individuals of a group/job within the study sample are all assigned the sample mean of exposure measurements from their group when evaluating the effect of exposure on the response. Therefore, exposure is estimated on an ecological level while health outcomes are ascertained for each subject. Such a study design leads to negligible bias in risk estimates when group means are estimated from 'large' samples. However, in many cases, only a small number of observations are available to estimate the group means, and this causes bias in the observed exposure-disease association. Also, the analysis in a semi-ecological design may involve exposure data with the majority missing and the rest observed with measurement errors, and complete response data collected with ascertainment. Methods In workplaces, groups/jobs are naturally ordered, and this can be incorporated into the estimation procedure by constrained estimation methods together with the expectation-maximization (EM) algorithm for regression models having measurement error and missing values. Four methods were compared by a simulation study: naive complete-case analysis, GBS, the constrained GBS (CGBS), and the constrained expectation-maximization (CEM). We illustrated the methods in the analysis of decline in lung function due to exposures to carbon black. Results Naive and GBS approaches were shown to be inadequate when the number of exposure measurements is too small to accurately estimate group means. The CEM method appears to be best among them when within each exposure group at least a 'moderate' number of individuals have their exposures observed with error.
However, compared with CEM, CGBS is easier to implement and has more desirable bias-reducing properties in the presence of substantial proportions of missing exposure data. Conclusion The CGBS approach could be useful for estimating exposure-disease association in semi-ecological studies when the true group means are ordered and the number of measured exposures in each group is small. These findings have important implication for cost-effective design of semi-ecological studies because they enable investigators to more reliably estimate exposure-disease associations with smaller exposure measurement campaign than with the analytical methods that were historically employed. PMID:22947254
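The group-based strategy (GBS) itself is simple to state in code; the group labels and values below are made up for illustration.

```python
from statistics import mean

# GBS: every subject in a group is assigned the mean of that group's
# measured exposures, so exposure is ecological while outcomes stay
# individual.  The constrained variants (CGBS, CEM) are not sketched here.
def group_based_exposures(subject_groups, measured_by_group):
    group_means = {g: mean(v) for g, v in measured_by_group.items()}
    return [group_means[g] for g in subject_groups]
```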

2012-01-01

346

The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories, and much work to standardise such tests is still needed. In the current study, the effects of four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated, and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered for tests performed at high altitudes, due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world. PMID:25151444
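The four-factor, two-level full factorial design used above can be generated mechanically; the -1/+1 level codes are the usual convention, and the factor names follow the abstract.

```python
from itertools import product

# 2^4 full factorial design: every combination of low (-1) and high (+1)
# levels for the four environmental factors named in the abstract.
factors = ("ambient_temperature", "ambient_pressure",
           "water_vapour_content", "headspace_gas_composition")
design = [dict(zip(factors, levels))
          for levels in product((-1, +1), repeat=len(factors))]
```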

Strömberg, Sten; Nistor, Mihaela; Liu, Jing

2014-11-01

347

NASA Astrophysics Data System (ADS)

This work presents advances on error estimation of three spatial approximations of the Discrete Ordinates (DO) method for solving the neutron transport equation. The three methods considered are the Diamond Difference (DD) method, the Arbitrarily High Order Transport method of the Nodal type (AHOT-N), and of the Characteristic type (AHOT-C). The AHOT-N method is employed in constant, linear, quadratic and cubic orders of spatial approximation. The AHOT-C is used in constant, linear and quadratic approximations. Error norms for different problems in non-scattering or isotropic scattering media over two-dimensional Cartesian geometries are evaluated. The problems are characterized with different levels of differentiability of the exact solution of the DO equations. The cell-wise error is computed as the difference between the cell-averaged flux calculated by each method and the cell-averaged exact value. The cell error values are used to determine L1, L2, and L∞ discrete error norms. The results of this analysis demonstrate that while integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the cell-wise L∞ norm may not converge when the exact flux possesses discontinuities across the Singular Characteristic (SC). The results suggest that smearing (numerical diffusion) across the SC is the major source of error on the global scale. To mitigate the adverse effect of the smearing, we propose a new Singular Characteristic Tracking (SCT) algorithm which achieves cell-wise convergence even for the cases with discontinuous exact flux. Convergence is restored by hindering numerical diffusion across the SC when resolving the streaming operator in the standard inner sweep iterations. SCT solves two separate Step Characteristics stencils for the two sub-cells defined by the intersection of the SC with a mesh cell.
Compared to the standard DD, DD-SCT increases the L1 error norm rate of convergence (based on cell size) from 0.5 to 2 for an uncollided discontinuous exact flux, and from 0.3 to 1.3 for a discontinuous exact flux with isotropic scattering. To provide a confidence level for the spatial resolution of the DO equations, we have cast the AHOT-N method as a Discontinuous Petrov-Galerkin method. Within the mathematical framework of Finite Element Methods (FEM), we have derived an a posteriori error estimator that furnishes a bound on the global L2 error norm. When sufficient regularity is assumed of the adjoint solution, the error estimator is written as a function of the numerical solution's volume and surface residuals and cell-edge discontinuities, for which we present easily computable approximations. As a direct application of decomposing the global error norm estimator into local indicators, we have tested an Adaptive Mesh Refinement (AMR) strategy to enhance computational efficiency without compromising accuracy. In a Shielding Benchmark problem, we show that for the same level of tolerance in the L2 error norm, we can decrease the required number of unknowns (degrees of freedom) by a factor of 10 when comparing AMR to uniform refinement.

Duo, Jose Ignacio

2008-10-01

348

The development of a volumetric apparatus (also known as a Sieverts’ apparatus) for accurate and reliable hydrogen adsorption measurement is shown. The instrument minimizes the sources of systematic errors which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and thermodynamical properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are represented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of the molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a 4 order-of-magnitude pressure range (from 1 kPa to 8 MPa) and in temperatures ranging between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature.
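The underlying volumetric mole balance can be sketched with the ideal-gas law; the f-PcT analysis described above uses calibrated volumes, temperature uniformity, skeletal-volume corrections and real-gas thermodynamic properties, so this is only the textbook skeleton.

```python
R_GAS = 8.314  # J/(mol*K)

# Sieverts-type mole balance: gas dosed from a reference volume expands
# into the sample cell; the shortfall in gas-phase moles at equilibrium
# is attributed to adsorption.  Ideal-gas behaviour is assumed here.
def adsorbed_moles(p_dose, p_equilibrium, v_reference, v_cell_free, temp_k):
    n_dosed = p_dose * v_reference / (R_GAS * temp_k)
    n_remaining = p_equilibrium * (v_reference + v_cell_free) / (R_GAS * temp_k)
    return n_dosed - n_remaining
```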

Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N.; Cataldi, Ugo (Dipartimento di Fisica, Università della Calabria, Via Ponte P. Bucci, Cubo 31C, 87036 Arcavacata di Rende (CS), Italy); Abate, Salvatore; Desiderio, Giovanni (DeltaE s.r.l., c/o Università della Calabria, Via Pietro Bucci, Cubo 31D, 87036 Arcavacata di Rende (CS), Italy, and CNR-IPCF LiCryL, c/o Università della Calabria, Via Ponte P. Bucci, Cubo 31C, 87036 Arcavacata di Rende (CS), Italy); Agostino, Raffaele Giuseppe (Dipartimento di Fisica, Università della Calabria, Via Ponte P. Bucci, Cubo 31C, 87036 Arcavacata di Rende (CS), Italy; DeltaE s.r.l., c/o Università della Calabria, Via Pietro Bucci, Cubo 31D, 87036 Arcavacata di Rende (CS), Italy, and CNR-IPCF LiCryL, c/o Università della Calabria, Via Ponte P. Bucci, Cubo 31C, 87036 Arcavacata di Rende (CS), Italy)

2013-10-15

349

The development of a volumetric apparatus (also known as a Sieverts' apparatus) for accurate and reliable hydrogen adsorption measurement is shown. The instrument minimizes the sources of systematic errors which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and thermodynamical properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are represented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of the molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a 4 order-of-magnitude pressure range (from 1 kPa to 8 MPa) and in temperatures ranging between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature. PMID:24182129

Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni; Agostino, Raffaele Giuseppe

2013-10-01

350

BACKGROUND: Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct

Benjamin G Jacob; Daniel A Griffith; Ephantus J Muturi; Erick X Caamano; John I Githure; Robert J Novak

2009-01-01

351

Mass load estimation errors utilizing grab sampling strategies in a karst watershed

Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
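The comparison between a continuous record and a grab-sampling strategy can be sketched as follows; the hold-forward rule and variable names are illustrative choices, not the study's exact estimator.

```python
# Mass load = sum of flow * concentration * dt over the record.
def mass_load(flows, concs, dt_seconds):
    return sum(q * c * dt_seconds for q, c in zip(flows, concs))

# Grab-sample estimator: continuous flow, but concentration is only
# observed every `every_n` steps and held constant until the next sample.
def grab_sample_load(flows, concs, dt_seconds, every_n):
    held = concs[0]
    total = 0.0
    for i, q in enumerate(flows):
        if i % every_n == 0:
            held = concs[i]
        total += q * held * dt_seconds
    return total
```

Comparing the two over records with strong diurnal concentration cycles reproduces the kind of error the study quantifies for weekly, biweekly and monthly sampling.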

Fogle, A.W.; Taraba, J.L.; Dinger, J.S.

2003-01-01

352

DTI quality control assessment via error estimation from Monte Carlo simulations

NASA Astrophysics Data System (ADS)

Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.
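The Rician noise ingredient of such Monte Carlo simulations is straightforward to sketch: magnitude MR data are the modulus of a complex signal with independent Gaussian noise in each channel. This is only the noise model, not the paper's full bias-estimation pipeline.

```python
import numpy as np

# Rician noise: add Gaussian noise to the real and imaginary channels of
# the (assumed real) signal, then take the magnitude.
def add_rician_noise(signal, sigma, rng):
    real = signal + rng.normal(0.0, sigma, size=np.shape(signal))
    imag = rng.normal(0.0, sigma, size=np.shape(signal))
    return np.sqrt(real ** 2 + imag ** 2)
```

At high SNR the Rician distribution approaches a Gaussian around the true amplitude; at low SNR it introduces the positive bias that makes magnitude-based tensor estimation sensitive to which DWIs are excluded.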

Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.

2013-03-01

353

NASA Astrophysics Data System (ADS)

The height of the atmospheric mixing layer is a key parameter for many applications where emissions from the surface are transported through the atmosphere. The mixing height can be estimated with various methods and algorithms applied to radiosonde or lidar data. However, while all these methods provide a value for the mixing height, typically none of them provides a measure of uncertainty. That is because the methods that retrieve mixing height commonly look for thresholds in vertical profiles of some measured or estimated quantity, and classical error propagation typically fails on such estimates. Therefore, we propose an a posteriori method to estimate mixing height with uncertainty. The method relies on knowledge of the measurement errors and on the concept of statistical confidence, derived from Welch's t-test, and rests on a solid theoretical foundation. The errors obtained are comparable with those that one could obtain through a Monte Carlo approach. It can be applied to all problems involving the localization of a property in a sequence of data, such as time series, profiles, or generic signals.
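The core idea, using Welch's t statistic between adjacent windows to localize a change in a profile, can be sketched generically; this toy locator returns only the most significant change point, not the paper's confidence-interval estimate.

```python
import math

# Welch's t statistic for two samples with unequal variances.
def welch_t(xs, ys):
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# Slide a split point along the profile and keep the index where the
# windows above and below differ most significantly.
def locate_step(profile, window=5):
    best_i, best_t = None, -1.0
    for i in range(window, len(profile) - window + 1):
        t = abs(welch_t(profile[i - window:i], profile[i:i + window]))
        if t > best_t:
            best_i, best_t = i, t
    return best_i
```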

Biavati, Gionata; Feist, Dietrich G.; Gerbig, Christoph; Kretschmer, Roberto

2014-05-01

354

The purpose of this study was to investigate the potential dosimetric effects of systematic rotational setup errors on prostate patients planned according to the RTOG P-0126 protocol, and to identify rotational tolerances about either the anterior-posterior (AP) or left-right (LR) axis under which no correction in setup is required. Eight 3-dimensional conformal radiation therapy (3D-CRT) treatment plans were included in the study, half planned to give 7020 cGy in 39 fractions (P-0126 Arm 1) and the other half planned to give 7920 cGy in 44 fractions (P-0126 Arm 2). Systematic rotations of the pelvic anatomy were simulated in a commercial treatment planning system by rotating opposing apertures in the opposite direction to the simulated anatomy rotation. Rotations were incremented in steps of 2.5 deg. to a maximum of ±5.0 deg. and ±10.0 deg. about the AP and LR axes, respectively. Dose distributions were evaluated with respect to the planning objectives set out in the P-0126 protocol. For patients on Arm 2 of the study, maintaining the prescribed dose to 98% of the PTV was found to be problematic for superior-end-posterior rotations beyond 5.0 deg. The results also show that maintaining a rectal dose less than 7500 cGy to 15% of the volume can become problematic for cases of small rectal volume and large superior-end-anterior rotations. We found that setting rotational tolerances will depend on which Arm of the protocol the patient is on, and how well the initial plan meets the protocol objectives. In general, we conclude that for rotations about the AP axis no tolerance level is required; however, cases presenting extreme rotations should be investigated as routine practice. For rotations about the LR axis, we conclude that a tolerance level for patients on Arm 2 of the protocol should be set at ±5.0 deg. This tolerance represents the systematic setup error which would require correction if a variation to the initial plan was deemed unacceptable.

Cranmer-Sargison, Gavin [Saskatoon Cancer Centre, Saskatchewan Cancer Agency, Saskatoon, Saskatchewan (Canada)], E-mail: gavin.cranmer-sargison@scf.sk.ca

2008-10-01

355

NASA Astrophysics Data System (ADS)

In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
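The hierarchical-surplus refinement strategy that the paper compares against can be illustrated in one dimension: refine an interval whenever the surplus (the new nodal value minus the linear interpolant of its parents) exceeds a tolerance. A minimal sketch with an illustrative Gaussian target, not from the paper:

```python
import numpy as np

def adaptive_nodes(f, a, b, tol=1e-3, max_depth=20):
    """Refine wherever the hierarchical surplus -- the new nodal value
    minus the linear interpolant of its parent nodes -- exceeds tol."""
    nodes = {a: f(a), b: f(b)}
    def refine(lo, hi, depth):
        mid = 0.5 * (lo + hi)
        surplus = f(mid) - 0.5 * (nodes[lo] + nodes[hi])
        nodes[mid] = f(mid)
        if abs(surplus) > tol and depth < max_depth:
            refine(lo, mid, depth + 1)
            refine(mid, hi, depth + 1)
    refine(a, b, 0)
    xs = np.array(sorted(nodes))
    return xs, np.array([nodes[x] for x in xs])

gauss = lambda x: float(np.exp(-x * x))
xs, ys = adaptive_nodes(gauss, -3.0, 3.0)   # clusters nodes where curvature is high
```

The adjoint-enhanced strategies in the paper replace this purely surplus-driven criterion with estimates of how much each candidate refinement reduces the error in the quantity of interest.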

Jakeman, J. D.; Wildey, T.

2015-01-01

356

Estimation of errors on the PSF reconstruction process for myopic deconvolution

NASA Astrophysics Data System (ADS)

Images obtained with adaptive optics (AO) systems can be improved by using restoration techniques, the AO correction being only partial. However, these methods require an accurate knowledge of the system point spread function (PSF). Adaptive optics systems allow one to estimate the PSF during the science observation. Using data from the wave-front sensor (WFS), a direct estimation of the averaged parallel phase structure function and an estimation of the noise on the measurements are provided. The averaged orthogonal phase structure function (not seen by the system) and the aliasing covariance are estimated using an end-to-end AO simulation. Finally, the estimated PSF is reconstructed using the algorithm of Veran et al. (1997). However, this reconstruction is imperfect. Several approximations are made (stationary residual phase, Gaussian phase, simulated aliasing, etc.) and can impact the optical transfer function (OTF) in the case of a rather poor correction. Our aim is to give an error budget for the whole PSF reconstruction process and to link this PSF reconstruction with a deconvolution algorithm that takes this PSF variability into account. Indeed, a myopic deconvolution algorithm can be fed with a priori information on the object and the PSF. The latter can be obtained by studying the PSF reconstruction error budget, as described in this paper. Finally, this work will lead to an estimation of the error on the deconvolved image, allowing one to perform accurate astrometry/photometry on the observed objects and to strengthen the contrast in the images. We conclude that neglecting the global cross term, or estimating the aliasing on the measurements using simulations, has no effect on the PSF reconstruction.

Exposito, J.; Gratadour, D.; Clénet, Y.; Rousset, G.; Mugnier, L.

2012-07-01

357

We present a data analysis pipeline for CMB polarization experiments, running from multi-frequency maps to the power spectra. We focus mainly on component separation and, for the first time, we work out the covariance matrix accounting for errors associated with the separation itself. This allows us to propagate such errors and evaluate their contributions to the uncertainties on the final products. The pipeline is optimized for intermediate and small scales, but could be easily extended to lower multipoles. We exploit realistic simulations of the sky, tailored for the Planck mission. The component separation is achieved by exploiting the Correlated Component Analysis in the harmonic domain, which we demonstrate to be superior to the real-space application (Bonaldi et al. 2006). We present two techniques to estimate the uncertainties on the spectral parameters of the separated components. The component separation errors are then propagated by means of Monte Carlo simulations to obtain the corresponding contributions…

Ricciardi, S.; Natoli, P.; Polenta, G.; Baccigalupi, C.; Salerno, E.; Kayabol, K.; Bedini, L.; De Zotti, G. (doi:10.1111/j.1365-2966.2010.16819.x)

2010-01-01

358

This is the first of two articles concerning error estimation and adaptive refinement techniques applied to convective heat transfer problems. This study presents the detailed development of the proposed error estimator. The error estimator takes into account the coupling effects of the dependent variables (e.g., velocity and temperature) on the discretization error and, consequently, on the adaptive meshes. An averaging procedure is also proposed as a substitute for the smoothing/recovery procedure used to evaluate the gradients. The implementation of the averaging technique is simple and cost effective. Numerical experiments demonstrate that the proposed methodology results in a dramatic reduction in computational costs.

Franca, A.S.; Haghighi, K. [Purdue Univ., West Lafayette, IN (United States). Dept. of Agricultural and Biological Engineering

1996-06-01

359

NASA Technical Reports Server (NTRS)

Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
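The sampling-error component described above can be illustrated with a toy Monte Carlo experiment: build an intermittent rain record, subsample it at a satellite-like revisit frequency, and measure the RMS error of the resulting monthly means. All numbers here are illustrative, not from the TOGA COARE data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy intermittent rain record: one value per hour for a 30-day month,
# raining ~10% of the time with exponentially distributed rates.
hours = 30 * 24
rain = np.where(rng.random(hours) < 0.1, rng.exponential(1.0, hours), 0.0)
true_mean = rain.mean()

# A low-orbiting satellite samples the grid box only ~60 times per month.
n_trials, n_obs = 2000, 60
est = np.array([rng.choice(rain, n_obs, replace=False).mean()
                for _ in range(n_trials)])
rms_sampling_error = float(np.sqrt(np.mean((est - true_mean) ** 2)))
```

The RMS error shrinks roughly as the square root of the number of overpasses, and grows with the variance (hence with the mean rate) of the local rain process, which is the qualitative dependence the abstract refers to.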

Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

2000-01-01

360

When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.

Shan, S.; Bevis, M.; Kendrick, E.; Mader, G.L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.

2007-01-01

361

This paper describes an estimation and correction method for the two-dimensional (2D) position errors of a planar XY stage that is driven along the Y-axis by two linear motors. The 2D position errors of the stage were estimated and corrected based on measured motion errors from a conventional laser interferometer system. To compensate for the planar XY stage 2D position

Jooho Hwang; Chun-Hong Park; Chan-Hong Lee; Seung-Woo Kim

2006-01-01

362

Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding. PMID:24692025
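The spectral estimates in question can be sketched as follows: estimate the stimulus-response coherence γ²(f) with a chosen window size, then integrate -log₂(1 - γ²) over frequency to get a Shannon information rate. The photoreceptor model below is a toy stand-in (delayed low-pass filter plus noise), and the window sizes are illustrative, not the paper's:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(1)
fs, n = 1000.0, 2**16
stim = rng.standard_normal(n)                  # white-noise "light contrast"

# Toy photoreceptor: delayed low-pass response plus additive noise.
kernel = np.exp(-np.arange(100) / 10.0)
resp = np.convolve(stim, kernel)[:n]
resp = np.roll(resp, 20) + 0.5 * rng.standard_normal(n)   # 20-sample delay

def info_rate(nperseg):
    f, g2 = coherence(stim, resp, fs=fs, nperseg=nperseg)
    g2 = np.clip(g2, 0.0, 1.0 - 1e-12)
    # Shannon rate from coherence: R = -integral log2(1 - gamma^2) df
    return float(np.sum(-np.log2(1.0 - g2)) * (f[1] - f[0]))

rate_short, rate_long = info_rate(256), info_rate(4096)   # bits per second
```

Short windows relative to the delay bias the coherence (and hence the rate) downward, while long windows leave fewer segments to average and inflate it through random error, which is the trade-off the proposed algorithm balances.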

Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

2014-06-01

363

Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and 14-month-olds’ responses to the omission of a recurring target, on either a 3- or 5-s cycle. At all ages (a) both fixation and pupil dilation measures were time locked to the periodicity of the test interval, and (b) estimation errors grew linearly with the length of the interval, suggesting that trademark interval timing is in place from 4 months. PMID:24979472

2014-01-01

364

Error estimation and adaptive order nodal method for solving multidimensional transport problems

The authors propose a modification of the Arbitrarily High Order Transport Nodal method whereby each node and each direction is solved using a different expansion order. With this feature and a previously proposed a posteriori error estimator, they develop an adaptive order scheme to automatically improve the accuracy of the solution of the transport equation. They implemented the modified nodal method, the error estimator, and the adaptive order scheme in a discrete-ordinates code for solving monoenergetic, fixed-source, isotropic-scattering problems in two-dimensional Cartesian geometry. They solve two test problems with large homogeneous regions to test the adaptive order scheme. The results show that the adaptive process reduces storage requirements while preserving the accuracy of the results.

Zamonsky, O.M.; Gho, C.J. [Bariloche Atomic Center, Rio Negro (Argentina). Instituto Balseiro; Azmy, Y.Y. [Oak Ridge National Lab., TN (United States)

1998-01-01

365

When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. 
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
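The weight pathology described above arises from the exp(-Δ/2) form shared by AIC/AICc/BIC/KIC-based model averaging: moderate differences in the criterion translate into near-total weight on the best model. A small sketch with illustrative IC values:

```python
import numpy as np

def ic_weights(ic):
    """Model-averaging weights from information-criterion values
    (AIC/AICc/BIC/KIC all share the same exp(-delta/2) weighting)."""
    ic = np.asarray(ic, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Small IC gaps -> weight spread across models.
print(ic_weights([100.0, 101.0, 103.0]))   # ≈ [0.55, 0.33, 0.12]
# Large gaps -> the "best" model takes essentially all the weight,
# the pathology described above when model error is ignored.
print(ic_weights([100.0, 130.0, 160.0]))   # ≈ [1.0, 3e-07, 9e-14]
```

Using the total-error covariance Cek instead of the measurement-error covariance CE shrinks the likelihood differences between models, and therefore the IC gaps, which is why it spreads the weights more realistically.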

Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steven B.

2013-07-23

366

When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. 
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.

Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

2013-01-01

367

Error estimation and adaptive mesh refinement in boundary element method, an overview

Further to the previous review article (Adv Engng Software 19(1) (1994) 21–32), this paper reviews more recent studies on the same subject by citing more than one hundred papers. The adaptive mesh refinement process is composed of three processes: error estimation, adaptive tactics, and mesh refinement. Therefore, in this paper, the existing studies are classified and discussed.

E. Kita; N. Kamiya

2001-01-01

368

Estimating Antenna-Pointing Error Using A Focal-Plane Array

NASA Technical Reports Server (NTRS)

Common method of determining residual errors in pointing of paraboloidal-reflector microwave antennas involves constantly dithering antenna mechanically about estimated direction of source. For cases where expense of additional focal-plane collecting horns (and their amplifiers) justified, new method eliminates mechanical dithering. Outputs of multiple receiving feed horns processed to extract phase information indicative of direction of arrival of signal received from distant source.

Zohar, Shalhav; Vilnrotter, Victor A.

1994-01-01

369

The Maximum Entropy Method (MEM) is compared to the periodogram method (DFT) for the estimation of line spectra given an error-free autocorrelation function (ACF). In one computer simulation run, a 250-lag ACF was generated as the sum of 63 cosinusoids with given amplitudes, A_i, and wave numbers, f_i. The wave numbers cover a band from 0 to 89.239 cm^-1 with
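The periodogram side of this comparison can be sketched directly: the ACF of a sum of cosinusoids is itself a sum of cosines, and the DFT of the truncated ACF shows spectral lines at the input wave numbers. The amplitudes and frequencies below are toy values, not Fougere's 63-line test case:

```python
import numpy as np

# Exact ACF of a sum of cosinusoids: r(k) = sum_i (A_i**2 / 2) cos(2 pi f_i k)
amps, freqs = [2.0, 1.0], [0.10, 0.23]     # toy values (cycles per sample)
lags = np.arange(251)                      # a 250-lag error-free ACF
acf = sum(0.5 * a * a * np.cos(2 * np.pi * f * lags)
          for a, f in zip(amps, freqs))

# Periodogram-style estimate: Fourier transform of the truncated ACF.
spec = np.abs(np.fft.rfft(acf, n=4096))
fgrid = np.fft.rfftfreq(4096)
f_peak = fgrid[np.argmax(spec)]            # strongest recovered line
```

The truncation to 250 lags is what limits the periodogram's resolution and produces sidelobes; MEM's appeal in the comparison is that it extrapolates the ACF instead of truncating it.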

Paul F. Fougere; Hanscom AFB

1987-01-01

370

This work presents advances on error estimation of three spatial approximations of the Discrete Ordinates (DO) method for solving the neutron transport equation. The three methods considered are the Diamond Difference (DD) method, the Arbitrarily High Order Transport method of the Nodal type (AHOT-N), and of the Characteristic type (AHOT-C). The AHOT-N method is employed in constant, linear, quadratic and

Jose Ignacio Duo

2008-01-01

371

Error estimates for finite-element Navier-Stokes solvers without standard Inf-Sup conditions

The authors establish error estimates for recently developed finite-element methods for incompressible viscous flow in domains with no-slip boundary conditions. The methods arise by discretization of a well-posed extended Navier-Stokes dynamics for which pressure is determined from current velocity and force fields. The methods use C^1 elements for velocity and C^0 elements for pressure. A stability estimate is

Jian-Guo Liu; Robert L. Pego

2009-01-01

372

NASA Technical Reports Server (NTRS)

Long-baseline space interferometers involving formation flying of multiple spacecraft hold great promise as future space missions for high-resolution imagery. The major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to control these spacecraft and their optics payloads in the specified configuration accurately. In this paper, we describe our effort toward fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present an estimation procedure that effectively extracts relative x/y translational exit pupil aperture deviations from the raw interferometric image with small estimation errors.

Lu, Hui-Ling; Cheng, Victor H. L.; Leitner, Jesse A.; Carpenter, Kenneth G.

2004-01-01

373

NASA Astrophysics Data System (ADS)

A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
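The regularization-plus-cross-validation idea is generic and can be sketched on a toy fitting problem: a flexible polynomial model space, a ridge (Tikhonov) penalty, and the penalty strength chosen by k-fold cross-validation. Everything below is illustrative; it is not the BEEF-vdW machinery itself:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 40)
y = np.sin(np.pi * x) + 0.1 * rng.standard_normal(x.size)

# High-order polynomial model space (degree 15): unregularized least
# squares can overfit; a CV-tuned ridge penalty keeps it in check.
X = np.vander(x, 16, increasing=True)

def ridge_fit(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def cv_error(lam, k=5):
    folds = np.array_split(np.arange(x.size), k)
    err = 0.0
    for f in folds:
        mask = np.ones(x.size, bool)
        mask[f] = False
        c = ridge_fit(X[mask], y[mask], lam)
        err += np.sum((X[f] @ c - y[f]) ** 2)
    return err / x.size

lams = 10.0 ** np.arange(-8, 2)
best = lams[np.argmin([cv_error(l) for l in lams])]
coef = ridge_fit(X, y, best)
```

In the paper the "data" are materials-property training sets and the model space is an exchange-correlation expansion, but the mechanism for avoiding overfitting in a very flexible model space is the same.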

Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.

2012-06-01

374

Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

NASA Technical Reports Server (NTRS)

Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal-force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
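The extrapolation from a base grid to an "infinite-size" grid is typically a Richardson-style estimate; a minimal sketch with a hypothetical coefficient and assumed second-order convergence (the abstract does not state the exact formula used):

```python
# Richardson-style extrapolation of a grid-converged quantity to an
# "infinite-size" grid: f_exact ≈ f_fine + (f_fine - f_coarse)/(r**p - 1),
# with grid refinement ratio r and observed convergence order p.
def extrapolate(f_coarse, f_fine, r=2.0, p=2.0):
    return f_fine + (f_fine - f_coarse) / (r**p - 1.0)

# Hypothetical normal-force coefficient converging as O(h^2):
cn_coarse = 1.040      # base-grid value (error +0.040)
cn_fine = 1.010        # half the spacing, a quarter of the error
print(extrapolate(cn_coarse, cn_fine))   # → 1.0
```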

Abdol-Hamid, Khaled S.; Ghaffari, Farhad

2012-01-01

375

Adjoint-based error estimation and mesh adaptation for hybridized discontinuous Galerkin methods

NASA Astrophysics Data System (ADS)

We present a robust and efficient target-based mesh adaptation methodology, building on hybridized discontinuous Galerkin schemes for (nonlinear) convection-diffusion problems, including the compressible Euler and Navier-Stokes equations. Hybridization of finite element discretizations has the main advantage that the resulting set of algebraic equations has globally coupled degrees of freedom only on the skeleton of the computational mesh. Consequently, solving for these degrees of freedom involves the solution of a potentially much smaller system. This not only reduces storage requirements, but also allows for a faster solution with iterative solvers. The mesh adaptation is driven by an error estimate obtained via a discrete adjoint approach. Furthermore, the computed target functional can be corrected with this error estimate to obtain an even more accurate value. The aim of this paper is twofold: first, to show the superiority of adjoint-based mesh adaptation over uniform and residual-based mesh refinement; and second, to investigate the efficiency of the global error estimate.
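The adjoint-based correction of the target functional can be shown exactly on a small linear problem, where the adjoint-weighted residual recovers the functional error to machine precision:

```python
import numpy as np

# Discrete adjoint error estimate on a linear problem A u = f with
# target functional J(u) = g·u. With the adjoint solve A^T psi = g,
# the functional error of any approximation u_h is psi·(f - A u_h).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
f = np.array([1.0, 2.0])
g = np.array([1.0, 0.0])

u = np.linalg.solve(A, f)            # "true" discrete solution
u_h = np.array([0.0, 0.5])           # crude approximation
psi = np.linalg.solve(A.T, g)        # adjoint solution

eta = psi @ (f - A @ u_h)            # adjoint-weighted residual
J_corrected = g @ u_h + eta          # equals g @ u exactly (linear case)
```

For nonlinear problems the correction is only approximate, which is why the paper separately investigates how efficient the global error estimate is.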

Woopen, M.; May, G.; Schütz, J.

2014-12-01

376

Error analysis of leaf area estimates made from allometric regression models

NASA Technical Reports Server (NTRS)

Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).

Feiveson, A. H.; Chhikara, R. S.

1986-01-01

377

Quantifying errors in coral-based ENSO estimates: Toward improved forward modeling of δ18O

NASA Astrophysics Data System (ADS)

The oxygen isotopic ratio (δ18O) in tropical Pacific coral skeletons reflects past El Niño-Southern Oscillation (ENSO) variability, but the δ18O-ENSO relationship is poorly quantified. Uncertainties arise when constructing δ18O data sets, combining records from different sites, and converting between δ18O and sea surface temperature (SST) and salinity (SSS). Here we use seasonally resolved δ18O from 1958 to 1985 at 15 tropical Pacific sites to estimate these errors and evaluate possible improvements. Observational uncertainties from Kiritimati, New Caledonia, and Rarotonga are 0.12-0.14‰, leading to errors of 8-25% on the typical δ18O variance. Multicoral syntheses using five to seven sites capture the principal components (PCs) well, but site selection dramatically influences ENSO spatial structure: Using sites in the eastern Pacific, western Pacific warm pool, and South Pacific Convergence Zone (SPCZ) captures "eastern Pacific-type" variability, while "Central Pacific-type" events are best observed by combining sites in the warm pool and SPCZ. The major obstacle to quantitative ENSO estimation is the δ18O/climate conversion, demonstrated by the large errors on both δ18O variance and the amplitude of the first principal component resulting from the use of commonly employed bivariate formulae to relate SST and SSS to δ18O. Errors likely arise from either the instrumental data used for pseudoproxy calibration or influences from other processes (δ18O advection/atmospheric fractionation, etc.). At some sites, modeling seasonal changes to these influences reduces conversion errors by up to 20%. This indicates that understanding of past ENSO dynamics using coral δ18O could be greatly advanced by improving δ18O forward models.

Stevenson, S.; McGregor, H. V.; Phipps, S. J.; Fox-Kemper, Baylor

2013-12-01

378

A Systematic Review of Cross vs. Within Company Cost Estimation Studies

OBJECTIVE - The objective of this paper is to determine under what circumstances individual organisations would be able to rely on cross-company based estimation models. METHOD - We performed a systematic review of studies that compared predictions from cross- company models with predictions from within-company models based on analysis of project data. RESULTS - Ten papers compared cross-company and within-company

Barbara Kitchenham; Emilia Mendes; Guilherme H. Travassos

379

Noninferiority trial design and analyses are commonly used to establish the effectiveness of a new antimicrobial drug for treatment of serious infections such as complicated urinary tract infection (cUTI). A systematic review and meta-analysis were conducted to estimate the treatment effects of three potential active comparator drugs for the design of a noninferiority trial. The systematic review identified no placebo trials of cUTI, four clinical trials of cUTI with uncomplicated urinary tract infection as a proxy for placebo, and nine trials with reports of treatment effect estimates for doripenem, levofloxacin, or imipenem-cilastatin. In the meta-analysis, the primary efficacy endpoint of interest was the microbiological eradication rate at the test-of-cure visit in the microbiological intent-to-treat population. The estimated eradication rates and corresponding 95% confidence intervals (CI) were 31.8% (26.5% to 37.2%) for placebo, 81% (77.7% to 84.2%) for doripenem, 79% (75.9% to 82.2%) for levofloxacin, and 80.5% (71.9% to 89.1%) for imipenem-cilastatin. The treatment effect estimates were 40.5% for doripenem, 38.7% for levofloxacin, 34.7% for imipenem-cilastatin, and 40.8% overall. These treatment effect estimates can be used to inform the design and analysis of future noninferiority trials in cUTI study populations. PMID:23939900
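The reported treatment effects are consistent with the conservative convention of subtracting the placebo-proxy upper confidence bound from each drug's lower bound; this reading is our inference from the numbers, not stated in the abstract:

```python
# Conservative treatment effect: drug lower 95% CI bound minus the
# placebo-proxy upper 95% CI bound (all values in percent, from above).
placebo_upper = 37.2
drug_lower = {"doripenem": 77.7,
              "levofloxacin": 75.9,
              "imipenem-cilastatin": 71.9}

effects = {d: round(lo - placebo_upper, 1) for d, lo in drug_lower.items()}
print(effects)
# {'doripenem': 40.5, 'levofloxacin': 38.7, 'imipenem-cilastatin': 34.7}
```

This reproduces all three reported effect estimates (40.5%, 38.7%, 34.7%), the margin-setting inputs for a future noninferiority design.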

Li, Gang; Mitrani-Gold, Fanny S.; Kurtinecz, Milena; Wetherington, Jeffrey; Tomayko, John F.; Mundy, Linda M.

2013-01-01

380

Noninferiority trial design and analyses are commonly used to establish the effectiveness of a new antimicrobial drug for treatment of serious infections such as complicated urinary tract infection (cUTI). A systematic review and meta-analysis were conducted to estimate the treatment effects of three potential active comparator drugs for the design of a noninferiority trial. The systematic review identified no placebo trials of cUTI, four clinical trials of cUTI with uncomplicated urinary tract infection as a proxy for placebo, and nine trials with reports of treatment effect estimates for doripenem, levofloxacin, or imipenem-cilastatin. In the meta-analysis, the primary efficacy endpoint of interest was the microbiological eradication rate at the test-of-cure visit in the microbiological intent-to-treat population. The estimated eradication rates and corresponding 95% confidence intervals (CI) were 31.8% (26.5% to 37.2%) for placebo, 81% (77.7% to 84.2%) for doripenem, 79% (75.9% to 82.2%) for levofloxacin, and 80.5% (71.9% to 89.1%) for imipenem-cilastatin. The treatment effect estimates were 40.5% for doripenem, 38.7% for levofloxacin, 34.7% for imipenem-cilastatin, and 40.8% overall. These treatment effect estimates can be used to inform the design and analysis of future noninferiority trials in cUTI study populations. PMID:23939900

Singh, Krishan P; Li, Gang; Mitrani-Gold, Fanny S; Kurtinecz, Milena; Wetherington, Jeffrey; Tomayko, John F; Mundy, Linda M

2013-11-01

381

NASA Astrophysics Data System (ADS)

We perform a joint inversion of Earth's geoid and dynamic topography for radial mantle viscosity structure using a number of models of interior density heterogeneities, including an assessment of the error budget. We identify three classes of errors: those related to the density perturbations used as input, those due to insufficiently constrained observables, and those due to the limitations of our analytical model. We estimate the amplitudes of these errors in the spectral domain. Our minimization function weights the squared deviations of the compared quantities with the corresponding errors, so that the components with more reliability contribute to the solution more strongly than less certain ones. We develop a quasi-analytical solution for mantle flow in a compressible, spherical shell with Newtonian rheology, allowing for continuous radial variations of viscosity, together with a possible reduction of viscosity within the phase change regions due to the effects of transformational superplasticity. The inversion reveals three distinct families of viscosity profiles, all of which have an order of magnitude stiffening within the lower mantle, with a soft D'' layer below. The main distinction among the families is the location of the lowest-viscosity region: directly beneath the lithosphere, just above 400 km depth, or just above 670 km depth. All profiles have a reduction of viscosity within one or more of the major phase transformations, leading to reduced dynamic topography, so that whole-mantle convection is consistent with small surface topography.
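The error-weighted minimization described above is a standard chi-square-style misfit; a minimal sketch with toy numbers showing how less reliable components are down-weighted:

```python
import numpy as np

# Error-weighted misfit: each squared deviation is divided by its
# uncertainty, so reliable components dominate the solution.
def weighted_misfit(observed, predicted, sigma):
    r = (np.asarray(observed) - np.asarray(predicted)) / np.asarray(sigma)
    return float(np.sum(r ** 2))

obs  = [1.0, 2.0, 3.0]
pred = [1.1, 2.1, 3.1]          # same raw deviation (0.1) in each component
sig  = [0.1, 1.0, 10.0]         # very different reliabilities

print(weighted_misfit(obs, pred, sig))  # ≈ 1.0101, dominated by the first term
```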

Panasyuk, Svetlana V.; Hager, Bradford H.

2000-12-01

382

Background Presented is the method “Detection and Outline Error Estimates” (DOEE) for assessing rater agreement in the delineation of multiple sclerosis (MS) lesions. The DOEE method divides operator or rater assessment into two parts: 1) Detection Error (DE) -- rater agreement in detecting the same regions to mark, and 2) Outline Error (OE) -- agreement of the raters in outlining the same lesion. Methods DE, OE and Similarity Index (SI) values were calculated for two raters tested on a set of 17 fluid-attenuated inversion-recovery (FLAIR) images of patients with MS. DE, OE, and SI values were tested for dependence on the mean total area (MTA) of the raters' Regions of Interest (ROIs). Results When correlated with MTA, neither DE (ρ = 0.056, p = .83) nor the ratio of OE to MTA (ρ = 0.23, p = .37), referred to as Outline Error Rate (OER), exhibited significant correlation. In contrast, SI was found to be strongly correlated with MTA (ρ = 0.75, p…
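The Similarity Index mentioned above is the standard overlap measure SI = 2|A∩B| / (|A| + |B|) for two raters' binary masks. A toy illustration (the 1-D masks are invented stand-ins for image ROIs):

```python
import numpy as np

# Illustrative only: 1-D binary "lesion masks" for two raters.
a = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0], dtype=bool)
b = np.array([0, 0, 0, 1, 1, 1, 0, 0, 0, 0], dtype=bool)

# Similarity Index: twice the overlap divided by the summed areas.
si = 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
print(si)  # 0.5 for these masks
```

Because SI mixes detection and outline disagreement into one number, it correlates with lesion area, which is exactly the dependence the DOEE split is designed to avoid.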

2012-01-01

383

Background Multidrug-resistant tuberculosis (MDR-TB) threatens to reverse recent reductions in global tuberculosis (TB) incidence. Although children under 15 years of age constitute >25% of the worldwide population, the global incidence of MDR-TB disease in children has never been quantified. Methods Our approach for estimating regional and global annual incidence of MDR-TB in children required development of two models: one to estimate the setting-specific risk of MDR-TB among child TB cases, and a second to estimate the setting-specific incidence of TB disease in children. The model for MDR-TB risk among children with TB required a systematic literature review. We multiplied the setting-specific estimates of MDR-TB risk and TB incidence to estimate regional and global incidence of MDR-TB disease in children in 2010. Findings We identified 3,403 papers, of which 97 studies met inclusion criteria for the systematic review of MDR-TB risk. Thirty-one studies reported the risk of MDR-TB among both children and treatment-naïve adults with TB and were used for evaluating the linear association between MDR-TB risk in these two patient groups. We found that the setting-specific risk of MDR-TB was nearly identical in children and treatment-naïve adults with TB, consistent with the assertion that MDR-TB in both groups reflects the local risk of transmitted MDR-TB. Applying these calculated risks, we estimated that around 1,000,000 (95% Confidence Interval: 938,000 – 1,055,000) children developed TB disease in 2010, among whom 32,000 (95% Confidence Interval: 26,000 – 39,000) had MDR-TB. Interpretation Our estimates highlight a massive detection gap for children with TB and MDR-TB disease. Future estimates can be refined as more and better TB data and new diagnostic tools become available. PMID:24671080
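The multiplication step of the two-model approach can be sketched with invented setting-level numbers (these are illustrative, not the study's estimates):

```python
# Hypothetical illustration of the estimation step described above:
# MDR-TB incidence in children = sum over settings of
# (child TB cases) x (MDR-TB risk among child TB cases).
settings = {
    # setting: (estimated child TB cases per year, MDR-TB risk among child TB cases)
    "setting A": (120_000, 0.035),
    "setting B": (300_000, 0.020),
    "setting C": (80_000, 0.060),
}

mdr_tb_cases = sum(cases * risk for cases, risk in settings.values())
print(f"{mdr_tb_cases:.0f}")  # 15000 for these invented inputs
```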

Jenkins, Helen E.; Tolman, Arielle W.; Yuen, Courtney M.; Parr, Jonathan B.; Keshavjee, Salmaan; Pérez-Vélez, Carlos M.; Pagano, Marcello; Becerra, Mercedes C.; Cohen, Ted

2014-01-01

384

Optimum data weighting and error calibration for estimation of gravitational parameters

NASA Technical Reports Server (NTRS)

A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least-squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in the Goddard Earth Model-T1 (GEM-T1) were employed in applying this technique to gravity field parameters. GEM-T2 (31 satellites) was also recently computed as a direct application of the method and is summarized. The method employs subset solutions of the data associated with the complete solution and adjusts the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.

Lerch, Francis J.

1989-01-01

385

Optimum data weighting and error calibration for estimation of gravitational parameters

NASA Technical Reports Server (NTRS)

A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
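The calibration idea can be sketched with a toy numerical example (invented numbers; the actual GEM solutions involve thousands of parameters): compare subset-minus-complete parameter differences against their predicted sigmas, then rescale the weights until the two agree.

```python
import numpy as np

# Invented example: 4 parameters from a complete solution and a subset
# solution, with predicted standard deviations of their differences.
p_complete = np.array([1.00, 2.00, -0.50, 0.25])
p_subset = np.array([1.08, 1.85, -0.42, 0.31])
sigma_diff = np.array([0.02, 0.04, 0.02, 0.015])

# Ratio of observed to predicted scatter; k > 1 means the formal errors
# (and hence the data weights) are too optimistic.
k = np.sqrt(np.mean(((p_subset - p_complete) / sigma_diff) ** 2))
weight_scale = 1.0 / k**2  # factor by which to deflate the data weights
```

Iterating such a rescaling until k is near 1 yields weights whose formal errors agree with the subset-solution scatter, mirroring the automatic calibration described above.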

Lerch, F. J.

1989-01-01

386

Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains

NASA Technical Reports Server (NTRS)

Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting an RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
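The triple collocation idea can be sketched with synthetic data (an illustrative toy, not the authors' implementation; the product names in the comments are assumptions): given three collocated products whose errors are independent, the error variance of each product follows from the pairwise covariances.

```python
import numpy as np

# Synthetic triple-collocation sketch: three products measure one signal
# with independent random errors of (true) std 0.3, 0.5 and 0.4.
rng = np.random.default_rng(0)
n = 20000
truth = rng.standard_normal(n)             # common anomaly signal
x = truth + 0.3 * rng.standard_normal(n)   # e.g. an ASCAT-like product (assumed)
y = truth + 0.5 * rng.standard_normal(n)   # e.g. an AMSR-E-like product (assumed)
z = truth + 0.4 * rng.standard_normal(n)   # e.g. a model product (assumed)

def tc_error_var(a, b, c):
    """Error variance of a: var(a) - cov(a,b) * cov(a,c) / cov(b,c)."""
    cov = lambda u, v: np.cov(u, v)[0, 1]
    return np.var(a, ddof=1) - cov(a, b) * cov(a, c) / cov(b, c)

err_x = np.sqrt(tc_error_var(x, y, z))     # should recover roughly 0.3
frmse_x = err_x / np.std(x)                # fractional RMSE, as in the abstract
```

Dividing by the time-series standard deviation gives the fractional RMSE used in the abstract, sidestepping the choice of a reference climatology.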

Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang

2013-01-01

387

NASA Technical Reports Server (NTRS)

The author has identified the following significant results. The probability of correct classification of various populations in the data was defined as the primary performance index. The multispectral data, being of a multiclass nature as well, required a Bayes error estimation procedure dependent on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.

Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (principal investigators)

1978-01-01

388

NASA Astrophysics Data System (ADS)

Predictions of the urban hydrologic response are of paramount importance to foresee flooding and sewer overflows and hence support sensible decision making. Due to several error sources, model results are uncertain. By modeling these uncertainties statistically, we can estimate how reliable predictions are. Most hydrological studies in urban areas (e.g. Freni and Mannina, 2010) assume that the residuals E are independent and identically distributed. These hypotheses are usually strongly violated due to neglected deficits in model structure and errors in input data, which lead to strong autocorrelation. We propose a new methodology to i) estimate the total uncertainty and ii) quantify the different types of error affecting model results, namely parametric, structural, input data, and calibration data uncertainty. Thereby we can make more realistic assumptions about the residuals. We consider the residual process to be the sum of an autocorrelated error term B and a memory-less uncertainty term E. As proposed by Reichert and Schuwirth (2012), B, called model inadequacy or bias, is described by a normally distributed autoregressive process and accounts for structural deficiencies and errors in input measurements. The observation error E is instead normally and independently distributed. Since urban watersheds are extremely responsive to precipitation events, we modified this framework, making the bias input-dependent and transforming model results and data to stabilize the residual variance. To show the improvement in uncertainty quantification, we analyzed the response of a monitored stormwater system. We modeled the outlet discharge for several rain events using a conceptual model. For comparison, we computed the uncertainties with the traditional independent error model (e.g. Freni and Mannina, 2010). The quality of the prediction uncertainty bands was analyzed through residual diagnostics in the calibration phase and prediction coverage in the validation phase.
The results of this study clearly show that the input-dependent autocorrelated error model outperforms the independent residual representation. This is evident when comparing the fulfillment of the distributional assumptions on E. The bias error model produces realizations of E that are much smaller (and thus more realistic), less autocorrelated and less heteroskedastic than with the current model. Furthermore, the proportion of validation data falling into the 95% credibility intervals is circa 15% higher when accounting for bias than under the independence assumption. Our framework for describing model bias appears very promising for improving the fulfillment of the statistical assumptions and for decomposing predictive uncertainty. We believe that the proposed error model will be suitable for many applications because the computational expense is only negligibly increased compared to the traditional approach. In future work we will show how to use this approach with complex hydrodynamic models to further separate the effects of structural deficits and input uncertainty. References: P. Reichert and N. Schuwirth. 2012. Linking statistical bias description to multiobjective model calibration. Water Resources Research, 48, W09543, doi:10.1029/2011WR011391. G. Freni and G. Mannina. 2010. Bayesian approach for uncertainty quantification in water quality modelling: the influence of prior distribution. Journal of Hydrology, 392, 31-39, doi:10.1016/j.jhydrol.2010.07.043.
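The residual decomposition can be sketched as follows (parameter values are invented for illustration; the actual study additionally makes the bias input-dependent and applies variance-stabilizing transformations):

```python
import numpy as np

# Sketch: total residual = autocorrelated bias B (stationary AR(1)) plus
# white observation noise E, following Reichert and Schuwirth (2012).
# tau, sigma_b and sigma_e are assumed values, not calibrated ones.
rng = np.random.default_rng(42)
n, dt = 500, 1.0                 # number of time steps and step size
tau, sigma_b, sigma_e = 10.0, 0.4, 0.1

phi = np.exp(-dt / tau)          # AR(1) coefficient from the correlation time
b = np.zeros(n)
for t in range(1, n):
    # innovation variance chosen so that B is stationary with std sigma_b
    b[t] = phi * b[t - 1] + sigma_b * np.sqrt(1 - phi**2) * rng.standard_normal()

e = sigma_e * rng.standard_normal(n)   # independent observation error
residuals = b + e                      # what calibration data would show
```

Treating B and E as separate terms in the likelihood is what allows the predictive uncertainty to be decomposed into the error types listed above.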

Del Giudice, Dario; Reichert, Peter; Honti, Mark; Scheidegger, Andreas; Albert, Carlo; Rieckermann, Jörg

2013-04-01

389

Background A widely-used approach for screening nuclear DNA markers is to obtain sequence data and use bioinformatic algorithms to estimate which two alleles are present in heterozygous individuals. It is common practice to omit unresolved genotypes from downstream analyses, but the implications of this have not been investigated. We evaluated the haplotype reconstruction method implemented by PHASE in the context of phylogeographic applications. Empirical sequence datasets from five non-coding nuclear loci with gametic phase ascribed by molecular approaches were coupled with simulated datasets to investigate three key issues: (1) haplotype reconstruction error rates and the nature of inference errors, (2) dataset features and genotypic configurations that drive haplotype reconstruction uncertainty, and (3) impacts of omitting unresolved genotypes on levels of observed phylogenetic diversity and the accuracy of downstream phylogeographic analyses. Results We found that PHASE usually had very low false-positives (i.e., a low rate of confidently inferring haplotype pairs that were incorrect). The majority of genotypes that could not be resolved with high confidence included an allele occurring only once in a dataset, and genotypic configurations involving two low-frequency alleles were disproportionately represented in the pool of unresolved genotypes. The standard practice of omitting unresolved genotypes from downstream analyses can lead to considerable reductions in overall phylogenetic diversity that is skewed towards the loss of alleles with larger-than-average pairwise sequence divergences, and in turn, this causes systematic bias in estimates of important population genetic parameters. Conclusions A combination of experimental and computational approaches for resolving phase of segregating sites in phylogeographic applications is essential. 
We outline practical approaches to mitigating potential impacts of computational haplotype reconstruction on phylogeographic inferences. With targeted application of laboratory procedures that enable unambiguous phase determination via physical isolation of alleles from diploid PCR products, relatively little investment of time and effort is needed to overcome the observed biases. PMID:20429950

2010-01-01

390

A latent variable model approach to estimating systematic bias in the oversampling method.

The method of oversampling data from a preselected range of a variable's distribution is often applied by researchers who wish to study rare outcomes without substantially increasing sample size. Despite frequent use, however, it is not known whether this method introduces statistical bias due to disproportionate representation of a particular range of data. The present study employed simulated data sets to examine how oversampling introduces systematic bias in effect size estimates (of the relationship between oversampled predictor variables and the outcome variable), as compared with estimates based on a random sample. In general, results indicated that increased oversampling was associated with a decrease in the absolute value of effect size estimates. Critically, however, the actual magnitude of this decrease in effect size estimates was nominal. This finding thus provides the first evidence that the use of the oversampling method does not systematically bias results to a degree that would typically impact results in behavioral research. Examining the effect of sample size on oversampling yielded an additional important finding: For smaller samples, the use of oversampling may be necessary to avoid spuriously inflated effect sizes, which can arise when the number of predictor variables and rare outcomes is comparable. PMID:24142836

Hauner, Katherina K; Zinbarg, Richard E; Revelle, William

2014-09-01

391

Errors in the estimation of wall shear stress by maximum Doppler velocity

Objective Wall shear stress (WSS) is an important parameter with links to vascular (dys)function. Difficult to measure directly, WSS is often inferred from maximum spectral Doppler velocity (Vmax) by assuming fully-developed flow, which is valid only if the vessel is long and straight. Motivated by evidence that even slight/local curvatures in the nominally straight common carotid artery (CCA) prevent flow from fully developing, we investigated the effects of velocity profile skewing on Vmax-derived WSS. Methods Velocity profiles, representing different degrees of skewing, were extracted from the CCA of image-based computational fluid dynamics (CFD) simulations carried out as part of the VALIDATE study. Maximum velocities were calculated from idealized sample volumes and used to estimate WSS via fully-developed (Poiseuille or Womersley) velocity profiles, for comparison with the actual (i.e. CFD-derived) WSS. Results For cycle-averaged WSS, mild velocity profile skewing caused ±25% errors by assuming Poiseuille or Womersley profiles, while severe skewing caused a median error of 30% (maximum 55%). Peak systolic WSS was underestimated by ~50% irrespective of skewing with Poiseuille; using a Womersley profile removed this bias, but ±30% errors remained. Errors were greatest in late systole, when skewing was most pronounced. Skewing also introduced large circumferential WSS variations: ±60%, and up to ±100%, of the circumferentially averaged value. Conclusion Vmax-derived WSS may be prone to substantial variable errors related to velocity profile skewing, and cannot detect possibly large circumferential WSS variations. Caution should be exercised when making assumptions about velocity profile shape to calculate WSS, even in vessels usually considered long and straight. PMID:23398945
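The fully-developed assumption under test corresponds, in the steady case, to the Poiseuille relation WSS = 4 μ Vmax / R. A sketch with invented parameter values (not the study's data):

```python
# Poiseuille-based WSS from maximum Doppler velocity; all values assumed
# for illustration. Valid only if the velocity profile is fully developed,
# which the study shows often fails even in the nominally straight CCA.
mu = 0.0035      # blood dynamic viscosity, Pa*s (assumed)
vmax = 0.8       # maximum Doppler velocity, m/s (assumed)
radius = 0.003   # vessel radius, m (assumed)

wss = 4 * mu * vmax / radius
print(f"Poiseuille WSS estimate: {wss:.2f} Pa")  # 3.73 Pa for these inputs
```

The abstract's point is precisely that this convenient formula inherits large, variable errors whenever the profile is skewed.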

Mynard, Jonathan P.; Wasserman, Bruce A.; Steinman, David A.

2015-01-01

392

On the Estimation of Errors in Sparse Bathymetric Geophysical Data Sets

NASA Astrophysics Data System (ADS)

There is a growing demand in the geophysical community for better regional representations of the world ocean's bathymetry. However, given the vastness of the oceans and the relatively limited coverage of even the most modern mapping systems, it is likely that many of the older data sets will remain part of our cumulative database for several more decades. Therefore, regional bathymetric compilations based on a mixture of historic and contemporary data sets will have to remain the standard. This raises the problem of assembling bathymetric compilations and utilizing data sets not only with heterogeneous coverage but also with a wide range of accuracies. In combining these data into the regularly spaced grids of bathymetric values that the majority of numerical procedures in earth sciences require, we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in most bathymetric compilations. We approach this problem via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata, to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available metadata and, when this is not available, on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model.
These datasets are then re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard error estimates. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final products of the estimation are a collection of standard error grids, which we combine with the source data density in order to create a grid that contains information about the bathymetry model's reliability. Jakobsson, M., Cherkis, N., Woodward, J., Coakley, B., and Macnab, R., 2000, A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions, American Geophysical Union, v. 81, no. 9, p. 89, 93, 96.
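The perturb-and-regrid loop can be sketched in miniature (a 1-D toy with invented depths and errors, and binned-mean gridding standing in for the actual interpolation scheme):

```python
import numpy as np

# Direct-simulation Monte Carlo sketch: perturb soundings by their a priori
# errors, re-grid each realization, and use the per-cell spread across
# realizations as a standard-error grid. All inputs are invented.
rng = np.random.default_rng(1)
n_pts, n_runs, n_cells = 300, 100, 10

x = rng.uniform(0, 1, n_pts)                  # sounding positions (1-D toy)
depth = 100 + 20 * np.sin(2 * np.pi * x)      # "true" depths
sigma = rng.uniform(0.5, 5.0, n_pts)          # a priori error per sounding
cell = np.minimum((x * n_cells).astype(int), n_cells - 1)

grids = np.empty((n_runs, n_cells))
for r in range(n_runs):
    pert = depth + sigma * rng.standard_normal(n_pts)  # one synthetic dataset
    for c in range(n_cells):
        grids[r, c] = pert[cell == c].mean()           # binned-mean "grid"

std_error_grid = grids.std(axis=0)            # per-cell standard error
```

Combining such a standard-error grid with source-data density yields the reliability grid described in the abstract.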

Jakobsson, M.; Calder, B.; Mayer, L.; Armstrong, A.

2001-05-01

393

Estimating random errors due to shot noise in backscatter lidar observations.

We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment. PMID:16778954
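The NSF idea can be sketched with a toy multiplied-Poisson signal (the gain value and the simple gain-times-Poisson model are assumptions for illustration):

```python
import numpy as np

# For a Poisson process scaled by a detector gain g, var = g * mean, so
# noise RMS = sqrt(g) * sqrt(mean): the NSF here is sqrt(g). Once the NSF
# is calibrated, shot noise can be estimated from any single sample.
rng = np.random.default_rng(7)
gain = 4.0                                   # assumed detector gain
signal = gain * rng.poisson(lam=50.0, size=100_000)

nsf = signal.std() / np.sqrt(signal.mean())  # calibrated NSF (~sqrt(gain) = 2)
single_sample = signal[0]
est_noise_rms = nsf * np.sqrt(single_sample) # shot-noise estimate, one sample
```

This is the advantage the abstract highlights: the per-sample error estimate needs only the sample itself once the proportionality constant is known.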

Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang

2006-06-20

394

Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

NASA Technical Reports Server (NTRS)

In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).

Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

2006-01-01

395

NASA Technical Reports Server (NTRS)

Maximum likelihood estimation of parameters in linear structural relationships under normality assumptions requires knowledge of one or more of the model parameters if no replication is available. The most common assumption added to the model definition is that the ratio of the error variances of the response and predictor variates is known. The use of asymptotic formulae for variances and mean squared errors as a function of sample size and the assumed value for the error variance ratio is investigated.
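When the error-variance ratio λ is assumed known, the maximum likelihood estimator for the linear structural relationship is Deming regression. A sketch with simulated data (true slope 2, intercept 1, both error variances 1, so λ = 1; all values invented):

```python
import numpy as np

# Deming regression: ML fit of a linear structural relationship when
# lam = var(response error) / var(predictor error) is assumed known.
rng = np.random.default_rng(3)
n, lam = 500, 1.0
xi = rng.uniform(0, 10, n)                    # true (latent) predictor values
x = xi + rng.standard_normal(n)               # predictor observed with error
y = 2.0 * xi + 1.0 + rng.standard_normal(n)   # response observed with error

sxx = np.var(x, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(x, y)[0, 1]
slope = (syy - lam * sxx
         + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy**2)) / (2 * sxy)
intercept = y.mean() - slope * x.mean()       # should recover roughly 2 and 1
```

An ordinary least-squares fit of y on x would be attenuated toward zero here; assuming the correct λ removes that bias, which is why the asymptotic behavior of the estimator under an assumed ratio matters.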

Lakshminarayanan, M. Y.; Gunst, R. F.

1984-01-01

396

Inertial Sensor-Based Methods in Walking Speed Estimation: A Systematic Review

Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention over the past two decades, and the trend is continuing due to the improved performance and decreasing cost of miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial sensor based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm. PMID:22778632

Yang, Shuozhi; Li, Qingguo

2012-01-01

397

In this paper we consider (hierarchical, Lagrange) reduced basis approximation and a posteriori error estimation for linear functional outputs of affinely parametrized elliptic coercive partial differential equations. The essential ingredients are: (primal-dual) Galerkin projection onto a low-dimensional space associated with a smooth “parametric manifold” - dimension reduction; efficient and effective greedy sampling methods for identification of optimal and numerically stable approximations - rapid…

G. Rozza; D. B. P. Huynh; A. T. Patera

2007-01-01

398

Methods used to estimate the size of the owned cat and dog population: a systematic review

Background There are a number of different methods that can be used when estimating the size of the owned cat and dog population in a region, leading to varying population estimates. The aim of this study was to conduct a systematic review to evaluate the methods that have been used for estimating the sizes of owned cat and dog populations and to assess the biases associated with those methods. A comprehensive, systematic search of seven electronic bibliographic databases and the Google search engine was carried out using a range of different search terms for cats, dogs and population. The inclusion criteria were that the studies had involved owned or pet domestic dogs and/or cats, provided an estimate of the size of the owned dog or cat population, collected raw data on dog and cat ownership, and analysed primary data. Data relating to study methodology were extracted and assessed for biases. Results Seven papers were included in the final analysis. Collection methods used to select participants in the included studies were: mailed surveys using a commercial list of contacts, door-to-door surveys, random-digit-dialled telephone surveys, and randomised telephone surveys using a commercial list of numbers. Analytical and statistical methods used to estimate the pet population size were: mean number of dogs/cats per household multiplied by the number of households in an area, human density multiplied by the number of dogs per human, and calculations using predictors of pet ownership. Conclusion The main biases of the studies included selection bias, non-response bias, measurement bias and biases associated with length of sampling time. Careful design and planning of studies is a necessity before executing a study to estimate pet populations. PMID:23777563
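The first analytical method listed (mean pets per household times number of households) can be written out with invented figures, purely to make the arithmetic concrete:

```python
# Invented figures, illustrating the per-household estimation method
# mentioned above rather than any study's actual data.
mean_dogs_per_household = 0.6
mean_cats_per_household = 0.8
households_in_region = 250_000

estimated_dogs = mean_dogs_per_household * households_in_region
estimated_cats = mean_cats_per_household * households_in_region
print(round(estimated_dogs), round(estimated_cats))  # 150000 200000
```

The review's point is that the household means themselves carry the selection and non-response biases, which then propagate directly into the population totals.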

2013-01-01

399

ERIC Educational Resources Information Center

The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…

Paek, Insu; Cai, Li

2014-01-01

400

Adaptive Monte Carlo applied to uncertainty estimation in five axis machine tool link errors

École Normale Supérieure de Cachan, 61 Avenue du Président Wilson, 94230 Cachan, France

Abstract: Knowledge of a machine tool … five axis machine tool. The identification is based on volumetric error measurements for different poses

Université Paris-Sud XI

401

Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes

We present a new method of metric recovery for minimization of L_p-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates and is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON, FRANCE; Vassilevski, Yuri [RUSSIA

2009-01-01

402

NASA Astrophysics Data System (ADS)

An error control technique aimed at assessing the quality of smoothed finite element approximations is presented in this paper. Finite element techniques based on strain smoothing, which appeared in 2007, were shown to provide significant advantages compared to conventional finite element approximations. In particular, a widely cited strength of such methods is improved accuracy for the same computational cost. Yet, few attempts have been made to directly assess the quality of the results obtained during the simulation by evaluating an estimate of the discretization error. Here we propose a recovery-type error estimator based on an enhanced recovery technique. The salient features of the recovery are: enforcement of local equilibrium and, for singular problems, a "smooth + singular" decomposition of the recovered stress. We evaluate the proposed estimator on a number of test cases from linear elastic structural mechanics and obtain efficient error estimations whose effectivities, both at local and global levels, are improved compared to recovery procedures not implementing these features.

González-Estrada, Octavio A.; Natarajan, Sundararajan; Ródenas, Juan José; Nguyen-Xuan, Hung; Bordas, Stéphane P. A.

2013-07-01

403

Optimum data weighting and error calibration for estimation of gravitational parameters

NASA Technical Reports Server (NTRS)

A new approach has been developed for determining consistent satellite-tracking data weights in solutions for satellite-only gravitational models. The method employs subset least-squares solutions of the satellite data contained within the complete solution and requires, by adjusting the data weights, that the differences between the parameters of the subset solutions and the complete solution be in agreement with their error estimates. The GEM-T2 model was recently computed and adjusted through a direct application of this method. The estimated data weights are markedly smaller than the weights implied by the formal uncertainties of the measurements. Orbital arc tests as well as surface gravity comparisons show significant improvements for solutions when more realistic data weighting is achieved.

Lerch, Francis J.

1991-01-01

404

Prediction and standard error estimation for a finite universe total when a stratum is not sampled

In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample, where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented, with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
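A minimal sketch of the prediction idea: the first-occasion totals are known for every stratum, so the growth observed in the sampled strata can be used to predict the total of the unsampled stratum. The strata, totals, and variances below are hypothetical, and the simple ratio predictor stands in for the model-based predictor the paper develops.

```python
# Hypothetical strata (not the truck-inventory data): prior
# (first-occasion) totals are known for every stratum, but on the
# second occasion stratum "C" is not sampled.
prior_totals   = {"A": 100.0, "B": 200.0, "C": 50.0}
current_totals = {"A": 110.0, "B": 230.0}          # design-based estimates
current_vars   = {"A": 16.0,  "B": 25.0}           # their variances

sampled = current_totals.keys()
growth = sum(current_totals.values()) / sum(prior_totals[h] for h in sampled)
pred_C = growth * prior_totals["C"]                # prediction for stratum C

total_hat = sum(current_totals.values()) + pred_C
se_sampled = sum(current_vars.values()) ** 0.5     # SE of the sampled part only
print(f"growth ratio {growth:.4f}, predicted C {pred_C:.2f}, "
      f"total {total_hat:.2f} (sampled-part SE {se_sampled:.2f})")
```

A full standard error would also carry a model-variance term for the predicted stratum, which is one of the components the paper derives in detail.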

Wright, T.

1994-01-01

405

Quantum tomography via compressed sensing: error bounds, sample complexity and efficient estimators

NASA Astrophysics Data System (ADS)

Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. Firstly, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e. the sample complexity of tomography decreases with the rank. Secondly, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. In this paper, we present a new theoretical analysis of compressed tomography, based on the restricted isometry property for low-rank matrices. Using these tools, we obtain near-optimal error bounds for the realistic situation where the data contain noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators—the matrix Dantzig selector and the matrix Lasso—with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. 
Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and describe a method for compressed quantum process tomography that works for processes with small Kraus rank and requires only Pauli eigenstate preparations and Pauli measurements.
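The low-rank intuition can be demonstrated with a small sketch: expand a two-qubit density matrix in the Pauli basis from noisy expectation values, then enforce the rank prior by keeping only the dominant eigenvector. The eigenvector truncation below is a crude stand-in for the matrix Dantzig selector and matrix Lasso estimators studied in the paper, and the noise level is arbitrary.

```python
import numpy as np
from functools import reduce
from itertools import product

rng = np.random.default_rng(1)

# Single-qubit Paulis and their 2-qubit tensor products.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [reduce(np.kron, pair) for pair in product([I, X, Y, Z], repeat=2)]

# True state: the rank-1 Bell state (|00> + |11>)/sqrt(2).
psi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())
d = 4

# Noisy Pauli expectation values (a stand-in for finite statistics).
ev = [np.real(np.trace(rho @ P)) for P in paulis]
ev = [ev[0]] + [e + 0.02 * rng.normal() for e in ev[1:]]  # keep trace exact

# Linear inversion, then enforce the low-rank prior by truncating to
# the dominant eigenvector.
rho_lin = sum(e * P for e, P in zip(ev, paulis)) / d
rho_lin = 0.5 * (rho_lin + rho_lin.conj().T)               # re-hermitize
w, V = np.linalg.eigh(rho_lin)
v = V[:, np.argmax(w)]
rho_hat = np.outer(v, v.conj())

fidelity = np.real(psi.conj() @ rho_hat @ psi)
print(f"fidelity of rank-1 reconstruction: {fidelity:.4f}")
```

Because the true state is rank 1, the noise spreads across eigenvectors that the truncation discards, so the reconstruction fidelity stays high; the compressed sensing estimators in the paper achieve this with rigorous error bounds and from an incomplete set of Pauli measurements.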

Flammia, Steven T.; Gross, David; Liu, Yi-Kai; Eisert, Jens

2012-09-01

406

Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.

We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006), namely an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted. PMID:12805897
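The variance-partition argument can be sketched in a few lines: if half of the observed variability is measurement error, the natural variance driving true recruitment swings is only half as large, and the probability of an apparent 80% decline drops sharply. The lognormal model, independence across years, and the value of sigma_obs below are illustrative assumptions, not the paper's fitted quantities.

```python
import math

# Stylised variance partition.  Assume log-recruitment deviations from
# equilibrium are N(0, sigma^2) and independent across years.
sigma_obs = 0.8                    # total observed variability (made up)
me_fraction = 0.5                  # share attributed to measurement error
sigma_nat = sigma_obs * math.sqrt(1.0 - me_fraction)

def phi(x):                        # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def risk_80pct_decline(sigma, years=15):
    # P(at least one year with recruitment < 20% of equilibrium)
    p_year = phi(math.log(0.2) / sigma)
    return 1.0 - (1.0 - p_year) ** years

risk_ignoring_me = risk_80pct_decline(sigma_obs)
risk_with_me = risk_80pct_decline(sigma_nat)
print(f"risk ignoring measurement error: {risk_ignoring_me:.3f}")
print(f"risk after removing it:          {risk_with_me:.3f}")
```

Even this toy calculation reproduces the qualitative result of the study: attributing measurement error to natural variability inflates the 15-year decline risk by roughly an order of magnitude.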

Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R

2002-06-01

407

Application of parameter estimation to aircraft stability and control: The output-error approach

NASA Technical Reports Server (NTRS)

The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.
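The essence of the output-error method is to simulate the model outputs for a candidate set of derivatives and iteratively adjust the derivatives to minimize the squared difference from the measured outputs. The sketch below applies Gauss-Newton with a finite-difference Jacobian to a toy single-axis roll model; the model, "true" derivatives, and data are synthetic stand-ins for illustration, not the report's aircraft equations or flight data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy single-degree-of-freedom roll model:
#   p_dot = Lp * p + Ld * delta
dt, nsteps = 0.01, 500
Lp_true, Ld_true = -2.0, 5.0
delta = np.sin(0.5 * np.arange(nsteps) * dt)       # control input history

def simulate(theta):
    Lp, Ld = theta
    p = np.zeros(nsteps)
    for k in range(nsteps - 1):                    # forward Euler
        p[k + 1] = p[k] + dt * (Lp * p[k] + Ld * delta[k])
    return p

# "Measured" roll rate: the true response plus sensor noise.
z = simulate((Lp_true, Ld_true)) + 0.001 * rng.normal(size=nsteps)

# Output-error estimation: Gauss-Newton on the sum of squared
# output residuals, with a finite-difference Jacobian.
theta = np.array([-1.0, 3.0])                      # initial guess
for _ in range(15):
    r = z - simulate(theta)
    J = np.empty((nsteps, 2))
    for j in range(2):
        dth = np.zeros(2)
        dth[j] = 1e-6
        J[:, j] = (simulate(theta + dth) - simulate(theta)) / 1e-6
    theta = theta + np.linalg.solve(J.T @ J, J.T @ r)

print(f"estimated Lp = {theta[0]:.3f}, Ld = {theta[1]:.3f}")
```

The practical issues the document emphasizes (maneuver design, instrumentation, mass data, preflight predictions) all feed into this loop: they determine how informative the input `delta` is, how clean the measurement `z` is, and how trustworthy the model structure being fitted is.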

Maine, Richard E.; Iliff, Kenneth W.

1986-01-01

408

Use of Expansion Factors to Estimate the Burden of Dengue in Southeast Asia: A Systematic Analysis

Background Dengue virus infection is the most common arthropod-borne disease of humans, and its geographical range and infection rates are increasing. Health policy decisions require information about the disease burden, but surveillance systems usually underreport the total number of cases. These may be estimated by multiplying reported cases by an expansion factor (EF). Methods and Findings As a key step to estimate the economic and disease burden of dengue in Southeast Asia (SEA), we projected dengue cases from 2001 through 2010 using EFs. We conducted a systematic literature review (1995–2011) and identified 11 published articles reporting original, empirically derived EFs or the necessary data, plus 11 additional relevant studies. To estimate EFs for total cases in countries where no empirical studies were available, we extrapolated data based on the statistically significant inverse relationship between an index of a country's health system quality and its observed reporting rate. We compiled an average of 386,000 dengue episodes reported annually to surveillance systems in the region, and projected about 2.92 million total dengue episodes. We conducted a probabilistic sensitivity analysis, simultaneously varying the most important parameters in 20,000 Monte Carlo simulations, and derived a 95% certainty level of 2.73–3.38 million dengue episodes. We estimated an overall EF in SEA of 7.6 (95% certainty level: 7.0–8.8) dengue cases for every case reported, with EFs ranging from 3.8 in Malaysia to 19.0 in East Timor. Conclusion Studies that make no adjustment for underreporting would seriously understate the burden and cost of dengue in SEA and elsewhere. As the sites of the empirical studies we identified were not randomly chosen, the exact extent of underreporting remains uncertain.
Nevertheless, the results reported here, based on a systematic analysis of the available literature, show general consistency and provide a reasonable empirical basis to adjust for underreporting. PMID:23437407
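The arithmetic of the adjustment is simple: projected burden = reported cases x EF, with uncertainty propagated by sampling the EF. The sketch below uses the study's headline figures (386,000 reported episodes, overall EF of 7.6) but a made-up lognormal spread for the EF; the actual analysis varied many parameters jointly in its 20,000 Monte Carlo draws.

```python
import numpy as np

rng = np.random.default_rng(3)

# Point estimate: reported cases times the expansion factor.
reported_annual = 386_000          # average reported cases (from the study)
ef_mean = 7.6                      # overall SEA expansion factor (from the study)
point_estimate = reported_annual * ef_mean
print(f"projected annual episodes: {point_estimate:,.0f}")

# Toy probabilistic sensitivity analysis: draw the EF from a lognormal
# whose spread is chosen here only for illustration.
draws = rng.lognormal(mean=np.log(ef_mean), sigma=0.06, size=20_000)
lo, hi = np.percentile(reported_annual * draws, [2.5, 97.5])
print(f"95% interval: {lo:,.0f} - {hi:,.0f}")
```

The point estimate lands near the study's 2.92 million projected episodes; the width of the interval depends entirely on the assumed EF distribution, which is why the paper's empirically grounded EFs and sensitivity analysis matter.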

Undurraga, Eduardo A.; Halasa, Yara A.; Shepard, Donald S.

2013-01-01