These are representative sample records from Science.gov related to your search topic.
For comprehensive and current results, perform a real-time search at Science.gov.
1

Effects of systematic error, estimates and uncertainties in chemical mass balance apportionments: Quail Roost II revisited  

NASA Astrophysics Data System (ADS)

Synthetic data set II from the Quail Roost II exercise was used to derive a comprehensive method of estimating uncertainties for chemical mass balance (CMB) apportionments. Collinearity-diagnostic procedures were applied to CMB apportionments of data set II to identify seriously collinear source profiles and to evaluate the effects of the degree of collinearity on source-strength estimates and their uncertainties. Fractional uncertainties of CMB estimates were up to three times higher for collinear source profiles than for independent ones. A theoretical analysis of CMB results for synthetic data set II led to the following general conclusions about CMB methodology. Uncertainties for average estimated source strengths will be unrealistically low unless sources whose estimates are constrained to zero are included when calculating uncertainties. Covariance in source-strength estimates is caused by collinearity and by systematic errors in source specification and composition. Propagated uncertainties may be underestimated unless covariances as well as variances of estimates are included. Apportioning the average aerosol will account for systematic errors only when the correct model is known, when measurement uncertainties in ambient and source-profile data are realistic, and when the source profiles are not collinear.

Lowenthal, Douglas H.; Hanumara, R. Choudary; Rahn, Kenneth A.; Currie, Lloyd A.
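
Where the abstract notes that propagated uncertainties are underestimated unless covariances are included, a minimal numpy sketch (all numbers illustrative, not the study's data) shows the effect on the uncertainty of a summed apportionment:

```python
import numpy as np

# Hypothetical covariance matrix for three estimated source strengths
# (e.g., from a weighted least-squares CMB fit); values are illustrative.
cov = np.array([[0.25, 0.10, 0.02],
                [0.10, 0.36, 0.05],
                [0.02, 0.05, 0.16]])
s = np.array([4.0, 2.5, 1.2])  # estimated source strengths (ug/m^3)

total = s.sum()
var_diag = np.trace(cov)  # variances only: underestimates if estimates covary
var_full = cov.sum()      # includes off-diagonal covariances

print(f"total = {total:.2f} ug/m^3")
print(f"sigma, variances only   = {np.sqrt(var_diag):.3f}")
print(f"sigma, with covariances = {np.sqrt(var_full):.3f}")
```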

2

GPS meteorology: Reducing systematic errors in geodetic estimates for zenith delay  

Microsoft Academic Search

Differences between long term precipitable water (PW) time series derived from radiosondes, microwave water vapor radiometers, and GPS stations reveal offsets that are often as much as 1-2 mm PW. All three techniques are thought to suffer from systematic errors of order 1 mm PW. Standard GPS processing algorithms are known to be sensitive to the choice of elevation cutoff

Peng Fang; Michael Bevis; Yehuda Bock; Seth Gutman; Dan Wolfe

1998-01-01

3

Systematic errors in temperature estimates from MODIS data covering the western Palearctic and their impact on a parasite development model.  

PubMed

The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. Temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area, as well as with technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated error propagation of temperature uncertainties in parasite habitat suitability models by comparing the outcomes of published models. Error estimates reached 36% of the respective annual measurements, depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles. PMID:24258878

Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin

2013-11-01
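
A minimal sketch of the satellite-versus-ground comparison this record describes; the paired temperature arrays below are invented stand-ins for station records and MODIS retrievals:

```python
import numpy as np

# Invented paired records: ground-station vs MODIS-derived maxima (deg C).
ground_tmax = np.array([18.2, 25.1, 30.4, 12.7, 21.9])
modis_tmax = np.array([21.0, 28.3, 35.9, 16.1, 24.2])

bias = modis_tmax - ground_tmax              # per-scene error
print(f"mean bias = {bias.mean():+.1f} C")   # positive => overestimation
print(f"RMSE      = {np.sqrt((bias ** 2).mean()):.1f} C")
```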

4

Error probability bounds for systematic convolutional codes  

Microsoft Academic Search

Upper and lower bounds on error probability for systematic convolutional codes are obtained. These bounds show that the probability of error for a systematic convolutional code is larger than the probability of error for a nonsystematic convolutional code of the same encoder-shift-register length. Two different interpretations of these results are given.

E. Bucher; J. Heller

1970-01-01

5

Estimating Bias Error Distributions  

NASA Technical Reports Server (NTRS)

This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

Liu, Tian-Shu; Finley, Tom D.

2001-01-01
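
A minimal sketch of the underlying idea: model the device's bias error as a polynomial over the measuring domain and pin it down with a minimal number of standard values. The two standards and the linear bias model below are assumptions for illustration, not the authors' algorithm:

```python
import numpy as np

# Known standard inputs and the (hypothetical) device readings there.
x_std = np.array([0.0, 10.0])
m_std = np.array([0.15, 10.42])

# Two standards determine a linear bias model b(x) = c0 + c1 * x,
# where a reading is m = x + b(x).
A = np.vstack([np.ones_like(x_std), x_std]).T
c0, c1 = np.linalg.solve(A, m_std - x_std)

def corrected(m):
    # Invert m = x + c0 + c1 * x under the assumed linear bias model.
    return (m - c0) / (1.0 + c1)

print(f"bias coefficients: c0={c0:.3f}, c1={c1:.4f}")
print(f"corrected reading: {corrected(5.30):.3f}")
```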

6

Systematic Errors in an Air Track Experiment.  

ERIC Educational Resources Information Center

Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

Ramirez, Santos A.; Ham, Joe S.

1990-01-01

7

Systematic Errors in Current Quantum State Tomography Tools  

NASA Astrophysics Data System (ADS)

Common tools for obtaining physical density matrices in experimental quantum state tomography are shown here to cause systematic errors. For example, using maximum likelihood or least squares optimization to obtain physical estimates for the quantum state, we observe a systematic underestimation of the fidelity and an overestimation of entanglement. Such strongly biased estimates can be avoided using linear evaluation of the data or by linearizing measurement operators, yielding reliable and computationally simple error bounds.

Schwemmer, Christian; Knips, Lukas; Richart, Daniel; Weinfurter, Harald; Moroder, Tobias; Kleinmann, Matthias; Gühne, Otfried

2015-02-01

8

Measuring Systematic Error with Curve Fits  

ERIC Educational Resources Information Center

Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

Rupright, Mark E.

2011-01-01

9

Errors in estimates of peritoneal fluid volume.  

PubMed

Inherent limitations in the suitability of drainage volumes for monitoring intraperitoneal fluid volume have resulted in the frequent use of indicator dilution techniques, but little attention has been given to confirming the adequacy of the estimates that volume markers provide. In a series of experimental exchanges in rats, volume estimates were compared based on the dilution of blue dextran and hemoglobin with direct collections of surgically exposed intraperitoneal fluid. Significant systematic and random errors in the indicator dilution volume estimates were observed. The systematic errors appeared to be due to the rapid removal of a fixed amount of marker from peritoneal fluid, while the random errors were caused by the rapid appearance of a variable amount of endogenous chromogen. The behavior of the markers observed in this study was not consistent with the assumptions commonly used to analyze volume transport in peritoneal dialysis. PMID:1697746

Daniels, F H; Nedev, N D; Lowe, L C; Leonard, E F; Cortell, S

1990-08-01
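
A small worked example of the systematic error a fixed marker loss introduces into an indicator-dilution volume estimate, in the spirit of the record above; all numbers are invented:

```python
# All numbers invented. A fixed amount of marker leaves the peritoneal fluid
# before sampling, so the naive dilution estimate is biased high.
dose_mg = 10.0          # marker injected
lost_mg = 1.5           # fixed amount rapidly removed from the fluid
true_volume_ml = 25.0

concentration = (dose_mg - lost_mg) / true_volume_ml  # measured after mixing
v_naive = dose_mg / concentration                     # assumes no marker loss
print(f"naive estimate: {v_naive:.1f} mL vs true {true_volume_ml:.1f} mL")
```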

10

Antenna pointing systematic error model derivations  

NASA Technical Reports Server (NTRS)

The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.

Guiar, C. N.; Lansing, F. L.; Riggs, R.

1987-01-01
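
A minimal sketch of an az-el systematic pointing-correction model using the classical terms named in the abstract (encoder offsets, axis tilt, nonorthogonality, collimation, gravitational flexure). The functional forms are the standard textbook ones and the coefficients are hypothetical, not the DSN model:

```python
import numpy as np

def pointing_correction(az_deg, el_deg, c):
    """Classical az-el systematic pointing terms; coefficients in degrees."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    d_az = (c["IA"]                              # azimuth encoder offset
            + c["AN"] * np.sin(az) * np.tan(el)  # axis tilt toward north
            - c["AW"] * np.cos(az) * np.tan(el)  # axis tilt toward west
            + c["NPAE"] * np.tan(el)             # axis nonorthogonality
            - c["CA"] / np.cos(el))              # RF collimation error
    d_el = (c["IE"]                              # elevation encoder offset
            + c["AN"] * np.cos(az)
            + c["AW"] * np.sin(az)
            + c["TF"] * np.cos(el))              # gravitational flexure
    return d_az, d_el

coeffs = dict(IA=0.010, IE=-0.004, NPAE=0.002, CA=0.003,
              AN=0.001, AW=-0.002, TF=0.006)    # hypothetical, degrees
print(pointing_correction(135.0, 45.0, coeffs))
```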

11

Systematic errors in long baseline oscillation experiments  

SciTech Connect

This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

Harris, Deborah A.; /Fermilab

2006-02-01

12

Mars gravitational field estimation error  

NASA Technical Reports Server (NTRS)

The error covariance matrices associated with a weighted least-squares differential correction process have been analyzed for accuracy in determining the gravitational coefficients through degree and order five in the Mars gravitational potential function. The results are presented in terms of standard deviations for the assumed estimated parameters. The covariance matrices were calculated by assuming Doppler tracking data from a Mars orbiter, a priori statistics for the estimated parameters, and model error uncertainties for tracking-station locations, the Mars ephemeris, the astronomical unit, the Mars gravitational constant (G sub M), and the gravitational coefficients of degrees six and seven. Model errors were treated by using the concept of consider parameters.

Compton, H. R.; Daniels, E. F.

1972-01-01

13

Use of a mass-thickness marker to estimate systematic errors and statistical noise in the detection of phosphorus by electron spectroscopic imaging.  

PubMed

The element signal obtained from electron-energy-filtered micrographs depends on the systematic error in calculating the background and on the noise in the background-corrected image. Both systematic error and statistical fluctuation of the background can be assessed experimentally with a specimen that combines the element-containing feature with a mass-thickness marker. The approach is described for the mapping of phosphorus in turnip yellow mosaic viruses prepared on a supporting carbon film of variable thickness. The thickness modulations are produced by the additional deposition of heat-evaporated carbon through a second grid used as a mask. The three-window power-law method and the two-window difference method are compared. With the three-window power-law method, the mass-thickness modulations of the marker are still visible in the map, indicating a systematic error for the calculated background. In addition, the intensity profile over the area of the thick carbon film is broader than in the map corrected by the two-window method, indicating a higher level of noise. With the two-window difference method, mass-thickness contrast was practically eliminated due to an improved protocol that uses the mass-thickness marker to calculate the scaling factor: instead of scaling the grey-level of a single background feature, the pre-edge image is scaled to the contrast of the marker area in the image acquired at the element-specific energy loss. PMID:9519469

Richter, K; Haking, A; Troester, H; Spiess, E; Spring, H; Probst, W; Schultz, P; Witz, J; Trendelenburg, M

1997-10-01
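
A minimal sketch of the three-window power-law background subtraction discussed above, written for scalars rather than the per-pixel images used in practice; energies and intensities are illustrative:

```python
import numpy as np

# Two pre-edge windows at E1, E2 fit the background B(E) = A * E**(-r);
# the fit is extrapolated to the element edge and subtracted.
E1, E2, E_edge = 120.0, 150.0, 170.0  # energy losses (eV), illustrative
I1, I2 = 900.0, 610.0                 # pre-edge intensities
I_edge = 520.0                        # intensity at the element edge

r = np.log(I1 / I2) / np.log(E2 / E1)  # power-law exponent
A = I1 * E1 ** r
background = A * E_edge ** (-r)
print(f"r = {r:.2f}, net element signal = {I_edge - background:.1f}")
```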

14

Significance in Gamma Ray Astronomy with Systematic Errors  

E-print Network

The influence of systematic errors on the calculation of the statistical significance of a $\gamma$-ray signal with the frequently invoked Li and Ma method is investigated. A simple criterion is derived to decide whether the Li and Ma method can be applied in the presence of systematic errors. An alternative method is discussed for cases where systematic errors are too large for the application of the original Li and Ma method. This alternative method reduces to the Li and Ma method when systematic errors are negligible. Finally, it is shown that the consideration of systematic errors will be important in many analyses of data from the planned Cherenkov Telescope Array.

Spengler, Gerrit

2015-01-01
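
For reference, a sketch of the Li and Ma (1983) significance (their Eq. 17) that the record builds on; the counts and the on/off exposure ratio alpha are illustrative:

```python
import numpy as np

def li_ma_significance(n_on, n_off, alpha):
    # Li & Ma (1983), Eq. 17; alpha is the on/off exposure ratio.
    term_on = n_on * np.log((1 + alpha) / alpha * n_on / (n_on + n_off))
    term_off = n_off * np.log((1 + alpha) * n_off / (n_on + n_off))
    return np.sqrt(2.0 * (term_on + term_off))

print(f"{li_ma_significance(n_on=120, n_off=1000, alpha=0.1):.2f} sigma")
```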

15

Analysis of Systematic Errors in the Computation of Exposure Distributions from Brachytherapy Sources  

Microsoft Academic Search

Systematic errors in the basic physical constants and algorithms used in calculating the exposure from brachytherapy sources have been investigated. The specific gamma ray constant for radium filtered by 0.5 mm of platinum, 10% iridium, is the fundamental constant in this field. It has been measured a number of times over the years, with estimated bounds for the systematic error

Martin Rozenfeld

1980-01-01

16

More on Systematic Error in a Boyle's Law Experiment  

ERIC Educational Resources Information Center

A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

McCall, Richard P.

2012-01-01

17

More on Systematic Error in a Boyle's Law Experiment  

NASA Astrophysics Data System (ADS)

A recent article in The Physics Teacher describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

McCall, Richard P.

2012-01-01

18

Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.  

PubMed

Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory. PMID:23967914

Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith

2013-09-01

19

Determination of acceptable level of observational systematic errors in construction of probable motion regions of small bodies  

NASA Astrophysics Data System (ADS)

The influence of observational systematic errors and motion-model errors on the precision with which probable motion regions of small bodies can be constructed is investigated. An estimate of the maximum allowable level of the systematic error component for which the LS-hyperellipsoids always contain the object of interest is obtained.

Chernitsov, A. M.; Tamarov, V. A.

2003-12-01

20

Error estimation for boundary element method  

Microsoft Academic Search

In this paper, six error indicators obtained from dual boundary integral equations are used for local estimation, which is an essential ingredient for all adaptive mesh schemes in BEM. Computational experiments are carried out for the two-dimensional Laplace equation. The curves of all these six error estimators are in good agreement with the shape of the error curve. The results

M. T. Liang; J. T. Chen; S. S. Yang

1999-01-01

21

Improved Systematic Pointing Error Model for the DSN Antennas  

NASA Technical Reports Server (NTRS)

New pointing models have been developed for large reflector antennas built on an elevation-over-azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna subnet to correct systematic pointing errors, achieving significant improvement in performance at Ka-band (32 GHz) and X-band (8.4 GHz). The new models improve pointing relative to the traditional models by a factor of two to three, which translates to approximately a 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, this innovation provides an enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that cause antenna pointing errors. The philosophy of the traditional model was that every mathematical term in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical-harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may differ, whereas in the traditional model some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can use the previously used model as an a priori estimate for the development of the updated models.

Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

2011-01-01

22

A systematic approach to SER estimation and solutions  

Microsoft Academic Search

This paper describes a method for estimating Soft Error Rate (SER) and a systematic approach to identifying SER solutions. Having a good SER estimate is the first step in identifying if a problem exists and what measures are necessary to solve the problem. In this paper, a high performance processor is used as the base framework for discussion since it

H. T. Nguyen; Y. Yagil

2003-01-01

23

Adjoint Error Estimation for Linear Advection  

SciTech Connect

An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.

Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

2011-03-30

24

Event Generator Validation and Systematic Error Evaluation for Oscillation Experiments  

NASA Astrophysics Data System (ADS)

In this document I will describe the validation and tuning of the physics models in the GENIE neutrino event generator and briefly discuss how oscillation experiments make use of this information in the evaluation of model-related systematic errors.

Gallagher, H.

2009-09-01

25

Systematic reviews, systematic error and the acquisition of clinical knowledge  

Microsoft Academic Search

BACKGROUND: Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types

Steffen Mickenautsch

2010-01-01

26

Systematic Parameter Errors in Inspiraling Neutron Star Binaries  

NASA Astrophysics Data System (ADS)

The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled.

Favata, Marc

2014-03-01

27

Systematic parameter errors in inspiraling neutron star binaries.  

PubMed

The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled. PMID:24679276

Favata, Marc

2014-03-14

28

Errors in quantum tomography: diagnosing systematic versus statistical errors  

NASA Astrophysics Data System (ADS)

A prime goal of quantum tomography is to provide quantitatively rigorous characterization of quantum systems, be they states, processes or measurements, particularly for the purposes of trouble-shooting and benchmarking experiments in quantum information science. A range of techniques exist to enable the calculation of errors, such as Monte-Carlo simulations, but their quantitative value is arguably fundamentally flawed without an equally rigorous way of authenticating the quality of a reconstruction to ensure it provides a reasonable representation of the data, given the known noise sources. A key motivation for developing such a tool is to enable experimentalists to rigorously diagnose the presence of technical noise in their tomographic data. In this work, I explore the performance of the chi-squared goodness-of-fit test statistic as a measure of reconstruction quality. I show that its behaviour deviates noticeably from expectations for states lying near the boundaries of physical state space, severely undermining its usefulness as a quantitative tool precisely in the region which is of most interest in quantum information processing tasks. I suggest a simple, heuristic approach to compensate for these effects and present numerical simulations showing that this approach provides substantially improved performance.

Langford, Nathan K.

2013-03-01

29

Estimating Bias Errors in the GPCP Monthly Precipitation Product  

NASA Astrophysics Data System (ADS)

Climatological data records are important to understand global and regional variations and trends. The Global Precipitation Climatology Project (GPCP) record of monthly, globally complete precipitation analyses stretches back to 1979 and is based on a merger of both satellite and surface gauge records. It is a heavily used data record, cited in over 1500 journal papers. It is important that these types of data records also include information about the uncertainty of the presented estimates. Indeed the GPCP monthly analysis already includes estimates of the random error, due to algorithm and sampling random error, associated with each gridded, monthly value (Huffman, 1997). It is also important to include estimates of bias error, i.e., the uncertainty of the monthly value (or climatology) in terms of its absolute value. Results are presented based on a procedure (Adler et al., 2012) to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources and merged products. The GPCP monthly product is used as a base precipitation estimate, with other input products included when they are within ±50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation (σ) of the included products is then taken to be the estimated systematic, or bias, error. The results allow us to first examine monthly climatologies and the annual climatology producing maps of estimated bias errors, zonal mean errors and estimated errors over large areas, such as ocean and land for both the tropics and for the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where we should be more or less confident of our mean precipitation estimates. In the tropics, relative bias error estimates (σ/μ, where μ is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, compared to 10-15% in the western Pacific part of the ITCZ. Examining latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold season errors at high latitudes due to snow. Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, considered to be an upper bound due to lack of sign-of-the-error cancelling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of our current state of knowledge of the planet's mean precipitation. The bias uncertainty procedure is being extended so that it can be applied to individual months (e.g., January 1998) on the GPCP grid (2.5 degree latitude-longitude). Validation of the bias estimates and the monthly random error estimates will also be presented.

Sapiano, M. R.; Adler, R. F.; Gu, G.; Huffman, G. J.

2012-12-01
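
A minimal sketch of the bias-error procedure described above: keep alternative products within ±50% of the GPCP base estimate and take their standard deviation as the bias error. The product values are invented zonal means:

```python
import numpy as np

gpcp = 3.0                                      # base estimate (mm/day)
products = np.array([2.6, 3.4, 3.1, 5.2, 2.9])  # invented alternatives

kept = products[np.abs(products - gpcp) <= 0.5 * gpcp]  # within +/-50%
bias_error = kept.std(ddof=1)  # spread of surviving products = bias error
print(f"kept {kept.size} products, bias error = {bias_error:.2f} mm/day "
      f"({100 * bias_error / gpcp:.0f}% relative)")
```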

30

Treatment of systematic errors in land data assimilation systems  

NASA Astrophysics Data System (ADS)

Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.

Crow, W. T.; Yilmaz, M.

2012-12-01
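
A minimal sketch of the commonly applied "variance matching" rescaling mentioned above, using synthetic stand-ins for observed and modelled time series:

```python
import numpy as np

rng = np.random.default_rng(0)
model = 0.25 + 0.05 * rng.standard_normal(1000)  # e.g., model soil moisture
obs = 0.40 + 0.09 * rng.standard_normal(1000)    # biased mean and variance

# Linearly rescale the observations to the model's mean and variance.
obs_rescaled = (obs - obs.mean()) * (model.std() / obs.std()) + model.mean()
print(f"rescaled mean {obs_rescaled.mean():.3f}, std {obs_rescaled.std():.3f}")
```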

31

Jason-2 systematic error analysis in the GPS derived orbits  

NASA Astrophysics Data System (ADS)

Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinates adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding to the linear ITRF model the geophysical high-frequency variations. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors, which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose into a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits has been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS versus our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only Jason-2 orbit accuracy is assessed using a number of tests including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking, and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced-dynamic orbits. Reduced-dynamic versus dynamic orbit differences are used to characterize the remaining force model error and TRF instability. First, we quantify the effect of a North/South displacement of the tracking reference points for each of the three techniques. We then compare these results to the study of Morel and Willis (2005) and Cerri et al. (2010). We extend the analysis to the most recent Jason-2 cycles. We evaluate the GPS versus SLR and DORIS orbits produced using GEODYN.

Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

2011-12-01

32

Systematic model forecast error in Rossby wave structure  

NASA Astrophysics Data System (ADS)

Diabatic processes can alter Rossby wave structure; consequently, errors arising from model processes propagate downstream. However, the chaotic spread of forecasts from initial condition uncertainty renders it difficult to trace back from root-mean-square forecast errors to model errors. Here, diagnostics unaffected by phase errors are used, enabling investigation of systematic errors in Rossby waves in winter-season forecasts from three operational centers. Tropopause sharpness adjacent to ridges decreases with forecast lead time. It depends strongly on model resolution, even though models are examined on a common grid. Rossby wave amplitude is reduced within the first 5 days of lead time, consistent with underrepresentation of diabatic modification and transport of air from the lower troposphere into upper-tropospheric ridges, and with too-weak humidity gradients across the tropopause. However, amplitude also decreases when resolution is decreased. Further work is necessary to isolate the contribution from errors in the representation of diabatic processes.

Gray, S. L.; Dunning, C. M.; Methven, J.; Masato, G.; Chagnon, J. M.

2014-04-01

33

Neutrino spectrum at the far detector systematic errors  

SciTech Connect

Neutrino oscillation experiments often employ two identical detectors to minimize errors due to an inadequately known neutrino beam. We examine various systematic effects related to the prediction of the neutrino spectrum at the 'far' detector on the basis of the spectrum observed at the 'near' detector. We propose a novel method for deriving the far-detector spectrum. This method is less sensitive to the details of the understanding of the neutrino beam line and the hadron production spectra than the commonly used 'double ratio' method, thus allowing the systematic errors to be reduced.

Szleper, M.; Para, A.

2001-10-01

34

Systematic errors in power measurements made with a dual six-port ANA  

NASA Astrophysics Data System (ADS)

The systematic error in measuring power with a dual 6-port Automatic Network Analyzer was determined. Equations for estimating systematic errors due to imperfections in the test port connector, imperfections in the connector on the power standard, and imperfections in the impedance standards used to calibrate the 6-port for measuring reflection coefficient were developed. These are the largest sources of error associated with the 6-port. For 7 mm connectors, all systematic errors which are associated with the 6-port add up to a worst-case uncertainty of ±0.00084 in measuring the ratio of the effective efficiency of a bolometric power sensor relative to that of a standard power sensor.

Hoer, Cletus A.

1989-07-01

35

MAXIMUM LIKELIHOOD ANALYSIS OF SYSTEMATIC ERRORS IN INTERFEROMETRIC OBSERVATIONS OF THE COSMIC MICROWAVE BACKGROUND  

SciTech Connect

We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that does not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.°7 root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

Zhang Le; Timbie, Peter [Department of Physics, University of Wisconsin, Madison, WI 53706 (United States); Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S. [Department of Physics, Brown University, 182 Hope Street, Providence, RI 02912 (United States); Sutter, Paul M.; Wandelt, Benjamin D. [Department of Physics, 1110 W Green Street, University of Illinois at Urbana-Champaign, Urbana, IL 61801 (United States); Bunn, Emory F., E-mail: lzhang263@wisc.edu [Physics Department, University of Richmond, Richmond, VA 23173 (United States)

2013-06-01

36

Maximum Likelihood Analysis of Systematic Errors in Interferometric Observations of the Cosmic Microwave Background  

NASA Astrophysics Data System (ADS)

We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that does not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.°7 root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

Zhang, Le; Karakci, Ata; Sutter, Paul M.; Bunn, Emory F.; Korotkov, Andrei; Timbie, Peter; Tucker, Gregory S.; Wandelt, Benjamin D.

2013-06-01

37

Bayes Error Rate Estimation Using Classifier Ensembles  

NASA Technical Reports Server (NTRS)

The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error generally yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that looks only at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.

Tumer, Kagan; Ghosh, Joydeep

2003-01-01
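
A minimal sketch of the first method described above, under the assumption that each ensemble member supplies posterior class-probability estimates; the posteriors here are synthetic draws rather than outputs of trained classifiers:

```python
import numpy as np

rng = np.random.default_rng(1)
n_members, n_samples, n_classes = 5, 500, 4
# Synthetic per-member posterior estimates (Dirichlet draws stand in for
# the outputs of trained classifiers).
posteriors = rng.dirichlet(np.ones(n_classes) * 3, size=(n_members, n_samples))

avg = posteriors.mean(axis=0)                     # ensemble-averaged posteriors
bayes_error_est = (1.0 - avg.max(axis=1)).mean()  # P(top class is wrong)
print(f"estimated Bayes error: {bayes_error_est:.3f}")
```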

38

Conditional Density Estimation in Measurement Error Problems.  

PubMed

This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

Wang, Xiao-Feng; Ye, Deping

2015-01-01

39

Equilibrated error estimators for discontinuous Galerkin methods  

Microsoft Academic Search

We consider some diffusion problems in domains of R^d, d = 2 or 3, approximated by a discontinuous Galerkin method with polynomials of any degree. We propose a new a posteriori error estimator based on H(div)-conforming elements. It is shown that this estimator gives rise to an upper bound where the constant is one up to higher order terms. The

Sarah Cochez-Dhondt; Serge Nicaise

2008-01-01

40

The Effect of Systematic Error in Forced Oscillation Testing  

NASA Technical Reports Server (NTRS)

One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

2012-01-01

41

Efficient error estimation in quantum key distribution  

NASA Astrophysics Data System (ADS)

In a quantum key distribution (QKD) system, the error rate needs to be estimated for determining the joint probability distribution between legitimate parties, and for improving the performance of key reconciliation. We propose an efficient error estimation scheme for QKD, which is called the parity comparison method (PCM). In the proposed method, the parity of a group of sifted keys is practically analysed to estimate the quantum bit error rate instead of using the traditional key sampling. From the simulation results, the proposed method evidently improves the accuracy and decreases revealed information in most realistic application situations.

Li, Mo; Treeviriyanupab, Patcharapong; Zhang, Chun-Mei; Yin, Zhen-Qiang; Chen, Wei; Han, Zheng-Fu

2015-01-01
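
A sketch of the parity-comparison idea: a block's parity disagrees exactly when the block contains an odd number of errors, so the mismatch rate can be inverted for the bit error rate. Block length and rates are illustrative, and this is a reading of the general idea, not the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(7)
true_qber, n, n_blocks = 0.03, 8, 4000
errors = rng.random((n_blocks, n)) < true_qber   # simulated key errors
mismatch = (errors.sum(axis=1) % 2 == 1).mean()  # parity disagreement rate

# P(mismatch) = (1 - (1 - 2e)**n) / 2, inverted for the error rate e:
qber_est = 0.5 * (1.0 - (1.0 - 2.0 * mismatch) ** (1.0 / n))
print(f"estimated QBER: {qber_est:.4f} (true {true_qber})")
```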

42

Spatial reasoning in the treatment of systematic sensor errors  

SciTech Connect

In processing ultrasonic and visual sensor data acquired by mobile robots systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing vertical edge segments are extracted using a Canny-like algorithm, and are labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.

Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.

1988-01-01

43

Reducing systematic errors in measurements made by a SQUID magnetometer  

NASA Astrophysics Data System (ADS)

A simple method is described which reduces those systematic errors of a superconducting quantum interference device (SQUID) magnetometer that arise from possible radial displacements of the sample in the second-order gradiometer superconducting pickup coil. By rotating the sample rod (and hence the sample) around its axis into a position where the best fit is obtained to the output voltage of the SQUID as the sample is moved through the pickup coil, the accuracy of measuring magnetic moments can be increased significantly. In the cases of an examined Co1.9Fe1.1Si Heusler alloy, pure iron and nickel samples, the accuracy could be increased over the value given in the specification of the device. The suggested method is only meaningful if the measurement uncertainty is dominated by systematic errors - radial displacement in particular - and not by instrumental or environmental noise.

Kiss, L. F.; Kaptás, D.; Balogh, J.

2014-11-01

44

Geodesy by radio interferometry: Effects of atmospheric modeling errors on estimates of baseline length  

Microsoft Academic Search

Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for ~8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere

J. L. Davis; T. A. Herring; I. I. Shapiro; A. E. E. Rogers; G. Elgered

1985-01-01

45

Estimation of Error Rates in Discriminant Analysis  

Microsoft Academic Search

Several methods of estimating error rates in Discriminant Analysis are evaluated by sampling methods. Multivariate normal samples are generated on a computer which have various true probabilities of misclassification for different combinations of sample sizes and different numbers of parameters. The two methods in most common use are found to be significantly poorer than some new methods that are proposed.

Peter A. Lachenbruch; M. Ray Mickey

1968-01-01
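
A minimal sketch of a leave-one-out error-rate estimate, the approach associated with this line of work, using scikit-learn on synthetic two-class normal data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 1.0, (50, 3)),   # class 0
               rng.normal(1.0, 1.0, (50, 3))])  # class 1, shifted mean
y = np.repeat([0, 1], 50)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(f"leave-one-out error rate: {1.0 - acc:.3f}")
```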

46

Patient safety strategies targeted at diagnostic errors: a systematic review.  

PubMed

Missed, delayed, or incorrect diagnosis can lead to inappropriate patient care, poor patient outcomes, and increased cost. This systematic review analyzed evaluations of interventions to prevent diagnostic errors. Searches used MEDLINE (1966 to October 2012), the Agency for Healthcare Research and Quality's Patient Safety Network, bibliographies, and prior systematic reviews. Studies that evaluated any intervention to decrease diagnostic errors in any clinical setting and with any study design were eligible, provided that they addressed a patient-related outcome. Two independent reviewers extracted study data and rated study quality. There were 109 studies that addressed 1 or more intervention categories: personnel changes (n = 6), educational interventions (n = 11), technique (n = 23), structured process changes (n = 27), technology-based systems interventions (n = 32), and review methods (n = 38). Of 14 randomized trials, which were rated as having mostly low to moderate risk of bias, 11 reported interventions that reduced diagnostic errors. Evidence seemed strongest for technology-based systems (for example, text message alerting) and specific techniques (for example, testing equipment adaptations). Studies provided no information on harms, cost, or contextual application of interventions. Overall, the review showed a growing field of diagnostic error research and categorized and identified promising interventions that warrant evaluation in large studies across diverse settings. PMID:23460094

McDonald, Kathryn M; Matesic, Brian; Contopoulos-Ioannidis, Despina G; Lonhart, Julia; Schmidt, Eric; Pineda, Noelle; Ioannidis, John P A

2013-03-01

47

ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES  

SciTech Connect

In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, with which SFH measurements rely on evolved stars.

Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ 85734 (United States)

2012-05-20

48

Investigating systematic errors in gravity field data using wavelet decomposition  

NASA Astrophysics Data System (ADS)

A wavelet decomposition method is proposed to investigate systematic errors in airborne and terrestrial gravity data over the southern United States. Recently, numerous studies have focused on applying a wavelet approach to regional gravity field modeling in order to enable multi-resolution capabilities and overcome inherent disadvantages of existing methods (Fourier and spherical harmonics). The localization of wavelets in frequency and space is beneficial to avoid smearing of signals and to connect the spatial and frequency domains. In this study, wavelet decomposition methods are used along with the third-generation GOCE models to detect and correct long-wavelength systematic errors existing in airborne and terrestrial free-air anomalies in the southern and south-eastern US, from New Mexico to the east coast (27N to 36N and 107W to 77W). The terrestrial dataset shows a small bias of 1 mGal, while the airborne dataset has a bias of approximately 5.5 mGal. The results show that 80% of the terrestrial bias is located at long wavelengths (>300 km), while the airborne dataset has a strong bias at long wavelengths (2.7 mGal at >300 km) and a smaller bias at intermediate wavelengths (0.5 mGal at 150-300 km and 1 mGal at 80-150 km). Based on the results, a wavelet decomposition method for correcting long-wavelength systematic errors in airborne and terrestrial datasets looks promising, since medium- and short-wavelength signals are kept intact. The corrected airborne and terrestrial datasets can then be fused with satellite datasets using wavelet, Fourier or spherical harmonics methods to derive combined gravity models.

Bolkas, D.; Fotopoulos, G.; Braun, A.

2013-05-01
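
A minimal sketch of separating a long-wavelength bias from a gravity profile with a multilevel wavelet decomposition, assuming the PyWavelets package; the profile and the bias are synthetic:

```python
import numpy as np
import pywt  # PyWavelets

x = np.linspace(0.0, 1000.0, 1024)              # along-track distance (km)
signal = 5.0 * np.sin(2 * np.pi * x / 40.0)     # short-wavelength anomalies
bias = 2.0 + 1.5 * np.sin(2 * np.pi * x / 800)  # long-wavelength error (mGal)
data = signal + bias

coeffs = pywt.wavedec(data, "db4", level=5)
# Keep only the coarsest approximation; zero every detail level.
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
long_wavelength = pywt.waverec(approx_only, "db4")[: data.size]

corrected = data - long_wavelength              # bias-corrected profile
print(f"residual mean bias: {corrected.mean():.2f} mGal")
```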

49

Galaxy assembly bias: a significant source of systematic error in the galaxy-halo relationship  

NASA Astrophysics Data System (ADS)

Methods that exploit galaxy clustering to constrain the galaxy-halo relationship, such as the halo occupation distribution (HOD) and conditional luminosity function (CLF), assume halo mass alone suffices to determine a halo's galaxy content. Yet, halo clustering strength depends upon properties other than mass, such as formation time, an effect known as assembly bias. If galaxy characteristics are correlated with these auxiliary halo properties, the basic assumption of standard HOD/CLF methods is violated. We estimate the potential for assembly bias to induce systematic errors in inferred halo occupation statistics. We construct realistic mock galaxy catalogues that exhibit assembly bias as well as companion mock catalogues with identical HODs, but with assembly bias removed. We fit HODs to the galaxy clustering in each catalogue. In the absence of assembly bias, the inferred HODs describe the true HODs well, validating the methodology. However, in all cases with assembly bias, the inferred HODs exhibit significant systematic errors. We conclude that the galaxy-halo relationship inferred from galaxy clustering is subject to significant systematic errors induced by assembly bias. Efforts to model and/or constrain assembly bias should be priorities as assembly bias is a threatening source of systematic error in galaxy evolution and precision cosmology studies.

Zentner, Andrew R.; Hearin, Andrew P.; van den Bosch, Frank C.

2014-10-01

50

Error models for quantum state and parameter estimation  

NASA Astrophysics Data System (ADS)

Within the field of Quantum Information Processing, we study two subjects: For quantum state tomography, one common assumption is that the experimentalist possesses a stationary source of identical states. We challenge this assumption and propose a method to detect and characterize the drift of nonstationary quantum sources. We distinguish diffusive and systematic drifts and examine how quickly one can determine that a source is drifting. Finally, we give an implementation of this proposed measurement for single photons. For quantum computing, fault-tolerant protocols assume that errors are of certain types. But how do we detect errors of the wrong type? The problem is that for large quantum states, a full state description is impossible to analyze, and so one cannot detect all types of errors. We show through a quantum state estimation example (on up to 25 qubits) how to attack this problem using model selection. We use, in particular, the Akaike Information Criterion. Our example indicates that the number of measurements that one has to perform before noticing errors of the wrong type scales polynomially both with the number of qubits and with the error size. This dissertation includes previously published co-authored material.

Schwarz, Lucia

51

Statistical and systematic errors in redshift-space distortion measurements from large surveys  

NASA Astrophysics Data System (ADS)

We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density, and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(r_p, π) on scales larger than 3 h⁻¹ Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique, which is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc⁻¹). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach, to quickly and accurately predict statistical errors on RSD expected from future surveys.
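
For reference, the linear Kaiser model underlying the distortion parameter discussed above can be written in its standard textbook form (not reproduced from the paper):

    P_s(k, \mu) = \left(1 + \beta \mu^2\right)^2 b^2 P_m(k), \qquad \beta = \frac{f}{b},

where P_m(k) is the real-space matter power spectrum, b the tracer bias, f the linear growth rate, and \mu the cosine of the angle between the wavevector and the line of sight.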

Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.

2012-12-01

52

Ultraspectral Sounding Retrieval Error Budget and Estimation  

NASA Technical Reports Server (NTRS)

The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. These measurements of the thermodynamic state are intended to initialize weather and climate models. Great effort has been given to retrieving and validating these atmospheric, surface, and/or cloud properties. An Error Consistency Analysis Scheme (ECAS) has been developed to estimate the error budget, through fast radiative transfer model (RTM) forward and inverse calculations, in terms of the absolute values and standard deviations of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements from the Infrared Atmospheric Sounding Interferometer (IASI) on the METOP-A satellite.

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

2011-01-01

53

TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW  

PubMed Central

Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well-supported data on which training errors relate to or cause running related injuries are highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large-scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

2012-01-01

54

Standard errors of parameter estimates in the ETAS model  

E-print Network

Point process models are widely used in the description of seismic catalogs and in short-term earthquake forecasting. Conventional standard error estimates for the parameters of the ETAS model are based on the Hessian matrix of the log-likelihood function.

Schoenberg, Frederic Paik (Rick)

55

Short Communication DNA Sequence Error Rates in Genbank Records Estimated  

E-print Network

DNA sequence error rates in Genbank records containing protein-coding and non-coding DNA sequences were estimated using the mouse genome. The estimated single nucleotide error rate for intronic DNA sequences was 0.22% (SE 0…).

Keightley, Peter

56

Analysis of Systematic Errors in the MuLan Muon Lifetime Experiment  

NASA Astrophysics Data System (ADS)

The MuLan experiment seeks to measure the muon lifetime to 1 ppm. To achieve this level of precision, a multitude of systematic errors must be investigated. Analysis of the 2004 data set has been completed, resulting in a total error of 11 ppm (10 ppm statistical, 5 ppm systematic). Data obtained in 2006 are currently being analyzed, with an expected statistical error of 1.3 ppm. This talk will discuss the methods used to study and reduce the systematic errors for the 2004 data set, and improvements for the 2006 data set which should reduce the systematic errors even further.
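
The quoted total is consistent with the usual combination of independent statistical and systematic uncertainties in quadrature:

    \sigma_{\mathrm{tot}} = \sqrt{\sigma_{\mathrm{stat}}^2 + \sigma_{\mathrm{sys}}^2} = \sqrt{10^2 + 5^2}\ \mathrm{ppm} \approx 11.2\ \mathrm{ppm},

which rounds to the 11 ppm total reported for the 2004 data set.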

McNabb, Ronald

2007-04-01

57

Using ridge regression in systematic pointing error corrections  

NASA Technical Reports Server (NTRS)

A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
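
A minimal sketch of the ridge idea referenced above, with a deliberately collinear design matrix; the sin/cos regressors and the ridge parameter are hypothetical placeholders, not the actual antenna pointing model:

    # Ridge regression vs ordinary least squares on collinear regressors.
    # The "pointing model" terms below are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    az = rng.uniform(0, 2 * np.pi, n)                   # azimuth samples
    el = rng.uniform(0.1, np.pi / 2, n)                 # elevation samples

    X = np.column_stack([np.sin(az), np.cos(az),
                         np.sin(az) + 1e-3 * rng.normal(size=n),  # nearly collinear
                         np.cos(el)])
    beta_true = np.array([0.01, -0.02, 0.0, 0.005])
    y = X @ beta_true + 1e-4 * rng.normal(size=n)

    lam = 1e-3                                          # ridge parameter (bias/variance trade-off)
    beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]      # unstable under collinearity
    beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
    print("LS:   ", beta_ls)
    print("Ridge:", beta_ridge)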

Guiar, C. N.

1988-01-01

58

Rigorous Error Estimates for Reynolds' Lubrication Approximation  

NASA Astrophysics Data System (ADS)

Reynolds' lubrication equation is used extensively in engineering calculations to study flows between moving machine parts, e.g. in journal bearings or computer disk drives. It is also used extensively in micro- and bio-fluid mechanics to model creeping flows through narrow channels and in thin films. To date, the only rigorous justification of this equation (due to Bayada and Chambat in 1986 and to Nazarov in 1987) states that the solution of the Navier-Stokes equations converges to the solution of Reynolds' equation in the limit as the aspect ratio ε approaches zero. In this talk, I will show how the constants in these error bounds depend on the geometry. More specifically, I will show how to compute expansion solutions of the Stokes equations in a 2-d periodic geometry to arbitrary order and exhibit error estimates with constants which are either (1) given in the problem statement or easily computable from h(x), or (2) difficult to compute but universal (independent of h(x)). Studying the constants in the latter category, we find that the effective radius of convergence actually increases through 10th order, but then begins to decrease as the inverse of the order, indicating that the expansion solution is probably an asymptotic series rather than a convergent series.

Wilkening, Jon

2006-11-01

59

Systematic Biases in Human Heading Estimation  

PubMed Central

Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts which predict that estimates should be biased toward the most common straight forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion. PMID:23457631

Cuturi, Luigi F.; MacNeilage, Paul R.

2013-01-01

60

Estimating Transport Errors for Inverse Analysis  

NASA Astrophysics Data System (ADS)

Within the next five years, Earth Networks will deploy and operate a network of Greenhouse Gas (GHG) measuring instruments installed at tall towers with a collocated weather station. A typical design has a CRDS (cavity ring-down spectrometer) sensor, collecting continuous observations of atmospheric carbon dioxide and methane mixing ratios at multiple heights, as well as a calibration unit to ensure that data meet international GHG-monitoring standards. In the US, Earth Networks also operates more than 8,000 professional-grade surface weather stations, which provide measurements of more than 20 meteorological variables at high temporal resolution. Using Earth Networks' observations, we analyze how to account for imperfect representation of atmospheric winds in transport models for GHG footprint computations. Such footprints are used in inverse modeling to estimate natural and anthropogenic sources and sinks at regional and local scales. We discuss a setup where the atmospheric trajectories and surface footprints are computed using the STILT (Stochastic Time-Inverted Lagrangian Transport) model coupled to the WRF (Weather Research and Forecasting) model, which provides transport fields at refined spatial and temporal resolution. Dispersion of particles in simulated trajectories is controlled by spatially varying parameters that account for the transport error estimated from Earth Networks' surface observations as well as available data from NOAA (National Oceanic and Atmospheric Administration). Footprints generated using parameters averaged over different periods of time are compared. We discuss the pros and cons of shorter averaging intervals.

Novakovskaia, E.

2011-12-01

61

A study of systematic errors in the PMD CamBoard nano  

NASA Astrophysics Data System (ADS)

Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. After user self-calibration, the RMSE of the range observations was reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.

Chow, Jacky C. K.; Lichti, Derek D.

2013-04-01

62

Estimation of Radar-Rainfall Error Spatial Correlation  

Microsoft Academic Search

The authors present a study of a theoretical framework to estimate the radar-rainfall error spatial correlation using high density rain gauge networks and high quality data. The error is defined as the difference between the radar estimate and the true areal rainfall. Based on the framework of second-order rainfall field characterization, the authors propose a method for error spatial correlation

P. V. Mandapaka; W. F. Krajewski; G. Ciach; G. Villarini

2006-01-01

63

A posteriori pointwise error estimates for the boundary element method  

SciTech Connect

This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

Paulino, G.H. [Cornell Univ., Ithaca, NY (United States). School of Civil and Environmental Engineering; Gray, L.J. [Oak Ridge National Lab., TN (United States); Zarikian, V. [Univ. of Central Florida, Orlando, FL (United States). Dept. of Mathematics

1995-01-01

64

Systematic vertical error in UAV-derived topographic models: Origins and solutions  

NASA Astrophysics Data System (ADS)

Unmanned aerial vehicles (UAVs) equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs). However, although such DEMs may achieve centimetric detail, they can also display broad-scale systematic deformation (usually a vertical 'doming') that restricts their wider use. This effect can be particularly apparent in DEMs derived by structure-from-motion (SfM) processing, especially when control point data have not been incorporated in the bundle adjustment process. We illustrate that doming error results from a combination of inaccurate description of radial lens distortion and the use of imagery captured in near-parallel viewing directions. With such imagery, enabling camera self-calibration within the processing inherently leads to erroneous radial distortion values and associated DEM error. Using a simulation approach, we illustrate how existing understanding of systematic DEM error in stereo-pairs (from unaccounted radial distortion) up-scales in typical multiple-image blocks of UAV surveys. For image sets with dominantly parallel viewing directions, self-calibrating bundle adjustment (as normally used with images taken using consumer cameras) will not be able to derive radial lens distortion accurately, and will give associated systematic 'doming' DEM deformation. In the presence of image measurement noise (at levels characteristic of SfM software), and in the absence of control measurements, our simulations display domed deformation with amplitude of ~2 m over horizontal distances of ~100 m. We illustrate the sensitivity of this effect to variations in camera angle and flight height. Deformation will be reduced if suitable control points can be included within the bundle adjustment, but residual systematic vertical error may remain, accommodated by the estimated precision of the control measurements. Doming bias can be minimised by the inclusion of inclined images within the image set, for example, images collected during gently banked turns of a fixed-wing UAV or, if camera inclination can be altered, by just a few more oblique images with a rotor-based UAV. We provide practical flight plan solutions that, in the absence of control points, demonstrate a reduction in systematic DEM error by more than two orders of magnitude. DEM generation is subject to this effect whether a traditional photogrammetry or newer structure-from-motion (SfM) processing approach is used, but errors will be typically more pronounced in SfM-based DEMs, for which use of control measurements is often more limited. Although focussed on UAV surveying, our results are also relevant to ground-based image capture for SfM-based modelling.
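
The radial lens distortion whose mis-estimation drives the doming is conventionally modeled with the standard Brown polynomial (textbook form, not quoted from the paper):

    x_d = x\,(1 + k_1 r^2 + k_2 r^4), \qquad y_d = y\,(1 + k_1 r^2 + k_2 r^4), \qquad r^2 = x^2 + y^2,

so a biased estimate of k_1 bends a flat surface into a smoothly curved (domed) one; near-parallel viewing geometry is precisely the configuration in which self-calibration leaves k_1 poorly constrained.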

James, Mike R.; Robson, Stuart

2014-05-01

65

Systematic Errors in GNSS Radio Occultation Data - Part 2  

NASA Astrophysics Data System (ADS)

The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites indeed show an amazing consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since it does not need additional background information in its retrieval. Concurrent trends in water vapor could, however, masquerade as false trends in dry temperature. We analyzed this effect and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights where specified values of differences between dry and physical temperature are encountered increase by about 150 m per decade, with little difference among the 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants relating atmospheric microwave refractivity to temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are no longer negligible. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows one to compute sensitivities to changes in atmospheric composition, where we found that the effect of the CO2 increase is currently almost exactly balanced by the counteracting effect of the concurrent O2 decrease.
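
The "classic" refractivity constants mentioned in point (2) enter through the standard Smith-Weintraub-type relation (textbook form with commonly cited coefficient values; the paper's point is to refine such constants):

    N = k_1 \frac{p}{T} + k_2 \frac{e}{T^2} \approx 77.6\,\frac{p}{T} + 3.73 \times 10^{5}\,\frac{e}{T^2},

with p the pressure (hPa), T the temperature (K), and e the water vapor partial pressure (hPa); dry temperature follows from setting e = 0 and inverting for T.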

Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc

2014-05-01

66

Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)  

NASA Technical Reports Server (NTRS)

A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
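
A sketch of the kind of area averaging described above, using a cosine-of-latitude weight as an area proxy; the grid, error values, and tropical mask are hypothetical, and the actual GPCP analysis uses its own empirical technique:

    # Cosine-latitude weighted average of a gridded bias-error field.
    # Hypothetical 2.5-degree grid and error values (mm/day).
    import numpy as np

    lats = np.arange(-88.75, 90, 2.5)                   # grid-cell center latitudes
    lons = np.arange(-178.75, 180, 2.5)
    rng = np.random.default_rng(1)
    s = rng.uniform(0.1, 1.0, (lats.size, lons.size))   # estimated bias errors

    w = np.cos(np.deg2rad(lats))[:, None] * np.ones_like(s)  # ~ cell-area weight
    tropics = np.abs(lats) <= 25.0                      # simple tropical belt mask

    print(f"global:  {np.average(s, weights=w):.2f} mm/day")
    print(f"tropics: {np.average(s[tropics, :], weights=w[tropics, :]):.2f} mm/day")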

Adler, Robert; Gu, Guojun; Huffman, George

2012-01-01

67

Reducing Model Systematic Error through Super Modelling - The Tropical Pacific  

NASA Astrophysics Data System (ADS)

Numerical models are key tools in the projection of future climate change. However, state-of-the-art general circulation models (GCMs) exhibit significant systematic errors, and large uncertainty exists in future climate projections because of limitations in parameterization schemes and numerical formulations. The general approach to tackling uncertainty is to use an ensemble of several different GCMs. However, ensemble results may smear out major variability, such as the ENSO. Here we take a novel approach and build a super model (i.e., an optimal combination of several models): We coupled two atmospheric GCMs (AGCMs) with one ocean GCM (OGCM). The two AGCMs receive identical boundary conditions from the OGCM, while the OGCM is driven by a weighted flux combination from the AGCMs. The atmospheric models differ only in their convection scheme. As climate models show large sensitivity to convection schemes, this approach may be a good basis for constructing a super model. We performed experiments with a machine learning algorithm to adjust the coefficients. The coupling strategy is able to synchronize atmospheric variability of the two AGCMs in the tropics, particularly over the western equatorial Pacific, and produce reasonable climate variability. Furthermore, the model with optimal coefficients not only performs well for surface temperature and precipitation, but its positive Bjerknes feedback and negative heat flux feedback also match observations/reanalysis well, leading to a substantially improved simulation of ENSO.

Shen, M.; Keenlyside, N. S.; Selten, F.; Wiegerinck, W.; Duane, G. S.

2013-12-01

68

CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes  

NASA Technical Reports Server (NTRS)

Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43 ± 0.35 PgC/yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

2012-01-01

69

A Note on Confidence Interval Estimation and Margin of Error  

ERIC Educational Resources Information Center

Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…
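
For reference, the textbook margin of error for a mean, around which such examples revolve, is:

    \mathrm{ME} = z_{\alpha/2}\,\frac{\sigma}{\sqrt{n}}, \qquad \text{e.g.}\quad 1.96 \times \frac{15}{\sqrt{100}} = 2.94

at 95% confidence with σ = 15 and n = 100, giving the interval \bar{x} \pm 2.94.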

Gilliland, Dennis; Melfi, Vince

2010-01-01

70

Novel Detection Algorithm of IDMA System under Channel Estimation Error  

E-print Network

A novel detection algorithm is proposed for the elementary signal estimator of an IDMA system considering channel estimation error.

Lee, Jae Hong

71

Original article Measurement error in the estimation of intake  

E-print Network

Measurement error in the estimation of intake from herbage patches is examined, focusing on the error introduced by the indirect estimation of herbage intake from a patch, including the between-animal variance component of the grazing trials. Keywords: intake / patch / estimation / cattle.

Paris-Sud XI, Université de

72

Application of Bayesian Systematic Error Correction to Kepler Photometry  

NASA Astrophysics Data System (ADS)

In a companion talk (Jenkins et al.), we present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data, in which a subset of intrinsically quiet and highly correlated stars is used to establish the range of "reasonable" robust fit parameters, and hence mitigate the loss of astrophysical signal and noise injection on transit time scales (< 3 d), which afflict Least Squares (LS) fitting. In this poster, we illustrate the concept in detail by applying MAP to publicly available Kepler data, and give an overview of its application to all Kepler data collected through June 2010. We define the correlation function between normalized, mean-removed light curves and select a subset of highly correlated stars. This ensemble of light curves can then be combined with ancillary engineering data and image motion polynomials to form a design matrix from which the principal components are extracted by reduced-rank SVD decomposition. MAP is then represented in the resulting orthonormal basis, and applied to the set of all light curves. We show that the correlation matrix after treatment is diagonal, and present diagnostics such as correlation coefficient histograms, singular value spectra, and principal component plots. We then show the benefits of MAP applied to variable stars with RR Lyrae, harmonic, chaotic, and eclipsing binary waveforms, and examine the impact of MAP on transit waveforms and detectability. After high-pass filtering the MAP output, we show that MAP does not increase noise on transit time scales, compared to LS. We conclude with a discussion of current work selecting input vectors for the design matrix, representing and numerically solving MAP for non-Gaussian probability distribution functions (PDFs), and suppressing high-frequency noise injection with Lagrange multipliers. Funding for this mission is provided by NASA, Science Mission Directorate.
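
A toy sketch of the pipeline's overall shape (an SVD basis built from quiet, correlated light curves, followed by a regularized fit standing in for the MAP step): all data, dimensions, and the prior strength are hypothetical, and the actual Kepler pipeline uses empirically derived priors and additional design-matrix inputs:

    # Toy version: systematics basis via reduced-rank SVD, then a
    # ridge/MAP-like fit (zero-mean Gaussian prior). Values hypothetical.
    import numpy as np

    rng = np.random.default_rng(2)
    n_time, n_quiet = 1000, 50
    trend = np.cumsum(rng.normal(size=n_time))              # shared systematic
    quiet = (trend[:, None] * rng.uniform(0.5, 1.5, n_quiet)
             + 0.1 * rng.normal(size=(n_time, n_quiet)))
    quiet -= quiet.mean(axis=0)                             # mean-removed curves

    U, sv, Vt = np.linalg.svd(quiet, full_matrices=False)
    X = U[:, :4]                                            # leading components only

    signal = 0.5 * np.sin(np.linspace(0, 20, n_time))       # astrophysical signal
    target = 0.8 * trend + signal
    target -= target.mean()

    lam = 10.0                                              # prior strength (hypothetical)
    coef = np.linalg.solve(X.T @ X + lam * np.eye(4), X.T @ target)
    detrended = target - X @ coef                           # systematics removed
    print("rms before:", target.std(), "after:", detrended.std())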

Van Cleve, Jeffrey E.; Jenkins, J. M.; Twicken, J. D.; Smith, J. C.; Fanelli, M. N.

2011-01-01

73

Estimating IMU heading error from SAR images.  

SciTech Connect

Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

Doerry, Armin Walter

2009-03-01

74

Chip-level soft error estimation method  

Microsoft Academic Search

This paper gives a review of considerations necessary for the prediction of soft error rates (SERs) for microprocessor designs. It summarizes the physics and silicon process dependencies of soft error mechanisms and describes the determination of SERs for basic circuit types. It reviews the impact of logical and architectural filtering on SER calculations and focuses on the structural filtering of

Hang T. Nguyen; Yoad Yagil; Norbert Seifert; Mike Reitsma

2005-01-01

75

An analysis of the least-squares problem for the DSN systematic pointing error model  

NASA Technical Reports Server (NTRS)

A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

Alvarez, L. S.

1991-01-01

76

Estimation of model error variances during data assimilation  

NASA Astrophysics Data System (ADS)

Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data assimilation system.

Dee, D.

2003-04-01

77

Estimation of Model Error Variances During Data Assimilation  

NASA Technical Reports Server (NTRS)

Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data assimilation system.

Dee, Dick

2003-01-01

78

Probabilistic state estimation in regimes of nonlinear error growth  

E-print Network

State estimation, or data assimilation as it is often called, is a key component of numerical weather prediction (NWP). Nearly all implementable methods of state estimation suitable for NWP are forced to assume that errors ...

Lawson, W. Gregory, 1975-

2005-01-01

79

NETRA: Interactive Display for Estimating Refractive Errors and Focal Range  

E-print Network

We introduce an interactive, portable, and inexpensive solution for estimating refractive errors in the human eye. While expensive optical devices for automatic estimation of refractive correction exist, our goal is to ...

Pamplona, Vitor F.

80

Implicit a posteriori error estimates for the Maxwell equations  

NASA Astrophysics Data System (ADS)

An implicit a posteriori error estimation technique is presented and analyzed for the numerical solution of the time-harmonic Maxwell equations using Nedelec edge elements. For this purpose we define a weak formulation for the error on each element and provide an efficient and accurate numerical solution technique to solve the error equations locally. We investigate the well-posedness of the error equations and also consider the related eigenvalue problem for cubic elements. Numerical results for both smooth and non-smooth problems, including a problem with reentrant corners, show that an accurate prediction is obtained for the local error, and in particular the error distribution, which provides essential information to control an adaptation process. The error estimation technique is also compared with existing methods and provides significantly sharper estimates for a number of reported test cases.

Izsak, Ferenc; Harutyunyan, Davit; van der Vegt, Jaap J. W.

2008-09-01

81

A Systematic Review of Software Development Cost Estimation Studies  

E-print Network

This paper aims at the improvement of software estimation research through a systematic review of previous work. The review identifies 304 software cost estimation papers in 76 journals and classifies the papers according to research…

82

A POSTERIORI FINITE ELEMENT ERROR ESTIMATION FOR DIFFUSION PROBLEM  

E-print Network

A posteriori finite element error estimation for the diffusion problem is presented, addressing quadrilateral-element meshes, problems with singularities, and nonlinear problems. A posteriori error estimates are a standard ingredient of adaptive finite element software. Key words: finite element.

Adjerid, Slimane

83

Fisher classifier and its probability of error estimation  

NASA Technical Reports Server (NTRS)

Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
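
A brief sketch of leave-one-out error estimation for a Fisher-type linear classifier on synthetic two-class data; scikit-learn's linear discriminant stands in for the Fisher classifier, and the loop is brute-force rather than the computationally efficient closed-form expressions the paper derives:

    # Brute-force leave-one-out estimate of the probability of error.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score

    rng = np.random.default_rng(3)
    n = 100                                             # samples per class
    X = np.vstack([rng.normal(0.0, 1.0, (n, 2)),        # class 0
                   rng.normal(2.0, 1.0, (n, 2))])       # class 1
    y = np.repeat([0, 1], n)

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
    print(f"leave-one-out error estimate: {1.0 - acc.mean():.3f}")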

Chittineni, C. B.

1979-01-01

84

A review on the impact of systematic safety processes for the control of error in medicine.  

PubMed

Among risk management initiatives, systematic safety processes (SSPs), implemented within health care organizations, could be useful in managing patient safety. The purpose of this article is to conduct a systematic literature review assessing the impact of SSPs on different error categories. Articles that investigated the relation between SSPs and clinical and organizational outcomes were selected from the scientific literature. The proportion and impact of proactive and reactive SSPs were calculated among five error categories. Proactive interventions had a more positive impact than reactive ones in reducing medication errors, technical errors, and errors due to personnel. Proactive and reactive SSPs had similar effects in reducing errors related to a wrong procedure. A single reactive study had a non-positive influence on communication errors. Overall, a prevalence of the impact of proactive processes over reactive ones is reported. This article can help decision makers identify which SSP can be the most appropriate against specific error categories. PMID:19564841

Damiani, Gianfranco; Pinnarelli, Luigi; Scopelliti, Lucia; Sommella, Lorenzo; Ricciardi, Walter

2009-07-01

85

OPTIMAL ERROR EXPONENTS IN HIDDEN MARKOV MODELS ORDER ESTIMATION  

E-print Network

We consider order estimation for finite alphabet Hidden Markov Models (HMMs). The estimators we investigate are related to code-based order estimators; for most HMMs, the over-estimation exponent is null. Keywords: Hidden Markov Model, order estimation.

Gassiat, Elisabeth

86

Experimental investigation of the systematic error on photomechanic methods induced by camera self-heating.  

PubMed

The systematic error for photomechanic methods caused by self-heating induced image expansion when using a digital camera was systematically studied, and a new physical model to explain the mechanism has been proposed and verified. The experimental results showed that the thermal expansion of the camera outer case and lens mount, instead of mechanical components within the camera, were the main reason for image expansion. The corresponding systematic error for both image analysis and fringe analysis based photomechanic methods were analyzed and measured, then error compensation techniques were proposed and verified. PMID:23546150

Ma, Qinwei; Ma, Shaopeng

2013-03-25

87

Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect  

NASA Technical Reports Server (NTRS)

Imaging of the Sunyaev-Zeldovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0 analyzing the clusters using either their large or mean angular core radius are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.

Sulkanen, Martin E.; Patel, Sandeep K.

1998-01-01

88

MPDATA error estimator for mesh adaptivity  

Microsoft Academic Search

In multidimensional positive definite advection transport algorithm (MPDATA) the leading error as well as the first- and second-order solutions are known explicitly by design. This property is employed to construct refinement indicators for mesh adaptivity. Recent progress with the edge-based formulation of MPDATA facilitates the use of the method in an unstructured-mesh environment. In particular, the edge-based data structure allows

Joanna Szmelter; Piotr K. Smolarkiewicz

2006-01-01

89

Effects of national forest inventory plot location error on forest carbon stock estimation using k-nearest neighbor algorithm  

NASA Astrophysics Data System (ADS)

This paper suggested simulation approaches for quantifying and reducing the effects of National Forest Inventory (NFI) plot location error on aboveground forest biomass and carbon stock estimation using the k-Nearest Neighbor (kNN) algorithm. Additionally, the effects of plot location error in pre-GPS and GPS NFI plots were compared. Two South Korean cities, Sejong and Daejeon, were chosen to represent the study area, for which four Landsat TM images were collected together with two NFI datasets established in both the pre-GPS and GPS eras. The effects of plot location error were investigated in two ways: systematic error simulation, and random error simulation. Systematic error simulation was conducted to determine the effect of plot location error due to mis-registration. All of the NFI plots were successively moved against the satellite image in 360° directions, and the systematic error patterns were analyzed on the basis of the changes of the Root Mean Square Error (RMSE) of kNN estimation. In the random error simulation, the inherent random location errors in NFI plots were quantified by Monte Carlo simulation. After removal of both the estimated systematic and random location errors from the NFI plots, the RMSE% were reduced by 11.7% and 17.7% for the two pre-GPS-era datasets, and by 5.5% and 8.0% for the two GPS-era datasets. The experimental results showed that the pre-GPS NFI plots were more subject to plot location error than were the GPS NFI plots. This study's findings demonstrate a potential remedy for reducing NFI plot location errors which may improve the accuracy of carbon stock estimation in a practical manner, particularly in the case of pre-GPS NFI data.
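
A schematic of the random-error simulation idea: a smooth stand-in for the imagery is sampled at mislocated plot positions, and the RMSE of a kNN estimator is tracked as the location-error scale grows. The predictor field, biomass model, and error scales are all hypothetical; the study itself pairs NFI plots with Landsat TM predictors:

    # Schematic Monte Carlo: kNN estimation accuracy vs plot location error.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(4)

    def predictor(xy):
        """Hypothetical smooth 'image' value over a 10 km square."""
        return np.sin(xy[:, 0] / 800.0) + np.cos(xy[:, 1] / 600.0)

    coords = rng.uniform(0, 10_000, (300, 2))           # true plot locations (m)
    biomass = 80 + 30 * predictor(coords) + rng.normal(0, 5, 300)  # t/ha

    def rmse_with_location_error(sigma_m):
        shifted = coords + rng.normal(0, sigma_m, coords.shape)
        X = predictor(shifted)[:, None]                 # image sampled at wrong spot
        model = KNeighborsRegressor(n_neighbors=5).fit(X, biomass)
        pred = model.predict(predictor(coords)[:, None])
        return float(np.sqrt(np.mean((pred - biomass) ** 2)))

    for sigma in (0.0, 30.0, 100.0):                    # location-error scales (m)
        print(f"sigma = {sigma:5.1f} m -> RMSE {rmse_with_location_error(sigma):.2f} t/ha")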

Jung, Jaehoon; Kim, Sangpil; Hong, Sungchul; Kim, Kyoungmin; Kim, Eunsook; Im, Jungho; Heo, Joon

2013-07-01

90

Bias in parameter estimation of form errors  

NASA Astrophysics Data System (ADS)

The surface form qualities of precision components are critical to their functionalities. In precision instruments, algebraic fitting is usually adopted and the form deviations are assessed in the z direction only, in which case the deviations at steep regions of curved surfaces will be over-weighted, making the fitted results biased and unstable. In this paper the orthogonal distance fitting is performed for curved surfaces and the form errors are measured along the normal vectors of the fitted ideal surfaces. The relative bias of the form error parameters between the vertical assessment and orthogonal assessment is analytically calculated and represented as a function of the surface slopes. The parameter bias caused by the non-uniformity of data points can be corrected by weighting, i.e. each data point is weighted by the 3D area of the Voronoi cell around its projection point on the fitted surface. Finally, numerical experiments are given to compare different fitting methods and definitions of the form error parameters. The proposed definition is demonstrated to show great superiority in terms of stability and unbiasedness.
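
A compact sketch of orthogonal distance fitting with SciPy's ODR package, using a synthetic straight line in place of a measured precision surface (the paper fits curved surfaces and adds Voronoi-cell area weighting, which is not shown here):

    # Orthogonal distance regression: errors in both coordinates.
    import numpy as np
    from scipy import odr

    rng = np.random.default_rng(5)
    x_true = np.linspace(0, 10, 50)
    x = x_true + rng.normal(0, 0.2, x_true.size)        # noise in x as well as y
    y = 0.8 * x_true + 2.0 + rng.normal(0, 0.2, x_true.size)

    def line(beta, x):
        return beta[0] * x + beta[1]

    data = odr.RealData(x, y, sx=0.2, sy=0.2)           # uncertainties on both axes
    fit = odr.ODR(data, odr.Model(line), beta0=[1.0, 0.0]).run()
    print("slope, intercept:", fit.beta)                # orthogonal-distance estimates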

Zhang, Xiangchao; Zhang, Hao; He, Xiaoying; Xu, Min

2014-09-01

91

Estimates of Random Error in Satellite Rainfall Averages  

NASA Technical Reports Server (NTRS)

Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

Bell, Thomas L.; Kundu, Prasun K.

2003-01-01

92

Mapping random and systematic errors of satellite-derived snow water equivalent observations in Eurasia  

E-print Network

for the 1990-1991 snow season (November-April) have been examined. Dense vegetation, especially in the taiga are greatest, (in the taiga and tundra regions) are the major source of systematic error. Assumptions about how

Walker, Jeff

93

Nonparametric Item Response Curve Estimation with Correction for Measurement Error  

ERIC Educational Resources Information Center

Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

Guo, Hongwen; Sinharay, Sandip

2011-01-01

94

Bootstrapped DEPICT for error estimation in PET functional imaging  

Microsoft Academic Search

Basis pursuit denoising is a new approach for data-driven estimation of parametric images from dynamic positron emission tomography (PET) data. At present, this kinetic modeling technique does not allow for the estimation of the errors on the parameters. These estimates are useful when performing subsequent statistical analysis, such as inference across a group of subjects or when applying partial volume

Sunil L. Kukreja; Roger N. Gunn

2004-01-01

95

Bootstrap Estimates of Standard Errors in Generalizability Theory  

ERIC Educational Resources Information Center

Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

Tong, Ye; Brennan, Robert L.

2007-01-01

96

Preliminary estimates of radiosonde thermistor errors  

NASA Technical Reports Server (NTRS)

Radiosonde temperature measurements are subject to errors, not the least of which is the effect of long- and short-wave radiation. Methods of adjusting the daytime temperatures to a nighttime equivalent are used by some analysis centers. Other than providing consistent observations for analysis, this procedure does not provide a true correction. The literature discusses the problem of radiosonde temperature errors, but it is not apparent what effort, if any, has been taken to quantify these errors. To accomplish the latter, radiosondes containing multiple thermistors with different coatings were flown at Goddard Space Flight Center/Wallops Flight Facility. The coatings employed had different spectral characteristics and, therefore, different absorption and emissivity properties. Discrimination of the recorded temperatures enabled day and night correction values to be determined for the US standard white-coated rod thermistor. The correction magnitudes are given, and US measured temperatures before and after correction are compared with temperatures measured with the Vaisala radiosonde. The corrections are in the proper direction, day and night, and reduce day-night temperature differences to less than 0.5 C between the surface and 30 hPa. The present uncorrected temperatures used with the Viz radiosonde have day-night differences that exceed 1 C at levels below 90 hPa. Additional measurements are planned to confirm these preliminary results and determine the solar elevation angle effect on the corrections. The technique used to obtain the corrections may also be used to recover a true absolute value and might be considered a valuable contribution to the meteorological community for use as a reference instrument.

Schmidlin, F. J.; Luers, J. K.; Huffman, P. D.

1986-01-01

97

A posteriori error estimate for the mixed finite element method  

Microsoft Academic Search

A computable error bound for mixed finite element methods is established in the model case of the Poisson problem to control the error in the H(div,Ω) × L2(Ω) norm. The reliable and efficient a posteriori error estimate applies, e.g., to Raviart-Thomas, Brezzi-Douglas-Marini, and Brezzi-Douglas-Fortin-Marini elements. Mixed finite element methods are well-established in the numerical treatment of

Carsten Carstensen

1997-01-01

98

Simultaneous Estimation of Photometric Redshifts and SED Parameters: Improved Techniques and a Realistic Error Budget  

E-print Network

We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties on the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the mu...

Acquaviva, Viviana; Gawiser, Eric

2015-01-01

99

Estimation of scattering error in spectrophotometric measurements of light absorption  

E-print Network

A method is presented for estimating the scattering error in spectrophotometric measurements of light absorption by aquatic particles made with a typical laboratory double-beam spectrophotometer, based on the scattering function of particles. We applied this method to absorption measurements made on marine phytoplankton

Stramski, Dariusz

100

A Note on Confidence Interval Estimation in Measurement Error Adjustment  

Microsoft Academic Search

The goal of adjusting for covariate measurement error is generally to obtain valid point and confidence interval estimates for the parameters of a regression model that would be applied if all covariates were error-free. In this article, we point out a potentially undesirable feature of many standard adjusted confidence intervals. Specifically, the coverage can be unbalanced, in the sense that

Robert H. Lyles; Lawrence L. Kupper

1999-01-01

101

Error magnitude estimation in model-reference adaptive systems  

NASA Technical Reports Server (NTRS)

A second order approximation is derived from a linearized error characteristic equation for Lyapunov designed model-reference adaptive systems and is used to estimate the maximum error between the model and plant states, and the time to reach this peak following a plant perturbation. The results are applicable in the analysis of plants containing magnitude-dependent nonlinearities.

Colburn, B. K.; Boland, J. S., III

1975-01-01

102

Color Estimation Error Trade-offs  

E-print Network

Camera sensor responses must be transformed to calibrated (human) color representations for display or print reproduction. Errors in these color rendering transformations can arise from a variety of sources, including (a) noise

Wandell, Brian A.

103

Estimation error of INS transfer alignment through observability analysis  

Microsoft Academic Search

Estimation errors of the transfer alignment of the inertial navigation system (INS) are presented in two cases, which are stationary state and constant velocity maneuver. The analyzed state variables include misalignment and inertial sensor errors. The INS transfer alignment model uses the velocity matching algorithm in this paper. In order to simplify the deduction process, the velocity matching model is

Yanling Hao; Zhilan Xiong; Baiya Xiong

2006-01-01

104

Energy norm a posteriori error estimation for discontinuous Galerkin methods  

Microsoft Academic Search

In this paper we present a residual-based a posteriori error estimate of a natural mesh dependent energy norm of the error in a family of discontinuous Galerkin approximations of elliptic problems. The theory is developed for an elliptic model problem in two and three spatial dimensions and general nonconvex polygonal domains are allowed. We also present some illustrating numerical examples.

Roland Becker; Peter Hansbo; Mats G. Larson

2003-01-01

105

Estimating the sources of motor errors for adaptation and generalization  

PubMed Central

Motor adaptation is usually defined as the process by which our nervous system produces accurate movements, while the properties of our bodies and our environment continuously change. Numerous experimental and theoretical studies have characterized this process by assuming that the nervous system uses internal models to compensate for motor errors. Here we extend these approaches and construct a probabilistic model that not only compensates for motor errors but estimates the sources of these errors. These estimates dictate how the nervous system should generalize. For example, estimated changes of limb properties will affect movements across the workspace but not movements with the other limb. We provide evidence that many movement generalization phenomena emerge from a strategy by which the nervous system estimates the sources of our motor errors. PMID:19011624

Berniker, Max; Kording, Konrad

2009-01-01

106

Medication errors in paediatric care: a systematic review of epidemiology and an evaluation of evidence supporting reduction strategy recommendations  

PubMed Central

Background Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children. Objective To synthesise peer reviewed knowledge on children's medication errors and on recommendations to improve paediatric medication safety by a systematic literature review. Data sources PubMed, Embase and Cinahl from 1 January 2000 to 30 April 2005, and 11 national entities that have disseminated recommendations to improve medication safety. Study selection Inclusion criteria were peer reviewed original data in English language. Studies that did not separately report paediatric data were excluded. Data extraction Two reviewers screened articles for eligibility and for data extraction, and screened all national medication error reduction strategies for relevance to children. Data synthesis From 358 articles identified, 31 were included for data extraction. The definition of medication error was non-uniform across the studies. Dispensing and administering errors were the most poorly and non-uniformly evaluated. Overall, the distributional epidemiological estimates of the relative percentages of paediatric error types were: prescribing 3–37%, dispensing 5–58%, administering 72–75%, and documentation 17–21%. 26 unique recommendations for strategies to reduce medication errors were identified; none were based on paediatric evidence. Conclusions Medication errors occur across the entire spectrum of prescribing, dispensing, and administering, are common, and have a myriad of non-evidence-based potential reduction strategies. Further research in this area needs a firmer standardisation for items such as dose ranges and definitions of medication errors, broader scope beyond inpatient prescribing errors, and prioritisation of implementation of medication error reduction strategies. PMID:17403758

Miller, Marlene R; Robinson, Karen A; Lubomski, Lisa H; Rinke, Michael L; Pronovost, Peter J

2007-01-01

107

FORWARD AND RETRANSMITTED SYSTEMATIC LOSSY ERROR PROTECTION FOR IPTV VIDEO MULTICAST  

E-print Network

Advances in video and networking technologies have made IPTV video multicast services possible. Impulse noise on the access network can arise from sources such as lightning strikes. Depending on the duration, impulse noise can be put into three categories, namely

Girod, Bernd

108

Systematic errors in weak lensing: application to SDSS galaxy-galaxy weak lensing  

Microsoft Academic Search

Weak lensing is emerging as a powerful observational tool to constrain cosmological models, but is at present limited by an incomplete understanding of many sources of systematic error. Many of these errors are multiplicative and depend on the population of background galaxies. We show how the commonly cited geometric test, which is rather insensitive to cosmology, can be used as

Rachel Mandelbaum; Christopher M. Hirata; Uros Seljak; Jacek Guzik; Nikhil Padmanabhan; Cullen Blake; Michael R. Blanton; Robert Lupton; Jonathan Brinkmann

2005-01-01

109

Difference image analysis: The interplay between the photometric scale factor and systematic photometric errors  

E-print Network

Context: Understanding the source of systematic errors in photometry is essential for their calibration. Aims: We investigate how photometry performed on difference images can be influenced by errors in the photometric scale factor. Methods: We explore the equations for difference image analysis (DIA) and we derive an expression describing how errors in the difference flux, the photometric scale factor and the reference flux are propagated to the object photometry. Results: We find that the error in the photometric scale factor is important, and while a few studies have shown that it can be at a significant level, it is currently neglected by the vast majority of photometric surveys employing DIA. Conclusions: Minimising the error in the photometric scale factor, or compensating for it in a post-calibration model, is crucial for reducing the systematic errors in DIA photometry.

Bramich, D M; Alsubai, K A; Mislis, D; Parley, N

2015-01-01
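
The propagation the abstract above derives can be illustrated with a first-order sketch. The code below assumes the common DIA convention f_tot = f_ref + f_diff / p, where p is the photometric scale factor; the paper's exact expression may differ, and the function name and arguments are hypothetical.

    import numpy as np

    def dia_flux_and_error(f_ref, f_diff, p, s_ref, s_diff, s_p):
        # Total flux under the assumed DIA convention.
        f_tot = f_ref + f_diff / p
        # First-order propagation of independent errors in f_ref, f_diff and p.
        var = s_ref**2 + (s_diff / p)**2 + (f_diff * s_p / p**2)**2
        return f_tot, np.sqrt(var)

The (f_diff * s_p / p**2) term makes visible why the scale-factor error matters most for objects with large difference fluxes.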

110

Using doppler radar images to estimate aircraft navigational heading error  

DOEpatents

A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

Doerry, Armin W. (Albuquerque, NM); Jordan, Jay D. (Albuquerque, NM); Kim, Theodore J. (Albuquerque, NM)

2012-07-03

111

Estimating Errors of Indirect Measurement on Realistic Parallel Machines: Routings on 2-D and 3-D Meshes That are Nearly Optimal  

E-print Network

An algorithm f that transforms the results ~xi into an estimate ~y for y is often extremely ... if the error of this measurement is ±10, this means that there surely is sufficient oil in this area to make drilling economically ...

Kearfott, R. Baker

112

Stress Recovery and Error Estimation for 3-D Shell Structures  

NASA Technical Reports Server (NTRS)

The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

Riggs, H. R.

2000-01-01

113

DtaRefinery, a Software Tool for Elimination of Systematic Errors from Parent Ion Mass Measurements in Tandem Mass Spectra Data Sets*  

PubMed Central

Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and high throughput MS/MS fragmentation have become widely available in recent years, allowing for significantly better discrimination between true and false MS/MS peptide identifications by the application of a relatively narrow window for maximum allowable deviations of measured parent ion masses. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. Based on our previous studies of systematic biases in mass measurement errors, here, we have designed an algorithm and software tool that eliminates the systematic errors from the peptide ion masses in MS/MS data. We demonstrate that the elimination of the systematic mass measurement errors allows for the use of tighter criteria on the deviation of measured mass from theoretical monoisotopic peptide mass, resulting in a reduction of both false discovery and false negative rates of peptide identification. A software implementation of this algorithm called DtaRefinery reads a set of fragmentation spectra, searches for MS/MS peptide identifications using a FASTA file containing expected protein sequences, fits a regression model that can estimate systematic errors, and then corrects the parent ion mass entries by removing the estimated systematic error components. The output is a new file with fragmentation spectra with updated parent ion masses. The software is freely available. PMID:20019053

Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

2010-01-01
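
As a rough illustration of the correction scheme described above, the sketch below fits a simple linear trend of parent-ion mass error (in ppm) against measured mass and subtracts it. DtaRefinery's actual regression model is richer than this stand-in, and the function name is hypothetical.

    import numpy as np

    def remove_systematic_mass_error(measured, theoretical):
        # Mass error in parts per million for confidently identified peptides.
        ppm = (measured - theoretical) / theoretical * 1e6
        # Stand-in regression model: linear ppm trend vs. measured mass.
        slope, intercept = np.polyfit(measured, ppm, 1)
        systematic_ppm = slope * measured + intercept
        # Subtract the estimated systematic component from the measured masses.
        return measured / (1.0 + systematic_ppm * 1e-6)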

114

Period Error Estimation for the Kepler Eclipsing Binary Catalog  

NASA Astrophysics Data System (ADS)

The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.

Mighell, Kenneth J.; Plavchan, Peter

2013-06-01
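
The period-error model quoted in this record is easy to evaluate directly. A minimal sketch, assuming the logarithms are base 10 (consistent with the quoted ~0.0144-day plateau at the 62.5-day changeover); the function name is hypothetical.

    import math

    def kebc_period_error(period_days):
        # log10(sigma_P) ~ -5.8908 + 1.4425 * (1 + log10(P)) for P < 62.5 d
        if period_days < 62.5:
            return 10 ** (-5.8908 + 1.4425 * (1 + math.log10(period_days)))
        return 0.0144  # quoted plateau for long-period systems

    print(kebc_period_error(2.5))  # ~1.3e-4 days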

115

PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG  

SciTech Connect

The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ~0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.

Mighell, Kenneth J. [National Optical Astronomy Observatory, 950 North Cherry Avenue, Tucson, AZ 85719 (United States); Plavchan, Peter [NASA Exoplanet Science Institute, California Institute of Technology, Pasadena, CA 91125 (United States)

2013-06-15

116

An Empirical State Error Covariance Matrix for Batch State Estimation  

NASA Technical Reports Server (NTRS)

State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).

Frisbee, Joseph H., Jr.

2011-01-01
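
One concrete way to realize the idea sketched in this record, assuming a standard batch weighted-least-squares formulation: scale the formal covariance by the average weighted residual variance, so that the actual measurement residuals (all error sources) enter the state error covariance. This is only an illustrative sketch, not the paper's derivation, and the function name is hypothetical.

    import numpy as np

    def wls_with_empirical_covariance(A, y, W):
        N = A.T @ W @ A                      # normal matrix
        x = np.linalg.solve(N, A.T @ W @ y)  # batch WLS state estimate
        r = y - A @ x                        # measurement residuals
        m, n = A.shape
        s2 = (r @ W @ r) / (m - n)           # average weighted residual variance
        return x, s2 * np.linalg.inv(N)      # residual-scaled ("empirical") covariance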

117

COBE Differential Microwave Radiometers - Preliminary systematic error analysis  

NASA Technical Reports Server (NTRS)

The techniques available for the identification and subtraction of sources of systematic uncertainty from data of the Differential Microwave Radiometer (DMR) instrument aboard COBE are discussed. Preliminary limits on their magnitude in the DMR 1-yr maps are presented. Residual uncertainties in the best DMR sky maps, after correcting the raw data for systematic effects, are less than 6 micro-K for the pixel rms variation, less than 3 micro-K for the rms quadrupole amplitude of a spherical harmonic expansion, and less than 30 micro-K squared for the correlation function.

Kogut, A.; Smoot, G. F.; Bennett, C. L.; Wright, E. L.; Aymon, J.; De Amici, G.; Hinshaw, G.; Jackson, P. D.; Kaita, E.; Keegstra, P.

1992-01-01

118

Estimated genotype error rates from bowhead whale microsatellite data  

Microsoft Academic Search

We calculate error rates using opportunistic replicate samples in the microsatellite data for bowhead whales. The estimated rate (1%/genotype) falls within normal ranges reviewed in this paper. The results of a jackknife analysis identified five individuals that were highly influential on estimates of Hardy-Weinberg equilibrium for four different markers. In each case, the influential individual was homozygous for a rare

Phillip A Morin; Richard G LeDuc; Eric Archer; Karen K Martien; Barbara L Taylor; Ryan Huebinger; John W. Bickham

119

Consistent covariance matrix estimation in probit models with autocorrelated errors  

Microsoft Academic Search

Some recent time-series applications use probit models to measure the forecasting power of a set of variables. Correct inferences about the significance of the variables requires a consistent estimator of the covariance matrix of the estimated model coefficients. A potential source of inconsistency in maximum likelihood standard errors is serial correlation in the underlying disturbances, which may arise, for example,

Arturo Estrella; Anthony P. Rodrigues

1998-01-01

120

Sequential point estimation in regression models with nonnormal errors  

Microsoft Academic Search

Sequential point estimation of the parameters of a regression model using least squares estimates and subject to the loss function of Mukhopadhyay (1974) and Finster (1983) is considered. Asymptotic risk efficiency is obtained under the assumption that the errors have finite moments of order higher than two and under a mild condition on the design matrix. Second order expansions for

Adam T. Martinsek

1990-01-01

121

Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis  

ERIC Educational Resources Information Center

Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

Sass, Daniel A.

2010-01-01

122

Modeling Radar Rainfall Estimation Uncertainties: Random Error Model  

E-print Network

Precipitation is a major input in hydrological models. Radar rainfall data provide spatially distributed estimates of rainfall rates. Parameters of the model are estimated using the maximum likelihood method in order

AghaKouchak, Amir

123

Some A Posteriori Error Estimators for Elliptic Partial Differential Equations  

Microsoft Academic Search

We present three new a posteriori error estimators in the energy norm for finite element solutions to elliptic partial differential equations. The estimators are based on solving local Neumann problems in each element. The estimators differ in how they enforce consistency of the Neumann problems. We prove that as the mesh size decreases, under suitable assumptions, two of the estimators approach upper bounds on

Randolph E. Bank; Alan Weiser

1985-01-01

124

Efficient Semiparametric Estimators for Biological, Genetic, and Measurement Error Applications  

E-print Network

Excerpt from the simulation tables: for two parameter settings and different sample sizes n (the conditional correlation between T and W given Z ranging from 0.11 to 0.27), the tables report bias, sample standard error (se), median of estimated standard errors, and 95% coverage probability (CI) for the unadjusted estimator and the augmented (adjusted) estimator.

Garcia, Tanya

2012-10-19

125

Sensitivity analysis of DOA estimation algorithms to sensor errors  

NASA Astrophysics Data System (ADS)

A unified statistical performance analysis using subspace perturbation expansions is applied to subspace-based algorithms for direction-of-arrival (DOA) estimation in the presence of sensor errors. In particular, the multiple signal classification (MUSIC), min-norm, state-space realization (TAM and DDA) and estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithms are analyzed. This analysis assumes that only a finite amount of data is available. An analytical expression for the mean-squared error of the DOA estimates is developed for theoretical comparison in a simple and self-contained fashion. The tractable formulas provide insight into the algorithms. Simulation results verify the analysis.

Li, Fu; Vaccaro, Richard J.

1992-07-01

126

Adaptive Error Estimation in Linearized Ocean General Circulation Models  

NASA Technical Reports Server (NTRS)

Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), applied to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words there exist a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.

Chechelnitsky, Michael Y.

1999-01-01

127

Estimating model error covariance matrix parameters in extended Kalman filtering  

NASA Astrophysics Data System (ADS)

The extended Kalman filter (EKF) is a popular state estimation method for nonlinear dynamical models. The model error covariance matrix is often seen as a tuning parameter in EKF, which is often simply postulated by the user. In this paper, we study the filter likelihood technique for estimating the parameters of the model error covariance matrix. The approach is based on computing the likelihood of the covariance matrix parameters using the filtering output. We show that (a) the importance of the model error covariance matrix calibration depends on the quality of the observations, and that (b) the estimation approach yields a well-tuned EKF in terms of the accuracy of the state estimates and model predictions. For our numerical experiments, we use the two-layer quasi-geostrophic model that is often used as a benchmark model for numerical weather prediction.

Solonen, A.; Hakkarainen, J.; Ilin, A.; Abbas, M.; Bibov, A.

2014-09-01
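
The filter likelihood technique mentioned above scores covariance-matrix parameters through the prediction-error decomposition of the likelihood. A minimal sketch, assuming the innovations v_t and innovation covariances S_t have already been collected from a filter run with the candidate parameters; the function name is hypothetical.

    import numpy as np

    def filter_log_likelihood(innovations, innovation_covs):
        # Prediction-error decomposition: log p(y|theta) = sum_t log N(v_t; 0, S_t).
        ll = 0.0
        for v, S in zip(innovations, innovation_covs):
            _, logdet = np.linalg.slogdet(2.0 * np.pi * S)
            ll -= 0.5 * (logdet + v @ np.linalg.solve(S, v))
        return ll  # maximize over the model error covariance parameters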

128

Analysis of systematic error in “bead method” measurements of meteorite bulk volume and density  

NASA Astrophysics Data System (ADS)

The Archimedean glass bead method for determining meteorite bulk density has become widely applied. We used well characterized, zero-porosity quartz and topaz samples to determine the systematic error in the glass bead method to support bulk density measurements of meteorites for our ongoing meteorite survey. Systematic error varies according to bead size, container size and settling method, but in all cases is less than 3%, and generally less than 2%. While measurements using larger containers (above 150 cm³) exhibit no discernible systematic error but much reduced precision, higher precision measurements with smaller containers do exhibit systematic error. For a 77 cm³ container using 40-80 µm diameter beads, the systematic error is effectively eliminated within measurement uncertainties when a "secured shake" settling method is employed in which the container is held securely to the shake platform during a 5 s period of vigorous shaking. For larger 700-800 µm diameter beads using the same method, bulk volumes are uniformly overestimated by 2%. Other settling methods exhibit sample-volume-dependent biases. For all methods, reliability of measurement is severely reduced for samples below ~5 cm³ (10-15 g for typical meteorites), providing a lower-limit selection criterion for measurement of meteoritical samples.

Macke S. J., Robert J.; Britt, Daniel T.; Consolmagno S. J., Guy J.

2010-02-01

129

Impact of radar systematic error on the orthogonal frequency division multiplexing chirp waveform orthogonality  

NASA Astrophysics Data System (ADS)

Orthogonal frequency division multiplexing (OFDM) chirp waveform, which is composed of two successive identical linear frequency modulated subpulses, is a newly proposed orthogonal waveform scheme for multi-input multi-output synthetic aperture radar (SAR) systems. However, according to the waveform model, radar systematic error, which introduces phase or amplitude difference between the subpulses of the OFDM waveform, significantly degrades the orthogonality. The impact of radar systematic error on the waveform orthogonality is mainly caused by the systematic nonlinearity rather than the thermal noise or the frequency-dependent systematic error. Due to the influence of the causal filters, the first subpulse leaks into the second one. The leaked signal interacts with the second subpulse in the nonlinear components of the transmitter. This interaction renders a dramatic phase distortion in the beginning of the second subpulse. The resultant distortion, which leads to a phase difference between the subpulses, seriously damages the waveform's orthogonality. The impact of radar systematic error on the waveform orthogonality is addressed. Moreover, the impact of the systematic nonlinearity on the waveform is avoided by adding a standby between the subpulses. Theoretical analysis is validated by practical experiments based on a C-band SAR system.

Wang, Jie; Liang, Xingdong; Chen, Longyong; Ding, Chibiao

2015-01-01

130

Transfer Alignment Error Compensator Design Based on Robust State Estimation  

NASA Astrophysics Data System (ADS)

This paper examines the transfer alignment problem of the StrapDown Inertial Navigation System (SDINS), which is subject to the ship's roll and pitch. Major error sources for velocity and attitude matching are the lever arm effect, measurement time delay and ship-body flexure. To reduce these alignment errors, an error compensation method based on state augmentation and robust state estimation is devised. A linearized error model for the velocity and attitude matching transfer alignment system is derived first by linearizing the nonlinear measurement equation with respect to its time delay and dominant Y-axis flexure, and by augmenting the delay state and flexure state into conventional linear state equations. Then an H∞ filter is introduced to account for modeling uncertainties of time delay and the ship-body flexure. The simulation results show that this method considerably decreases azimuth alignment errors.

Lyou, Joon; Lim, You-Chol

131

A posteriori error estimates for boundary element methods  

Microsoft Academic Search

This paper presents a posteriori error estimates for the hp-version of the boundary element method. We discuss two first kind integral operator equations, namely Symm's integral equation and the integral equation with a hypersingular operator. The computable upper error bounds indicate an algorithm for the automatic hp-adaptive mesh-refinement. The efficiency of this method is shown by numerical experiments yielding almost optimal convergence even in

Carsten Carstensen; Ernst P. Stephan

1995-01-01

132

Geodynamo model and error parameter estimation using geomagnetic data assimilation  

NASA Astrophysics Data System (ADS)

We have developed a new geomagnetic data assimilation approach which uses the 'minimum variance' estimate for the analysis state, and which models both the forecast (or model output) and observation errors using an empirical approach and parameter tuning. This system is used in a series of assimilation experiments using Gauss coefficients (hereafter referred to as observational data) from the GUFM1 and CM4 field models for the years 1590-1990. We show that this assimilation system could be used to improve our knowledge of model parameters, model errors and the dynamical consistency of observation errors, by comparing forecasts of the magnetic field with the observations every 20 yr. Statistics of differences between observation and forecast (O - F) are used to determine how forecast accuracy depends on the Rayleigh number, forecast error correlation length scale and an observation error scale factor. Experiments have been carried out which demonstrate that a Rayleigh number of 30 times the critical Rayleigh number produces better geomagnetic forecasts than lower values, with an Ekman number of E = 1.25 × 10⁻⁶, which produces a modified magnetic Reynolds number within the parameter domain with an 'Earth like' geodynamo. The optimal forecast error correlation length scale is found to be around 90 per cent of the thickness of the outer core, indicating a significant bias in the forecasts. Geomagnetic forecasts are also found to be highly sensitive to estimates of modelled observation errors: errors that are too small do not lead to the gradual reduction in forecast error with time that is generally expected in a data assimilation system, while observation errors that are too large lead to model divergence. Finally, we show that assimilation of L ≤ 3 (or large scale) Gauss coefficients can help to improve forecasts of the L > 5 (smaller scale) coefficients, and that these improvements are the result of corrections to the velocity field in the geodynamo model.

Tangborn, Andrew; Kuang, Weijia

2015-01-01

133

Verification of unfold error estimates in the unfold operator code  

SciTech Connect

Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.

Fehl, D.L.; Biggs, F. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)]

1997-01-01

134

Verification of unfold error estimates in the unfold operator code  

NASA Astrophysics Data System (ADS)

Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.

Fehl, D. L.; Biggs, F.

1997-01-01
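
The Monte Carlo check described in the two records above is straightforward to reproduce in outline: perturb the data with Gaussian deviates of the assumed imprecision, re-run the unfold, and take the spread of the results. In the sketch below, unfold is a stand-in for the unfolding code, and the function name and defaults are hypothetical.

    import numpy as np

    def mc_unfold_uncertainty(unfold, data, rel_sigma=0.05, n_sets=100, seed=0):
        rng = np.random.default_rng(seed)
        # Generate perturbed data sets (Gaussian deviates, e.g. 5% imprecision)
        # and collect the corresponding unfolded spectra.
        trials = [unfold(data * (1.0 + rel_sigma * rng.standard_normal(data.shape)))
                  for _ in range(n_sets)]
        # Spread of the unfolded spectra, to compare with the error-matrix estimate.
        return np.std(trials, axis=0)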

135

ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction  

SciTech Connect

The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed.

Wu Yan; Shannon, Mark A. [Department of Mechanical and Industrial Engineering, University of Illinois at Urbana-Champaign, 1206 West Green Street, Urbana, Illinois 61801 (United States)

2006-04-15

136

The impact of orbital errors on the estimation of satellite clock errors and PPP  

NASA Astrophysics Data System (ADS)

Precise satellite orbit and clocks are essential for providing high accuracy real-time PPP (Precise Point Positioning) service. However, by treating the predicted orbits as fixed, the orbital errors may be partially assimilated by the estimated satellite clock and hence impact the positioning solutions. This paper presents the impact analysis of errors in radial and tangential orbital components on the estimation of satellite clocks and PPP through theoretical study and experimental evaluation. The relationship between the compensation of the orbital errors by the satellite clocks and the satellite-station geometry is discussed in detail. Based on the satellite clocks estimated with regional station networks of different sizes (~100, ~300, ~500 and ~700 km in radius), results indicated that the orbital errors compensated by the satellite clock estimates reduce as the size of the network increases. An interesting regional PPP mode based on the broadcast ephemeris and the corresponding estimated satellite clocks is proposed and evaluated through the numerical study. The impact of orbital errors in the broadcast ephemeris has been shown to be negligible for PPP users in a regional network of a radius of ~300 km, with positioning RMS of about 1.4, 1.4 and 3.7 cm for the east, north and up components in the post-mission kinematic mode, comparable with 1.3, 1.3 and 3.6 cm using the precise orbits and the corresponding estimated clocks. Compared with DGPS and RTK positioning, only the estimated satellite clocks need to be disseminated to PPP users for this approach. It can significantly alleviate the communication burdens and therefore can be beneficial to real time applications.

Lou, Yidong; Zhang, Weixing; Wang, Charles; Yao, Xiuguang; Shi, Chuang; Liu, Jingnan

2014-10-01

137

Variance estimation for systematic designs in spatial surveys.  

PubMed

In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940

Fewster, R M

2011-12-01

138

Error Estimation for the Linearized Auto-Localization Algorithm  

PubMed Central

The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter ? is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

2012-01-01

139

Error estimation for the linearized auto-localization algorithm.  

PubMed

The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter ? is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

Guevara, Jorge; Jiménez, Antonio R; Prieto, Jose Carlos; Seco, Fernando

2012-01-01
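
The first-order Taylor propagation underlying the LAL error estimate in the two records above is the standard delta method. A generic sketch, not the paper's specific trilateration equations: the Jacobian is approximated by finite differences, and the function name is hypothetical.

    import numpy as np

    def propagate_cov(f, x0, cov_x, eps=1e-6):
        # First-order Taylor (delta method): cov_y ~ J cov_x J^T.
        y0 = np.atleast_1d(f(x0))
        J = np.empty((y0.size, x0.size))
        for j in range(x0.size):
            dx = np.zeros_like(x0)
            dx[j] = eps
            J[:, j] = (np.atleast_1d(f(x0 + dx)) - y0) / eps  # finite-difference column
        return J @ cov_x @ J.T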

140

A precise error bound for quantum phase estimation.  

PubMed

Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the extra added qubits to improve reliability goes to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required in order to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers. PMID:21573006

Chappell, James M; Lohe, Max A; von Smekal, Lorenz; Iqbal, Azhar; Abbott, Derek

2011-01-01

141

A novel approach to an old problem: analysis of systematic errors in two models of recognition memory.  

PubMed

For more than a decade, the high threshold dual process (HTDP) model has served as a guide for studying the functional neuroanatomy of recognition memory. The HTDP model's utility has been that it provides quantitative estimates of recollection and familiarity, two processes thought to support recognition ability. Important support for the model has been the observation that it fits experimental data well. The continuous dual process (CDP) model also fits experimental data well. However, this model does not provide quantitative estimates of recollection and familiarity, making it less immediately useful for illuminating the functional neuroanatomy of recognition memory. These two models are incompatible and cannot both be correct, and an alternative method of model comparison is needed. We tested for systematic errors in each model's ability to fit recognition memory data from four independent data sets from three different laboratories. Across participants and across data sets, the HTDP model (but not the CDP model) exhibited systematic error. In addition, the pattern of errors exhibited by the HTDP model was predicted by the CDP model. We conclude that the CDP model provides a better account of recognition memory than the HTDP model. PMID:24184486

Dede, Adam J O; Squire, Larry R; Wixted, John T

2014-01-01

142

Error Minimizing Jammer Localization Through Smart Estimation of Ambient Noise  

E-print Network

We address the problem of localizing a jammer. Prior work relies on indirect measurements derived from jamming effects; the jamming signal may share the same spectrum with different communication technologies, e.g., cordless phones and Wi-Fi networks

Xu, Wenyuan

143

Note: Statistical errors estimation for Thomson scattering diagnostics  

SciTech Connect

A practical way of estimating statistical errors of a Thomson scattering diagnostic measuring plasma electron temperature and density is described. Analytically derived expressions are successfully tested with Monte Carlo simulations and implemented in an automatic data processing code of the JET LIDAR diagnostic.

Maslov, M.; Beurskens, M. N. A.; Flanagan, J.; Kempenaars, M. [EURATOM-CCFE Fusion Association, Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Collaboration: JET-EFDA Contributors

2012-09-15

144

Condition and Error Estimates in Numerical Matrix Computations  

SciTech Connect

This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

Konstantinov, M. M. [University of Architecture, Civil Engineering and Geodesy, 1046 Sofia (Bulgaria); Petkov, P. H. [Technical University of Sofia, 1000 Sofia (Bulgaria)

2008-10-30

145

ERROR ESTIMATES IN LEARNING THEORY WITH EFFECTIVE DIMENSIONALITY  

E-print Network

Given a sample z = {(xᵢ, yᵢ)}, i = 1, ..., m, drawn independently according to ρ, our task is to construct functions f_z, using the finite sample z, such that lim_{m→∞} ||f_z − f||² = 0 with high probability. 1. Algorithm: let K : X × X → R be a kernel and set f_{z,λ} = arg min_{f ∈ H_K} (1/m) Σᵢ₌₁ᵐ (f(xᵢ) − yᵢ)² + λ||f||²_K. 2. Error Estimates with Effective

Minh, Ha Quang
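
The regularized least-squares scheme in the excerpt above is kernel ridge regression. By the representer theorem, the minimizer is f(x) = Σᵢ cᵢ K(xᵢ, x) with coefficients solving (K + λmI)c = y, which the sketch below computes directly; all names are hypothetical.

    import numpy as np

    def kernel_ridge(X, y, lam, kernel):
        m = len(y)
        # Gram matrix K_ij = kernel(x_i, x_j).
        K = np.array([[kernel(a, b) for b in X] for a in X])
        # Representer theorem: coefficients satisfy (K + lam*m*I) c = y.
        c = np.linalg.solve(K + lam * m * np.eye(m), y)
        return lambda x: sum(ci * kernel(xi, x) for ci, xi in zip(c, X))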

146

Estimating Filtering Errors Using the Peano Kernel Theorem  

SciTech Connect

The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

Jerome Blair

2009-02-20

147

Medication errors in paediatric care: a systematic review of epidemiology and an evaluation of evidence supporting reduction strategy recommendations  

Microsoft Academic Search

Background: Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children.Objective: To synthesise peer reviewed knowledge on children’s medication errors and on recommendations to improve paediatric medication safety by a systematic

Marlene R Miller; Karen A Robinson; Lisa H Lubomski; Michael L Rinke; Peter J Pronovost

2007-01-01

148

Analysis of systematic errors in calibrating glass electrodes with H + as a concentration probe  

Microsoft Academic Search

Two different experimental methods for the calibration of glass electrodes for H+ concentration are analysed: strong acid—strong base titration and addition of the strong base or acid to the solvent. Possible systematic errors in acid or base concentrations used for calibration affect in a different way the linear relationship between the cell potential and the logarithm of the hydrogen ion

S. Fiol; F. Arce; X. L. Armesto; F. Penedo; M. Sastre de Vicente

1992-01-01

149

Barcode Medication Administration System (BCMA) Errors – A Systematic Review  

E-print Network

Implementation of Barcode Medication Administration (BCMA) improves the accuracy of administration of medication. These systems can improve medication safety by ensuring that correct medication

Zhou, Yaoqi

150

Systematic Errors in Global Radiosonde Precipitable Water Data from Comparisons with Ground-Based GPS Measurements  

E-print Network

A global dataset of precipitable water was produced from ground-based global positioning system (GPS) measurements of zenith tropospheric delay (ZTD) at approximately 350 International Global Navigation Satellite Systems (GNSS) Service (IGS) ground stations

Wang, Junhong

151

Gross error detection and stage efficiency estimation in a separation process  

SciTech Connect

Accurate process models are required for optimization and control in chemical plants and petroleum refineries. These models involve various equipment parameters, such as stage efficiencies in distillation columns, the values of which must be determined by fitting the models to process data. Since the data contain random and systematic measurement errors, some of which may be large (gross errors), they must be reconciled to obtain reliable estimates of equipment parameters. The problem thus involves parameter estimation coupled with gross error detection and data reconciliation. MacDonald and Howat (1988) studied the above problem for a single-stage flash distillation process. Their analysis was based on the definition of stage efficiency due to Hausen, which has some significant disadvantages in this context, as discussed below. In addition, they considered only data sets which contained no gross errors. The purpose of this article is to extend the above work by considering alternative definitions of stage efficiency and efficiency estimation in the presence of gross errors.

Serth, R.W.; Srikanth, B. (Texas A and M Univ., Kingsville, TX (United States). Dept. of Chemical and Natural Gas Engineering); Maronga, S.J. (Univ. of Dar-Es-Salaam (Tanzania, United Republic of). Dept. of Chemical and Process Engineering)

1993-10-01

152

A Systematic Approach for Model-Based Aircraft Engine Performance Estimation  

NASA Technical Reports Server (NTRS)

A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.

Simon, Donald L.; Garg, Sanjay

2010-01-01

153

DEB: definite error bounded tangent estimator for digital curves.  

PubMed

We propose a simple and fast method for tangent estimation of digital curves. This geometric-based method uses a small local region for tangent estimation and has a definite upper bound error for continuous as well as digital conics, i.e., circles, ellipses, parabolas, and hyperbolas. Explicit expressions of the upper bounds for continuous and digitized curves are derived, which can also be applied to nonconic curves. Our approach is benchmarked against 72 contemporary tangent estimation methods and demonstrates good performance for conic, nonconic, and noisy curves. In addition, we demonstrate a good multigrid and isotropic performance and low computational complexity of O(1) and better performance than most methods in terms of maximum and average errors in tangent computation for a large variety of digital curves. PMID:25122569
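
To make the idea of small-local-region tangent estimation concrete, here is a minimal Python sketch of a generic chord-based estimator checked against the analytic tangent of a digitized circle. This is not the authors' DEB method; the function name, neighbourhood size k, and test setup are illustrative assumptions.

```python
import numpy as np

def chord_tangent(points, idx, k=5):
    """Estimate the tangent direction at points[idx] from the chord joining
    the k-th neighbours on either side along the digital curve.
    A generic small-local-region estimator, not the paper's DEB method."""
    n = len(points)
    p_prev = points[(idx - k) % n]
    p_next = points[(idx + k) % n]
    chord = p_next - p_prev
    return np.arctan2(chord[1], chord[0])  # tangent angle in radians

# Check against the analytic tangent of a digitized circle.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
circle = np.round(100 * np.column_stack((np.cos(t), np.sin(t))))  # digitized
i = 50
est = chord_tangent(circle, i)
true = t[i] + np.pi / 2                            # analytic tangent angle
err = np.abs(np.angle(np.exp(1j * (est - true))))  # wrapped angular error
print(f"tangent error: {np.degrees(err):.3f} deg")
```

The appeal of such estimators, and the point the paper formalizes, is that the local region gives O(1) cost per point and, for conics, an error that can be bounded explicitly.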

Prasad, Dilip K; Leung, Maylor K H; Quek, Chai; Brown, Michael S

2014-10-01

154

Discretization error estimation and exact solution generation using the method of nearby problems.  

SciTech Connect

The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
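
A minimal 1-D sketch of the MNP/defect-correction idea is given below, assuming a Poisson model problem rather than the paper's Burgers/Euler/Navier-Stokes cases: fit a spline to the numerical solution, treat the spline as the exact solution of a "nearby" problem whose source term follows analytically from the spline, re-solve on the same grid, and use the nearby problem's (exactly known) discretization error as the estimate.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def solve_poisson_fd(f_vals, h):
    """Second-order finite-difference solve of -u'' = f with u(0) = u(1) = 0."""
    n = len(f_vals)  # number of interior grid points
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    return np.linalg.solve(A, f_vals)

N = 17
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = lambda s: np.pi**2 * np.sin(np.pi * s)   # original source term
u_exact = np.sin(np.pi * x)                  # known here only to check the estimate

# 1) Numerical solution of the original problem.
u_h = np.zeros(N)
u_h[1:-1] = solve_poisson_fd(f(x[1:-1]), h)

# 2) Spline fit of u_h: the exact solution of the "nearby" problem.
u_tilde = CubicSpline(x, u_h)

# 3) Nearby source term, obtained analytically from the spline.
f_tilde = -u_tilde(x[1:-1], 2)               # second derivative of the spline

# 4) Numerical solution of the nearby problem on the same grid.
v_h = np.zeros(N)
v_h[1:-1] = solve_poisson_fd(f_tilde, h)

# 5) The nearby problem's discretization error is known exactly and serves
#    as the estimate of the original problem's discretization error.
est_error = v_h - u_tilde(x)
true_error = u_h - u_exact
print(f"estimated |error| = {np.abs(est_error).max():.2e}, "
      f"true |error| = {np.abs(true_error).max():.2e}")
```

Note that, as the abstract states, only one additional solve on the same grid is needed, in contrast to Richardson extrapolation's systematically refined grids.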

Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)

2011-10-01

155

Error estimates and specification parameters for functional renormalization  

SciTech Connect

We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.

Schnoerr, David [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)] [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); Boettcher, Igor, E-mail: I.Boettcher@thphys.uni-heidelberg.de [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)] [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); Pawlowski, Jan M. [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany) [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt (Germany); Wetterich, Christof [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)] [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany)

2013-07-15

156

An Anisotropic A posteriori Error Estimator for CFD  

NASA Astrophysics Data System (ADS)

In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina) giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case, shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed amount of elements.

Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

157

Improved bit error rate estimation over experimental optical wireless channels  

NASA Astrophysics Data System (ADS)

As a part of the EU-FP7 R&D programme, the OMEGA project (hOME Gigabit Access) aims at bridging the gap between wireless terminals and the wired backbone network in homes, providing high bit rate connectivity to users. Besides radio frequencies, the wireless links will use Optical Wireless (OW) communications. To guarantee high performance and quality of service in real-time, our system needs techniques to approximate the Bit Error Probability (BEP) with a reasonable training sequence. Traditionally, the BEP is approximated by the Bit Error Rate (BER) measured by counting the number of errors within a given sequence of bits. For small BERs, the required sequences are huge and may prevent real-time estimation. In this paper, methods to estimate BER using Probability Density Function (PDF) estimation are presented. Two a posteriori techniques based on the Parzen estimator or constrained Gram-Charlier series expansion are adapted and applied to OW communications. Aided by simulations, comparisons are made over experimental optical channels. We show that, for different scenarios, such as optical multipath distortion or a well designed Code Division Multiple Access (CDMA) system, this approach outperforms the counting method and yields better results with a relatively small training sequence.
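
The Parzen variant of this idea admits a very compact sketch: fit a Gaussian-kernel density to the receiver's soft outputs for each transmitted bit and integrate the density tails past the decision threshold analytically. The channel model, threshold, and bandwidth rule below are illustrative assumptions, not taken from the paper (the Gram-Charlier variant is omitted).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Soft receiver outputs for bits 0 and 1 (toy on-off keying channel).
noise_sigma = 0.45
y0 = rng.normal(0.0, noise_sigma, 2000)   # training samples, bit 0
y1 = rng.normal(1.0, noise_sigma, 2000)   # training samples, bit 1
threshold = 0.5

def parzen_tail(samples, t, upper=True):
    """P(Y > t) (or P(Y < t)) under a Parzen/Gaussian-kernel density fit;
    with Gaussian kernels the tail integral is a mean of Q-functions."""
    h = 1.06 * samples.std() * len(samples) ** -0.2   # Silverman bandwidth
    z = (t - samples) / h
    return norm.sf(z).mean() if upper else norm.cdf(z).mean()

# BEP = 0.5 P(error | bit 0) + 0.5 P(error | bit 1), from fitted densities.
bep = 0.5 * parzen_tail(y0, threshold, upper=True) \
    + 0.5 * parzen_tail(y1, threshold, upper=False)

# Reference: counting estimate from the same training sequence.
ber_count = 0.5 * np.mean(y0 > threshold) + 0.5 * np.mean(y1 < threshold)
print(f"Parzen BEP: {bep:.2e}   counting BER: {ber_count:.2e}")
```

The advantage claimed in the abstract shows up exactly here: the counting estimate needs many errors to stabilize, while the density-based estimate extrapolates into the tail from a short training sequence.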

El Tabach, Mamdouh; Saoudi, Samir; Tortelier, Patrick; Bouchet, Olivier; Pyndiah, Ramesh

2009-02-01

158

A case study of binary outcome data extraction across three systematic reviews of hip arthroplasty: errors and differences of selection  

PubMed Central

Background Data extraction is a key stage in systematic review, yet it is the subject of little research. The aim of the present research was to use a small case study to highlight some important issues affecting this fundamental process. Methods The authors undertook an analysis of differences in the binary event data extracted and analysed by three systematic reviews on the same topic: a comparison of total hip arthroplasty and hemiarthroplasty. The following binary event data were extracted for three key outcomes, common to all three reviews, from those trials common to all three reviews: dislocation rates, 1-year mortality, and revision rates. Differences between the data extracted by the three reviews were categorised as either errors or an issue of data selection. Meta-analysis was performed to assess whether these differences led to differences in summary estimates of effect. Results Across the three outcomes, differences in selection accounted for between 8% and 42% of the data differences between reviews, and errors accounted for between 8% and 17%. No rationale was given in any of these cases of selection for the choice of event data being reported. These differences did lead to small differences in meta-analysed relative risks between the two treatments in the three reviews, but none was significant. Conclusions Systematic reviewers should use double-data extraction to minimise error and also make every effort to clarify or explain their choice of data, within the scope of their publication. Reviewers frequently exercise selection when faced with a choice of alternative but potentially equally appropriate data for an outcome. However, this selection is rarely made clear by review authors. Systematic review was developed as a method specifically to be both reproducible and transparent. This case study suggests that neither objective is always being achieved. PMID:24344873

2013-01-01

159

Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics  

NASA Technical Reports Server (NTRS)

Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating numerical approximation error, computational model induced errors and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

2002-01-01

160

DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets  

SciTech Connect

Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
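
The core fit-and-subtract step can be sketched in a few lines. The snippet below uses a low-order polynomial of ppm error versus m/z on synthetic data as a stand-in; DtaRefinery's actual error model is richer (and multivariate), and all inputs here are hypothetical.

```python
import numpy as np

# Hypothetical inputs: observed parent m/z values and the ppm mass errors of
# confidently identified peptides (observed minus theoretical, in ppm).
rng = np.random.default_rng(0)
mz = np.sort(rng.uniform(400, 2000, 500))
true_drift = 2.0 + 0.004 * (mz - 400)          # synthetic systematic error
ppm_err = true_drift + rng.normal(0, 1.5, mz.size)

# Fit a smooth model of systematic error vs m/z (low-order polynomial here;
# the real tool fits a more flexible, multidimensional model).
coef = np.polyfit(mz, ppm_err, deg=2)
model = np.polyval(coef, mz)

# Remove the modeled systematic component from every parent ion mass:
# observed = true * (1 + e*1e-6)  =>  corrected ~= observed * (1 - e*1e-6).
mz_corrected = mz * (1.0 - model * 1e-6)

print(f"ppm spread before: {ppm_err.std():.2f}, "
      f"after: {(ppm_err - model).std():.2f}")
```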

Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

2009-12-16

161

Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation  

NASA Technical Reports Server (NTRS)

Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.

2013-01-01

162

Divergent estimation error in portfolio optimization and in linear regression  

NASA Astrophysics Data System (ADS)

The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges for a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition, it is accompanied by a number of critical phenomena, and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
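
The divergence is easy to reproduce numerically. The following sketch builds minimum-variance portfolios from sample covariance matrices with the true covariance set to the identity (an illustrative choice); the out-of-sample variance, relative to the optimum 1/N, blows up as N/T approaches 1.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                   # portfolio size
print(" N/T   out-of-sample variance / optimum")
for T in [500, 200, 100, 75, 60, 55]:    # sample sizes, keeping T > N
    ratios = []
    for _ in range(50):
        X = rng.standard_normal((T, N))          # returns, true covariance = I
        S = np.cov(X, rowvar=False)              # sample covariance estimate
        w = np.linalg.solve(S, np.ones(N))
        w /= w.sum()                             # minimum-variance weights
        ratios.append(N * (w @ w))               # w' Sigma w relative to 1/N
    print(f"{N/T:5.2f}   {np.mean(ratios):8.3f}")
```

As the abstract notes, the same N/T mechanism operates in any sufficiently similar multidimensional regression problem, so the blow-up is not specific to finance.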

Kondor, I.; Varga-Haszonits, I.

2008-08-01

163

Analysis and reduction of tropical systematic errors through a unified modelling strategy  

NASA Astrophysics Data System (ADS)

Systematic errors in climate models are usually addressed in a number of ways, but current methods often make use of model climatological fields as a starting point for model modification. This approach has limitations due to non-linear feedback mechanisms which occur over longer timescales and make the source of the errors difficult to identify. In a unified modelling environment, short-range (1-5 day) weather forecasts are readily available from NWP models with very similar dynamical and physical formulations to the climate models, but often increased horizontal (and vertical) resolution. Where such forecasts exhibit similar systematic errors to their climate model counterparts, there is much to be gained from combined analysis and sensitivity testing. For example, the Met Office Hadley Centre climate model HadGEM1 (Johns et al., 2007) exhibits precipitation errors in the Asian summer monsoon, with too little rainfall over the Indian peninsula and too much over the equatorial Indian Ocean to the southwest of the peninsula (Martin et al., 2004). Examination of the development of precipitation errors in the Asian summer monsoon region in Met Office NWP forecasts shows that different parts of the error pattern evolve on different timescales. Excessive rainfall over the equatorial Indian Ocean to the southwest of the Indian peninsula develops rapidly, over the first day or two of the forecast, while a dry bias over the Indian land area takes ~10 days to develop. Such information is invaluable for understanding the processes involved and how to tackle them. Other examples of the use of this approach will be discussed, including analysis of the sensitivity of the representation of the Madden-Julian Oscillation (MJO) to the convective parametrisation, and the reduction of systematic tropical temperature and moisture biases in both climate and NWP models through improved representation of convective detrainment.

Copsey, D.; Marshall, A.; Martin, G.; Milton, S.; Senior, C.; Sellar, A.; Shelly, A.

2009-04-01

164

Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons  

PubMed Central

In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379

Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

2012-01-01

165

What is the scale of prescribing errors committed by junior doctors? A systematic review  

PubMed Central

AIMS Prescribing errors are an important cause of patient safety incidents, generally considered to be made more frequently by junior doctors, but prevalence and causality are unclear. In order to inform the design of an educational intervention, a systematic review of the literature on prescribing errors made by junior doctors was undertaken. METHODS Searches were undertaken using the following databases: MEDLINE; EMBASE; Science and Social Sciences Citation Index; CINAHL; Health Management Information Consortium; PsychINFO; ISI Proceedings; The Proceedings of the British Pharmacological Society; Cochrane Library; National Research Register; Current Controlled Trials; and Index to Theses. Studies were selected if they reported prescribing errors committed by junior doctors in primary or secondary care, were in English, published since 1990 and undertaken in Western Europe, North America or Australasia. RESULTS Twenty-four studies meeting the inclusion criteria were identified. The range of error rates was 2–514 per 1000 items prescribed and 4.2–82% of patients or charts reviewed. Considerable variation was seen in design, methods, error definitions and error rates reported. CONCLUSIONS The review reveals a widespread problem that does not appear to be associated with different training models, healthcare systems or infrastructure. There was a range of designs, methods, error definitions and error rates, making meaningful conclusions difficult. No definitive study of prescribing errors has yet been conducted, and is urgently needed to provide reliable baseline data for interventions aimed at reducing errors. It is vital that future research is well constructed and generalizable using standard definitions and methods. PMID:19094162

Ross, Sarah; Bond, Christine; Rothnie, Helen; Thomas, Sian; Macleod, Mary Joan

2009-01-01

166

Treatment of systematic errors in the processing of wide angle sonar sensor data for robotic navigation  

SciTech Connect

A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction from wide-angle sonar sensor data of navigation maps for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.
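
The four-valued labelling and its combination logic lend themselves to a compact sketch. The combination rules below are an assumed lattice-style join (unknown below empty/occupied, conflict on top); the paper defines its own logic and conflict-relabelling analyses, which are not reproduced here.

```python
# Four-valued occupancy labels with a simple combination logic.
UNKNOWN, EMPTY, OCCUPIED, CONFLICT = "unknown", "empty", "occupied", "conflict"

def combine(old, new):
    """Merge an existing cell label with newly acquired sonar evidence.
    Assumed rules: unknown defers to anything; agreement is kept;
    empty vs. occupied (or anything vs. conflict) yields conflict."""
    if old == new or new == UNKNOWN:
        return old
    if old == UNKNOWN:
        return new
    return CONFLICT

# Rapidly updating a small navigation map as sonar readings arrive:
grid = {(0, 0): UNKNOWN, (0, 1): EMPTY, (1, 0): OCCUPIED}
readings = [((0, 0), OCCUPIED), ((0, 1), OCCUPIED), ((1, 0), OCCUPIED)]
for cell, label in readings:
    grid[cell] = combine(grid[cell], label)
print(grid)   # (0, 1) becomes 'conflict', to be relabelled downstream
```

In the paper's methodology, it is precisely the cells marked conflict that are then re-examined: most are relabelled from characteristic conflict patterns, the rest by a consistent-labelling condition.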

Beckerman, M.; Oblow, E.M.

1988-04-01

167

Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors  

PubMed Central

Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, with a view to lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio between the correlation distance and the distance separating the Reference Stations (RSs), the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
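
A minimal sketch of the experiment's core machinery follows: simulate a zero-mean Gaussian DC field with a first-order Gauss-Markov covariance, then form the LMMSE estimate at a rover location under both matched and mismatched correlation distances. The geometry, variances, and correlation distances are illustrative placeholders, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Reference-station positions (1-D, km) and a rover position to estimate at.
rs = np.array([0.0, 15.0, 40.0, 70.0, 110.0])
rover = 55.0
sigma_dc, sigma_n = 0.5, 0.1          # DC field std (m) and measurement noise
d_true = 60.0                         # true correlation distance (km)

def gauss_markov_cov(a, b, dc):
    """First-order Gauss-Markov spatial covariance, sigma^2 * exp(-d/dc)."""
    return sigma_dc**2 * np.exp(-np.abs(a[:, None] - b[None, :]) / dc)

# Simulate one realization of the true DC field at all points.
pts = np.append(rs, rover)
C = gauss_markov_cov(pts, pts, d_true)
field = np.linalg.cholesky(C + 1e-12 * np.eye(len(pts))) @ rng.standard_normal(len(pts))
y = field[:-1] + rng.normal(0.0, sigma_n, len(rs))   # noisy RS measurements

def lmmse(dc_model):
    """LMMSE estimate of the rover DC under an assumed correlation distance."""
    Cyy = gauss_markov_cov(rs, rs, dc_model) + sigma_n**2 * np.eye(len(rs))
    Cxy = gauss_markov_cov(np.array([rover]), rs, dc_model)
    return (Cxy @ np.linalg.solve(Cyy, y)).item()

for dc in [d_true, 20.0, 5.0]:        # matched and mismatched models
    print(f"dc = {dc:5.1f} km  estimate = {lmmse(dc):+.3f}  truth = {field[-1]:+.3f}")
```

Running many such realizations and comparing estimation error against the injected noise reproduces the paper's qualitative finding: the LMMSE gain depends on the ratio of correlation distance to station separation.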

Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

2014-01-01

169

A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors  

NASA Technical Reports Server (NTRS)

The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

Larson, T. J.; Ehernberger, L. J.

1985-01-01

170

Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors  

NASA Technical Reports Server (NTRS)

Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections indicate that the random variations in the albedo follow a log-normal distribution quite well. In addition, this distribution appears to be independent of object size over a considerable range in size. Note that this relation appears to hold for debris only, where the shapes and other properties are not primarily the result of human manufacture, but of random processes. With this information in hand, it now becomes possible to estimate the actual size distribution we are sampling from. We have identified two characteristics of the space debris population that make this process tractable and by extension have developed a methodology for performing the transformation.
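
The magnitude-to-size transformation with a log-normal albedo model can be illustrated with a short Monte Carlo sketch. The photometric relation used (flux proportional to albedo times diameter squared) follows from the abstract's description, but every constant below, including the log-normal parameters, is an illustrative placeholder, not a value derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed model: flux ~ albedo * d^2, so for an absolute magnitude M
# (scaled to fixed range and phase angle),
#     d = d_ref * 10**(-0.2 * (M - M_ref)) / sqrt(albedo / A_ref).
d_ref, M_ref, A_ref = 1.0, 10.0, 0.1
mu_lnA, sigma_lnA = np.log(A_ref), 0.6        # log-normal albedo model

def size_samples(M, n=10000):
    """Monte Carlo sizes consistent with one absolute magnitude M."""
    albedo = rng.lognormal(mu_lnA, sigma_lnA, n)
    return d_ref * 10 ** (-0.2 * (M - M_ref)) / np.sqrt(albedo / A_ref)

# A population of measured absolute magnitudes maps to a statistical size
# distribution rather than to one size per object.
sizes = np.concatenate([size_samples(M, 2000) for M in (9.0, 10.0, 11.0)])
print(f"median size {np.median(sizes):.2f} m, "
      f"68% interval {np.percentile(sizes, [16, 84])}")
```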

Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

2007-01-01

171

Systematic Steps to Diminish Multi-Fold Medication Errors in Neonates  

PubMed Central

Tenfold and other multiple-of-dose errors are particularly common in the neonatal intensive care unit (NICU), where the fragility of the patients increases the potential for significant adverse outcomes. Such errors can originate at any of the sequential phases of the process, from medication ordering to administration. Each step of calculation, prescription writing, transcription, dose preparation, and administration is an opportunity for generating and preventing medication errors. A few simple principles and practical tips aimed at avoiding decimal and other multiple-dosing errors can be systematically implemented through the various steps of the process. The authors describe their experience with the implementation of techniques for error reduction in a NICU setting. The techniques described herein rely on simple, inexpensive technologies for information and automation, and on standardization and simplification of processes. They can be immediately adapted and applied in virtually any NICU and could be integrated into the development of computerized order entry systems appropriate to NICU settings. Either way, they should decrease the likelihood of undetected human error. PMID:23118682

Pinheiro, Joaquim M. B.; Mitchell, Amy L.; Lesar, Timothy S.

2003-01-01

172

Simultaneous Estimation of Photometric Redshifts and SED Parameters: Improved Techniques and a Realistic Error Budget  

NASA Astrophysics Data System (ADS)

We present the results of recent work seeking to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we show that if the uncertainties on the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters will be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We show that template incompleteness, a major cause of inaccuracy in this process, is "flagged" by a large fraction of outliers in redshift and that it can be corrected by using more flexible stellar population models. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multidimensional probability distribution function in SED fitting + z parameter space, including all correlations.

Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric J.

2015-01-01

173

Local error estimates for discontinuous solutions of nonlinear hyperbolic equations  

NASA Technical Reports Server (NTRS)

Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x,t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^{1,∞} energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an ε-uniform estimate on moments of the error u_ε − u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

Tadmor, Eitan

1989-01-01

174

Systematic error in mechanical measures of damage during four-point bending fatigue of cortical bone.  

PubMed

Accumulation of fatigue microdamage in cortical bone specimens is commonly measured by a modulus or stiffness degradation after normalizing tissue heterogeneity by the initial modulus or stiffness of each specimen measured during a preloading step. In the first experiment, the initial specimen modulus defined using linear elastic beam theory (LEBT) was shown to be nonlinearly dependent on the preload level, which subsequently caused systematic error in the amount and rate of damage accumulation measured by the LEBT modulus degradation. Therefore, the secant modulus is recommended for measurements of the initial specimen modulus during preloading. In the second experiment, different measures of mechanical degradation were directly compared and shown to result in widely varying estimates of damage accumulation during fatigue. After loading to 400,000 cycles, the normalized LEBT modulus decreased by 26% and the creep strain ratio decreased by 58%, but the normalized secant modulus experienced no degradation and histology revealed no significant differences in microcrack density. The LEBT modulus was shown to include the combined effect of both elastic (recovered) and creep (accumulated) strain. Therefore, at minimum, both the secant modulus and creep should be measured throughout a test to most accurately indicate damage accumulation and account for different damage mechanisms. Histology revealed indentation of tissue adjacent to roller supports, with significant sub-surface damage beneath large indentations, accounting for 22% of the creep strain on average. The indentation of roller supports resulted in inflated measures of the LEBT modulus degradation and creep. The results of this study suggest that investigations of fatigue microdamage in cortical bone should avoid the use of four-point bending unless no other option is possible. PMID:19394019

Landrigan, Matthew D; Roeder, Ryan K

2009-06-19

175

Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters  

ERIC Educational Resources Information Center

The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte Carlo methods.

Hoshino, Takahiro; Shigemasu, Kazuo

2008-01-01

176

Improved soundings and error estimates using AIRS/AMSU data  

NASA Astrophysics Data System (ADS)

AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in the minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

Susskind, Joel

2006-05-01

178

Interventions to reduce dosing errors in children: a systematic review of the literature.  

PubMed

Children are a particularly challenging group of patients when trying to ensure the safe use of medicines. The increased need for calculations, dilutions and manipulations of paediatric medicines, together with a need to dose on an individual patient basis using age, gestational age, weight and surface area, means that they are more prone to medication errors at each stage of the medicines management process. It is already known that dose calculation errors are the most common type of medication error in neonatal and paediatric patients. Interventions to reduce the risk of dose calculation errors are therefore urgently needed. A systematic literature review was conducted to identify published articles reporting interventions; 28 studies were found to be relevant. The main interventions found were computerised physician order entry (CPOE) and computer-aided prescribing. Most CPOE and computer-aided prescribing studies showed some degree of reduction in medication errors, with some claiming no errors occurring after implementation of the intervention. However, one study showed a significant increase in mortality after the implementation of CPOE. Further research is needed to investigate outcomes such as mortality and economics. Unit dose dispensing systems and educational/risk management programmes were also shown to reduce medication errors in children. Although it is suggested that 'smart' intravenous pumps can potentially reduce infusion errors in children, there is insufficient information to draw a conclusion because of a lack of research. Most interventions identified were US based, and since medicine management processes are currently different in different countries, there is a need to interpret the information carefully when considering implementing interventions elsewhere. PMID:18035864

Conroy, Sharon; Sweis, Dimah; Planner, Claire; Yeung, Vincent; Collier, Jacqueline; Haines, Linda; Wong, Ian C K

2007-01-01

179

Verification of unfold error estimates in the UFO code  

SciTech Connect

Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation), and 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
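
The Monte Carlo verification itself is generic and easy to sketch: perturb the data with the prescribed Gaussian deviates, unfold each perturbed set, and take the spread of the unfolded spectra as the uncertainty. The snippet below uses a Tikhonov-regularized least-squares unfold as a stand-in for UFO's algorithm, with a toy blackbody-like spectrum and 5% deviates mirroring the abstract's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy unfold problem: d = R s + noise, with overlapping response functions.
E = np.linspace(1.0, 30.0, 60)                      # spectral bins (keV)
centers = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # channel band centers
R = np.exp(-0.5 * ((E[None, :] - centers[:, None]) / 4.0) ** 2)
s_true = (E / 10.0) ** 3 / np.expm1(E / 10.0)       # blackbody-like, T = 10 keV

d_clean = R @ s_true
sigma = 0.05 * d_clean                              # 5% data imprecision

def unfold(d, lam=1e-4):
    """Regularized least-squares unfold (a stand-in for UFO's algorithm)."""
    A = R.T @ R + lam * np.eye(len(E))
    return np.linalg.solve(A, R.T @ d)

# Monte Carlo uncertainty: unfold many data sets drawn with Gaussian deviates.
trials = np.array([unfold(d_clean + rng.normal(0.0, sigma))
                   for _ in range(100)])
mc_mean, mc_std = trials.mean(axis=0), trials.std(axis=0)
peak = np.argmax(s_true)
print(f"relative unfold uncertainty at the spectral peak: "
      f"{mc_std[peak] / abs(mc_mean[peak]):.1%}")
```

As the abstract notes, this brute-force propagation remains applicable in underdetermined problems where the analytic error-matrix method does not.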

Fehl, D.L.; Biggs, F.

1996-07-01

180

A Bayesian Approach to Systematic Error Correction in Kepler Photometric Time Series  

NASA Astrophysics Data System (ADS)

In order for the Kepler mission to achieve its required 20 ppm photometric precision for 6.5 hr observations of 12th magnitude stars, the Presearch Data Conditioning (PDC) software component of the Kepler Science Processing Pipeline must reduce systematic errors in flux time series to the limit of stochastic noise for errors with time-scales less than three days, without smoothing or over-fitting away the transits that Kepler seeks. The current version of PDC co-trends against ancillary engineering data and Pipeline generated data using essentially a least squares (LS) approach. This approach is successful for quiet stars when all sources of systematic error have been identified. If the stars are intrinsically variable or some sources of systematic error are unknown, LS will nonetheless attempt to explain all of a given time series, not just the part the model can explain well. Negative consequences can include loss of astrophysically interesting signal, and injection of high-frequency noise into the result. As a remedy, we present a Bayesian Maximum A Posteriori (MAP) approach, in which a subset of intrinsically quiet and highly-correlated stars is used to establish the probability density function (PDF) of robust fit parameters in a diagonalized basis. The PDFs then determine a "reasonable" range for the fit parameters for all stars, and brake the runaway fitting that can distort signals and inject noise. We present a closed-form solution for Gaussian PDFs, and show examples using publicly available Quarter 1 Kepler data. A companion poster (Van Cleve et al.) shows applications and discusses current work in more detail. Kepler was selected as the 10th mission of the Discovery Program. Funding for this mission is provided by NASA, Science Mission Directorate.
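
The closed-form Gaussian MAP solution mentioned in the abstract is standard ridge-like algebra, sketched below on a toy cotrending problem. The basis vectors, prior, and signal are fabricated for illustration; this is the generic closed form, not the PDC pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy cotrending: flux = systematics (basis A) + astrophysical signal + noise.
T, K = 500, 4
t = np.linspace(0.0, 1.0, T)
A = rng.standard_normal((T, K))                 # ancillary basis vectors
theta_true = np.array([1.0, -0.5, 0.3, 0.0])
signal = 0.02 * np.sin(2 * np.pi * 7 * t)       # variability we must not fit away
y = A @ theta_true + signal + rng.normal(0.0, 0.01, T)

# Prior on fit coefficients, as if learned from quiet, highly correlated stars.
mu_p = np.array([0.9, -0.4, 0.25, 0.0])
Sig_p_inv = np.linalg.inv(np.diag([0.05] * K) ** 2)
sig_n = 0.01

# Closed-form Gaussian MAP (brakes the runaway LS fitting):
#   theta = (A'A/s^2 + Sp^-1)^-1 (A'y/s^2 + Sp^-1 mu_p)
P = np.linalg.inv(A.T @ A / sig_n**2 + Sig_p_inv)
theta_map = P @ (A.T @ y / sig_n**2 + Sig_p_inv @ mu_p)
theta_ls = np.linalg.lstsq(A, y, rcond=None)[0]  # plain least squares

corrected = y - A @ theta_map                    # systematics-removed series
print("MAP coefficients:", np.round(theta_map, 3))
print("LS  coefficients:", np.round(theta_ls, 3))
```

The prior pulls the fit toward coefficients seen in quiet stars, which is exactly the mechanism that protects intrinsic variability from being absorbed into the systematics model.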

Jenkins, Jon Michael; VanCleve, J.; Twicken, J. D.; Smith, J. C.; Kepler Science Team

2011-01-01

181

Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation  

NASA Technical Reports Server (NTRS)

Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
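
A minimal sketch of the equation-error approach follows, assuming a toy two-state linear model in place of the F-16 simulation: simulate responses, add noise, differentiate the measured states numerically, and regress the derivatives onto states and controls by least squares. All numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy short-period-like model xdot = A x + B u; true values are known, as with
# the simulated flight data used in the paper (numbers here are illustrative).
A = np.array([[-1.2, 0.95], [-4.0, -1.5]])
B = np.array([[-0.1], [-6.0]])
dt, T = 0.02, 1500
u = np.sin(0.8 * dt * np.arange(T))[:, None]     # elevator-like input
x = np.zeros((T, 2))
for k in range(T - 1):                           # simple Euler integration
    x[k + 1] = x[k] + dt * (A @ x[k] + B @ u[k])

z = x + rng.normal(0.0, 0.002, x.shape)          # noisy measured responses

# Equation error: regress numerically differentiated states on states/controls.
zdot = np.gradient(z, dt, axis=0)                # noisy time-series derivative
X = np.hstack([z, u])                            # regressors [x1, x2, u]
theta, *_ = np.linalg.lstsq(X, zdot, rcond=None)
print("estimated [A | B] (rows = state equations):\n", np.round(theta.T, 2))
```

Two of the paper's themes are visible even in this sketch: the result is sensitive to how the noisy series is differentiated, and noise in the regressors biases the least-squares estimates, which is why the paper compares against output-error estimates.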

Morelli, Eugene A.

2006-01-01

182

Error estimates for the Skyrme–Hartree–Fock model  

NASA Astrophysics Data System (ADS)

There are many complementary strategies to estimate the extrapolation errors of a model calibrated in least-squares fits. We consider the Skyrme–Hartree–Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, and dedicated variation of model parameters. This gives useful insight into the impact of the key fit data as they consist of binding energies, charge rms radii, and charge form factor. Amongst others, we check in particular the predictive value for observables in the stable nucleus 208Pb, the super-heavy element 266Hs, r-process nuclei, and neutron stars.

Erler, J.; Reinhard, P.-G.

2015-03-01

183

Estimating the coverage of mental health programmes: a systematic review  

PubMed Central

Background The large treatment gap for people suffering from mental disorders has led to initiatives to scale up mental health services. In order to track progress, estimates of programme coverage, and changes in coverage over time, are needed. Methods Systematic review of mental health programme evaluations that assess coverage, measured either as the proportion of the target population in contact with services (contact coverage) or as the proportion of the target population who receive appropriate and effective care (effective coverage). We performed a search of electronic databases and grey literature up to March 2013 and contacted experts in the field. Methods to estimate the numerator (service utilization) and the denominator (target population) were reviewed to explore methods which could be used in programme evaluations. Results We identified 15 735 unique records of which only seven met the inclusion criteria. All studies reported contact coverage. No study explicitly measured effective coverage, but it was possible to estimate this for one study. In six studies the numerator of coverage, service utilization, was estimated using routine clinical information, whereas one study used a national community survey. The methods for estimating the denominator, the population in need of services, were more varied and included national prevalence surveys, case registers, and estimates from the literature. Conclusions Very few coverage estimates are available. Coverage could be estimated at low cost by combining routine programme data with population prevalence estimates from national surveys. PMID:24760874

De Silva, Mary J; Lee, Lucy; Fuhr, Daniela C; Rathod, Sujit; Chisholm, Dan; Schellenberg, Joanna; Patel, Vikram

2014-01-01

184

The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors  

SciTech Connect

Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
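
For readers unfamiliar with the gamma index used above, a 1-D discrete version is sketched below on a toy profile with a 1 mm shift standing in for an MLC bank offset. The profile, criteria values, and shift are illustrative, not the study's data.

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_meas, d_meas, dd=0.03, dta=2.0):
    """1-D global gamma index: for each measured point, the minimum over the
    reference distribution of sqrt((dose diff/DD)^2 + (distance/DTA)^2).
    dd is the fractional dose criterion (of max reference dose); dta in mm."""
    dd_abs = dd * d_ref.max()
    dose_term = (d_meas[:, None] - d_ref[None, :]) / dd_abs
    dist_term = (x_meas[:, None] - x_ref[None, :]) / dta
    return np.sqrt(dose_term**2 + dist_term**2).min(axis=1)

# Toy profile: calculated vs. "delivered" with a 1 mm systematic shift.
x = np.arange(0.0, 100.0, 0.5)                    # positions (mm)
ref = np.exp(-0.5 * ((x - 50.0) / 12.0) ** 2)     # calculated dose
meas = np.exp(-0.5 * ((x - 51.0) / 12.0) ** 2)    # delivered, 1 mm offset

g = gamma_index_1d(x, ref, x, meas, dd=0.02, dta=2.0)
print(f"passing rate (gamma <= 1): {np.mean(g <= 1):.1%}")
```

The study's negative conclusion can be read directly off this construction: a small systematic shift moves few points past the combined dose/distance tolerance, so passing rates barely change.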

Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter [Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2, Canada and Department of Physics and Astronomy, University of Calgary, 2500 University Drive North West, Calgary, Alberta T2N 1N4 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Medical Physics, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, Alberta T2N 1N4 (Canada) and Department of Oncology, Tom Baker Cancer Centre, 1331-29 Street NW, Calgary, Alberta T2N 4N2 (Canada)

2010-07-15

185

Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers  

NASA Technical Reports Server (NTRS)

Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

2012-01-01

186

Estimating the error in simulation prediction over the design space  

SciTech Connect

This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.

Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)

2003-01-01

187

Convergence and error estimation in free energy calculations using the weighted histogram analysis method  

PubMed Central

The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this paper, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimal allocation of the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354
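
For orientation, the traditional fixed-point direct iteration that the paper takes as its baseline (and improves upon with likelihood maximization) can be sketched compactly. The double-well potential, window spacing, and Metropolis sampler below are toy assumptions; the two coupled WHAM equations themselves are standard.

```python
import numpy as np

rng = np.random.default_rng(6)

# Umbrella sampling on a double-well PMF with harmonic biases (toy data).
beta, kspr = 1.0, 50.0
centers = np.linspace(-1.5, 1.5, 16)              # window centers

def sample_window(x0, nsamp=2000):
    """Metropolis sampling of exp(-beta*(U(x) + bias(x))) in one window."""
    U = lambda s: (s**2 - 1.0) ** 2               # true PMF (unknown to WHAM)
    bias = lambda s: 0.5 * kspr * (s - x0) ** 2
    x, out = x0, []
    for _ in range(nsamp * 5):
        xn = x + rng.normal(0.0, 0.1)
        if rng.random() < np.exp(-beta * (U(xn) + bias(xn) - U(x) - bias(x))):
            x = xn
        out.append(x)
    return np.array(out[::5])

samples = [sample_window(c) for c in centers]

# WHAM by fixed-point direct iteration:
#   P(b) = sum_i n_i(b) / sum_i N_i exp(beta*(f_i - U_i(b)))
#   exp(-beta f_i) = sum_b P(b) exp(-beta U_i(b))
edges = np.linspace(-2.0, 2.0, 101)
mid = 0.5 * (edges[1:] + edges[:-1])
counts = np.array([np.histogram(s, edges)[0] for s in samples])
N = counts.sum(axis=1)
bias_b = 0.5 * kspr * (mid[None, :] - centers[:, None]) ** 2

f = np.zeros(len(centers))                        # window free energies
for _ in range(2000):
    denom = (N[:, None] * np.exp(beta * (f[:, None] - bias_b))).sum(axis=0)
    P = counts.sum(axis=0) / denom                # unbiased probability
    f_new = -np.log((P[None, :] * np.exp(-beta * bias_b)).sum(axis=1)) / beta
    f_new -= f_new[0]
    if np.max(np.abs(f_new - f)) < 1e-10:
        break
    f = f_new

mask = P > 0
pmf = np.full_like(P, np.inf)
pmf[mask] = -np.log(P[mask]) / beta
pmf -= pmf[mask].min()
print(f"PMF barrier estimate: {pmf[np.argmin(np.abs(mid))]:.2f} kT")  # true: 1 kT
```

The slow convergence of exactly this loop, and the absence of error bars on the resulting profile, are the two gaps the paper addresses with superlinear optimization and the coarse-grained mean-force error estimate.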

Zhu, Fangqiang; Hummer, Gerhard

2012-01-01

188

Real-Time Parameter Estimation Using Output Error  

NASA Technical Reports Server (NTRS)

Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.

Grauer, Jared A.

2014-01-01

189

Method of identifying dynamic multileaf collimator irradiation that is highly sensitive to a systematic MLC calibration error.  

PubMed

In intensity modulated radiotherapy (IMRT), radiation is delivered in a series of multileaf collimator (MLC) subfields. A subfield with a small leaf-to-leaf opening is highly sensitive to a leaf-positional error. We introduce a method of identifying and rejecting IMRT plans that are highly sensitive to a systematic MLC gap error (sensitivity to possible random leaf-positional errors is not addressed here). There are two sources of a systematic MLC gap error: centerline mechanical offset (CMO) and, in the case of a rounded-end MLC, radiation field offset (RFO). In an IMRT planning system, using an incorrect value of RFO introduces a systematic error ΔRFO that results in all leaf-to-leaf gaps being either too large or too small by 2·ΔRFO, whereas assuming that CMO is zero introduces a systematic error ΔCMO that results in all gaps being too large by ΔCMO = CMO. We introduce the concept of the average leaf pair opening (ALPO), which can be calculated from a dynamic MLC delivery file. We derive an analytic formula for the fractional average fluence error resulting from a systematic gap error Δx and show that it is inversely proportional to ALPO; explicitly, it is equal to Δx/(ALPO + 2·RFO + ε), in which ε is generally of the order of 1 mm and Δx = 2·ΔRFO + CMO. This analytic relationship is verified with independent numerical calculations. PMID:11764025
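
The abstract's formula is simple enough to evaluate directly. The sketch below computes ALPO from leaf positions and applies the stated relation; the leaf-position data and offset magnitudes are made up, and ε is set to 1 mm following the abstract's "order of 1 mm" remark.

```python
import numpy as np

# Fractional average fluence error from a systematic MLC gap error, using the
# abstract's relation:  dF/F = dx / (ALPO + 2*RFO + eps),  dx = 2*dRFO + CMO.
def fractional_fluence_error(left, right, d_rfo_mm, cmo_mm, rfo_mm, eps_mm=1.0):
    """left/right: arrays of leaf positions (mm) over all subfields/leaf pairs."""
    alpo = np.mean(right - left)             # average leaf pair opening
    dx = 2.0 * d_rfo_mm + cmo_mm             # net systematic gap error
    return dx / (alpo + 2.0 * rfo_mm + eps_mm)

# Toy comparison: a highly modulated field (small openings) vs. a broad field.
rng = np.random.default_rng(8)
small = rng.uniform(2.0, 10.0, 200)          # leaf-to-leaf gaps of ~2-10 mm
broad = rng.uniform(30.0, 60.0, 200)
for name, gaps in [("small-gap IMRT field", small), ("broad field", broad)]:
    err = fractional_fluence_error(np.zeros_like(gaps), gaps,
                                   d_rfo_mm=0.2, cmo_mm=0.3, rfo_mm=1.0)
    print(f"{name}: fractional fluence error = {err:.1%}")
```

The inverse dependence on ALPO is the screening criterion: plans whose delivery files yield a small ALPO are flagged as highly sensitive to a systematic gap error.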

Zygmanski, P; Kung, J H

2001-11-01

190

A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation  

NASA Technical Reports Server (NTRS)

A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
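
One way to make the selection criterion concrete is to score each candidate sensor subset by the steady-state Kalman filter error covariance it yields, then search over subsets. The brute-force sketch below covers only this scoring step with hypothetical names; the paper's optimization and its tuning-parameter treatment are more sophisticated.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import solve_discrete_are

def posterior_trace(A, C, Q, R):
    """Steady-state Kalman MSE (trace of the posterior covariance) for a
    candidate sensor set with output matrix C and noise covariance R."""
    P = solve_discrete_are(A.T, C.T, Q, R)        # a priori covariance
    S = C @ P @ C.T + R
    K = P @ C.T @ np.linalg.inv(S)
    return np.trace(P - K @ C @ P)                # a posteriori covariance

def best_sensor_suite(A, C_full, Q, R_full, n_sensors):
    """Exhaustive search over sensor subsets -- a brute-force stand-in
    for the optimization approach described above."""
    best = None
    for idx in combinations(range(C_full.shape[0]), n_sensors):
        idx = list(idx)
        cost = posterior_trace(A, C_full[idx], Q, R_full[np.ix_(idx, idx)])
        if best is None or cost < best[0]:
            best = (cost, idx)
    return best
```

For more than a handful of candidate sensors the exhaustive loop becomes impractical, and a greedy or gradient-based search would replace it.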

Simon, Donald L.; Garg, Sanjay

2009-01-01

191

A Numerical Study of A Posteriori Error Estimators for Convection--Diffusion  

E-print Network

This paper presents a numerical study of a posteriori error estimators for convection-diffusion equations. No single error estimator works satisfactorily in all tests. Key words: convection dominated problems, a posteriori error estimators

John, Volker

192

Synoptic scale forecast skill and systematic errors in the MASS 2.0 model. [Mesoscale Atmospheric Simulation System  

NASA Technical Reports Server (NTRS)

The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.

Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.

1985-01-01

193

Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources  

NASA Technical Reports Server (NTRS)

Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors, including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

1999-01-01

194

Nonlocal treatment of systematic errors in the processing of sparse and incomplete sensor data  

SciTech Connect

A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse and incomplete sensor data. We present a detailed application of this methodology to the construction of navigation maps from wide-angle sonar sensor data acquired by the HERMIES IIB mobile robot. Our uncertainty approach is explicitly nonlocal. We use a binary labelling scheme and a simple logic for the rule of combination. We then correct erroneous interpretations of the data by analyzing pixel patterns of conflict and by imposing consistent labelling conditions. 9 refs., 6 figs.

Beckerman, M.; Oblow, E.M.

1988-03-01

195

Systematic errors as the cause for an apparent deep water property variability: global analysis of the WOCE and historical hydrographic data  

Microsoft Academic Search

High-quality hydrographic sections occupied during the World Ocean Circulation Experiment (WOCE) have allowed the first estimates to be made of property changes in the deep ocean on a decadal time-scale. The magnitude of the property variability on deep isothermal surfaces (below about 2–3°C) was found to be comparable with the magnitude of possible systematic errors in the data (except for

V. V Gouretski; K Jancke

2000-01-01

196

Error estimation for CFD aeroheating prediction under rarefied flow condition  

NASA Astrophysics Data System (ADS)

Both direct simulation Monte Carlo (DSMC) and computational fluid dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the proposed parameter, compared with two other parameters, Kn∞ and Ma∞·Kn∞.

Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

2014-12-01

197

Error estimates for the Skyrme-Hartree-Fock model  

E-print Network

There are many complementing strategies to estimate the extrapolation errors of a model which was calibrated in least-squares fits. We consider the Skyrme-Hartree-Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, dedicated variation of model parameters. This gives useful insight into the impact of the key fit data as they are: binding energies, charge r.m.s. radii, and charge form factor. Amongst others, we check in particular the predictive value for observables in the stable nucleus $^{208}$Pb, the super-heavy element $^{266}$Hs, $r$-process nuclei, and neutron stars.

J. Erler; P. -G. Reinhard

2014-08-01

198

Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction  

NASA Technical Reports Server (NTRS)

The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
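
The central operation, fitting each pixel time series against the co-trending basis vectors rather than the co-added flux, reduces to an ordinary least-squares problem per pixel. A schematic sketch with assumed array names, not the Kepler pipeline code:

```python
import numpy as np

def fit_pixels_to_cbv(pixel_ts, cbv):
    """Least-squares fit of each pixel time series to co-trending basis
    vectors (CBVs).

    pixel_ts : (n_cadences, n_pixels) background-removed pixel fluxes
    cbv      : (n_cadences, n_vectors) co-trending basis vectors
    Returns per-pixel coefficients and residual time series.
    """
    coeffs, *_ = np.linalg.lstsq(cbv, pixel_ts, rcond=None)
    residuals = pixel_ts - cbv @ coeffs
    return coeffs, residuals
```

The resulting per-pixel coefficient maps are what the abstract proposes to regress, in a second step, against spatial derivatives of the pixel response function.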

Kolodzijczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

2013-01-01

199

Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown  

ERIC Educational Resources Information Center

When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
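
A sketch of one plausible bootstrap scheme for this setting: redraw the item parameters from their estimated sampling distributions and re-estimate ability each time, so that the spread of the replicates reflects the error carried over from item calibration. The 2PL likelihood and the resampling design here are illustrative assumptions; the paper's exact procedure may differ.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ability_mle(responses, a, b):
    """ML ability estimate under a 2PL model with given item parameters
    (a: discriminations, b: difficulties, responses: 0/1 vector)."""
    def nll(theta):
        p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log1p(-p))
    return minimize_scalar(nll, bounds=(-6, 6), method="bounded").x

def bootstrap_se(responses, a_hat, b_hat, a_se, b_se, n_boot=500, seed=1):
    """Parametric bootstrap: perturb item parameters by their calibration
    SEs and re-estimate ability; the SD of the replicates is the
    calibration-corrected standard error."""
    rng = np.random.default_rng(seed)
    thetas = [ability_mle(responses,
                          rng.normal(a_hat, a_se),
                          rng.normal(b_hat, b_se))
              for _ in range(n_boot)]
    return np.std(thetas, ddof=1)
```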

Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

2014-01-01

200

A Posteriori Error Estimation for a Nodal Method in Neutron Transport Calculations  

SciTech Connect

An a posteriori error analysis of the spatial approximation is developed for the one-dimensional Arbitrarily High Order Transport-Nodal method. The error estimator preserves the order of convergence of the method when the mesh size tends to zero with respect to the L^2 norm. It is based on the difference between two discrete solutions that are available from the analysis. The proposed estimator is decomposed into error indicators to allow the quantification of local errors. Some test problems with isotropic scattering are solved to compare the behavior of the true error to that of the estimated error.

Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.

1999-11-03

201

Model Error Estimation for the CPTEC Eta Model  

NASA Technical Reports Server (NTRS)

Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

Tippett, Michael K.; daSilva, Arlindo

1999-01-01

202

A robust SUPG norm a posteriori error estimator for stationary convectiondiffusion equations  

E-print Network

Available online 14 December 2012. Keywords: stationary convection-diffusion equations; SUPG finite element method; error in SUPG norm; a posteriori error estimator; adaptive grid refinement.

John, Volker

203

An error analysis of the sensorless position estimation for BLDC motors  

Microsoft Academic Search

This paper presents a novel sensorless operation technique for brushless DC (BLDC) motors and a new approach of error analysis for the sensorless BLDC motor drive. The analysis breaks down the estimated position error into its fundamental elements in the BLDC motor drive system. The position estimation error decreases the motor efficiency and produces torque pulsation. In this paper, the

Tae-Hyung Kim; Mehrdad Ehsani

2003-01-01

204

An algorithm for estimation and separation of ephemeris and clock errors in SBAS  

NASA Astrophysics Data System (ADS)

The estimation and separation of ephemeris and clock errors is an integral part of a SBAS (Space Based Augmentation System). Generally, the global solution is based on the full state approach for satellite errors (ephemeris and clock) and station errors, using a large least-squares estimator; alternatively, the ephemeris and clock are estimated sequentially through a Kalman filter, using a complex model of the satellite dynamics. In this paper, the estimation and separation of ephemeris and clock errors is addressed through a unique approach that combines both methods. The algorithm employs measurements which are pre-processed for various errors and known biases. A single difference technique is used to separately estimate the ephemeris and clock components. The ephemeris Kalman filter uses a priori information of ephemeris errors along with measurements through a minimum variance estimator to provide the ephemeris error estimate. A similar approach is adopted in the clock error estimation process, to provide clock and clock rate estimates. The algorithm results are presented using simulated data for known errors in ephemeris/clock and subsequent retrieval. This algorithm estimates these errors as corrections to the broadcast Global Positioning System (GPS) navigation data, required by a SBAS user for accuracy improvement.

Mishra, S.; Gupta, R.; Ganeshan, A. S.

2009-10-01

205

Estimation of the sampling interval error for LED measurement with a goniophotometer  

NASA Astrophysics Data System (ADS)

Using a goniophotometer to implement a total luminous flux measurement, an error arises from the sampling interval, especially in LED measurement. In this work, we use computer calculations to estimate the effect of the sampling interval on measuring the total luminous flux for four typical kinds of LEDs, whose spatial distributions of luminous intensity are similar to those of the LEDs described in CIE publication 127. Four basic kinds of mathematical functions are selected to simulate the distribution curves. Both axially symmetric and non-axially symmetric LEDs are considered. We consider polar angle sampling intervals of 0.5°, 1°, 2°, and 5° in one rotation for the axially symmetric type, and azimuth angle sampling intervals of 18°, 15°, 12°, 10° and 5° for the non-axially symmetric type. We note that the error is strongly related to the spatial distribution. However, for common LED light sources the calculation results show that a polar angle sampling interval of 2° and an azimuth angle sampling interval of 15° are recommended. The systematic error of the sampling interval for a goniophotometer can then be controlled at the level of 0.3%. For higher precision, a polar angle sampling interval of 1° and an azimuth angle sampling interval of 10° should be used.
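
The effect of the polar sampling interval can be reproduced with a few lines of numerical integration: for an axially symmetric source the total flux is Φ = 2π ∫ I(θ) sin θ dθ, evaluated on grids of different spacing and compared against a dense reference. A minimal sketch with an assumed Lambertian-like test distribution standing in for the CIE 127 curves:

```python
import numpy as np

def total_flux(intensity, d_theta_deg):
    """Total luminous flux of an axially symmetric source via
    Phi = 2*pi * integral of I(theta)*sin(theta) d(theta)."""
    theta = np.deg2rad(np.arange(0.0, 180.0 + 1e-9, d_theta_deg))
    return 2.0 * np.pi * np.trapz(intensity(theta) * np.sin(theta), theta)

# cosine (Lambertian-like) distribution as a generic test shape
lambertian = lambda th: np.clip(np.cos(th), 0.0, None)

fine = total_flux(lambertian, 0.5)           # dense reference sampling
for step in (1.0, 2.0, 5.0):
    coarse = total_flux(lambertian, step)
    print(f"{step:>4.1f} deg interval: relative error {abs(coarse - fine) / fine:.2e}")
```

Repeating this for sharper or multi-lobed intensity functions shows the abstract's main point: the sampling-interval error depends strongly on the spatial distribution, not just on the step size.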

Zhao, Weiqiang; Liu, Hui; Liu, Jian

2013-06-01

206

Reduced basis approximation and a posteriori error estimation for the time-dependent viscous Burgers’ equation  

E-print Network

In this paper we present rigorous a posteriori L 2 error bounds for reduced basis approximations of the unsteady viscous Burgers’ equation in one space dimension. The a posteriori error estimator, derived from standard ...

Nguyen, Ngoc Cuong

207

Measurement Error Webinar Series: Estimating usual intake distributions for multivariate dietary variables  

Cancer.gov

Identify challenges in addressing measurement error when modeling multivariate dietary variables such as diet quality indices. Describe statistical modeling techniques to correct for measurement error in estimating multivariate dietary variables.

208

Methods for post-processing eddy correlation data - issues and error estimates  

NASA Astrophysics Data System (ADS)

Seven years of turbulent flux measurements from the Southern Old Aspen site in Saskatchewan, Canada, were analyzed to quantify the effect of systematic biases of data processing on half-hourly fluxes as well as annual estimates of net ecosystem production (NEP). Recently, efforts have intensified to unify the processing of half-hourly fluxes as well as annual carbon sequestration across flux tower networks such as Ameriflux. With the range of ecosystems monitored by a growing number of researchers increasing constantly, it is important that biases in results from various sites are kept at a minimum. The newly established Fluxnet-Canada network will take efforts towards unification a step further by using standardized sensors, a common measurement methodology and a single processing software. Last year, at a workshop hosted by the Ameriflux network, recommendations for standard processing of eddy correlation data were made, proposing block averaging, planar fit coordinate rotation, and spectral corrections of high frequency data. We analyzed flux data to quantify the effects of following these recommendations. When the processing was changed from our current method (half-hourly coordinate rotation, no high frequency corrections), the annual NEP estimate of 200 g C m-2 yr-1 in 2000 changed by about 10 g C m-2 yr-1. This was close to the random error of the annual NEP. Systematic biases due to gap filling of the annual record were about 50 g C m-2 yr-1, while annual NEP estimates ranged from 150 g C m-2 yr-1 in 1996 to 370 g C m-2 yr-1 in 2001. Differences even smaller than this can be resolved since the systematic biases are very similar for all gap filling methods.

Morgenstern, K.; Black, T. A.; Barr, A. G.; Kljun, N.; Gaumont-Guay, D.; Nesic, Z.

2003-04-01

209

Computer-based estimation and compensation of diametral errors in CNC turning of cantilever bars  

Microsoft Academic Search

This paper aims to introduce a computer-based estimation and compensation method for diametral errors in cantilever bar turning without additional hardware requirements. In the error estimation method, the error characteristics of workpieces are determined experimentally depending on cutting speed, depth of cut, feed rate, workpiece diameter, length from the chuck and the geometric error sum of the CNC lathe. An Artificial

Eyüp Sabri Topal; Can Çoğun

210

RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION  

SciTech Connect

The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

GARDNER,C.J.; LEE,Y.Y.; WENG,W.T.

1998-06-22

211

Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels  

SciTech Connect

Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices.Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons).Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning.Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.
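
For reference, the metric under discussion can be stated compactly: the gamma index at an evaluated point is the minimum, over reference points, of the combined dose-difference/distance-to-agreement measure, and the passing rate is the fraction of points with gamma ≤ 1. A brute-force 1D sketch follows; clinical tools work on 2D/3D dose grids with interpolation and dose thresholds, so this is only the core idea.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.02, dta=2.0, local=True):
    """Brute-force 1D gamma index: for each evaluated point, the minimum
    over reference points of sqrt(dose_term^2 + distance_term^2).
    dd is the dose criterion (fraction), dta the distance-to-agreement
    (mm); local=True gives the more sensitive local-normalization metric
    (e.g. 2%/2 mm) advocated above. Low-dose points should be thresholded
    out in practice to avoid division by tiny local doses."""
    gamma = np.empty(len(x_eval))
    for i, (x, d) in enumerate(zip(x_eval, d_eval)):
        denom = dd * (d_ref if local else d_ref.max())
        dose_term = (d - d_ref) / denom
        dist_term = (x - x_ref) / dta
        gamma[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gamma

# passing rate at the chosen criterion: np.mean(gamma <= 1.0)
```

Switching local from False to True reproduces the study's main lever: global normalization dilutes large local deviations in low-dose regions, which is one reason 3%/3 mm global passing rates can stay above 99% while systematic errors persist.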

Nelms, Benjamin E. [Canis Lupus LLC, Merrimac, Wisconsin 53561 (United States)]; Chan, Maria F. [Memorial Sloan-Kettering Cancer Center, Basking Ridge, New Jersey 07920 (United States)]; Jarry, Geneviève; Lemire, Matthieu [Hôpital Maisonneuve-Rosemont, Montréal, QC H1T 2M4 (Canada)]; Lowden, John [Indiana University Health - Goshen Hospital, Goshen, Indiana 46526 (United States)]; Hampton, Carnell [Levine Cancer Institute/Carolinas Medical Center, Concord, North Carolina 28025 (United States)]; Feygelman, Vladimir [Moffitt Cancer Center, Tampa, Florida 33612 (United States)]

2013-11-15

212

Power control for the additive white Gaussian noise channel under channel estimation errors  

Microsoft Academic Search

We investigate the time-varying additive white Gaussian noise channel with imperfect side-information. In practical systems, the channel gain may be estimated from a probing signal and estimation errors cannot be avoided. The goal of this paper is to determine a power allocation that a priori incorporates statistical knowledge of the estimation error. This is in contrast to prior work which

Thierry E. Klein; Robert G. Gallager

2001-01-01

213

Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors  

E-print Network

This paper considers the problem of nonparametric density estimation in the pile-up model. Adaptive nonparametric estimators are proposed for the pile-up model in its simple form as well as in the case of additional measurement errors. Furthermore

Comte, Fabienne; Rebafka, Tabea

214

An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method  

NASA Technical Reports Server (NTRS)

State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.
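
A minimal sketch of the general idea, using a residual-based ("sandwich") covariance as one simple way to obtain an empirical state error covariance from a weighted least squares solve; the paper derives its own reinterpretation of the WLS equations, which differs in detail.

```python
import numpy as np

def wls_with_empirical_cov(H, y, W):
    """Weighted least squares plus an empirical, residual-based state
    error covariance. H: (m, n) design matrix, y: (m,) measurements,
    W: (m, m) weight matrix."""
    N = H.T @ W @ H                       # normal matrix
    x_hat = np.linalg.solve(N, H.T @ W @ y)
    r = y - H @ x_hat                     # post-fit residuals
    G = np.linalg.solve(N, H.T @ W)       # estimator gain: x_hat = G @ y
    P_emp = G @ np.diag(r**2) @ G.T       # covariance driven by the data
    return x_hat, P_emp
```

Because P_emp is built from the actual residuals rather than an assumed measurement noise model, it reflects whatever error sources are really present, which is the property the abstract emphasizes.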

Frisbee, Joseph H., Jr.

2011-01-01

215

Additional results for model-based nonparametric variance estimation for systematic sampling in a  

E-print Network

Systematic sampling is frequently used in natural resource surveys. One drawback of systematic sampling, however, is that no direct estimator of the design variance is available. We describe

Francisco-Fernández, Mario

216

Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra  

NASA Astrophysics Data System (ADS)

We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ˜10 m s-1 precision over the entire optical wavelength range on scales of both echelle orders (˜50-100 Å) and entire spectrograph arms (˜1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s-1 per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.

Whitmore, Jonathan B.; Murphy, Michael T.

2015-02-01

217

Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra  

E-print Network

We present a new `supercalibration' technique for measuring systematic distortions in the wavelength scales of high resolution spectrographs. By comparing spectra of `solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium--argon calibration can be tracked with $\sim$10 m s$^{-1}$ precision over the entire optical wavelength range on scales of both echelle orders ($\sim$50--100 \AA) and entire spectrograph arms ($\sim$1000--3000 \AA). Using archival spectra from the past 20 years, we have probed the supercalibration history of the VLT--UVES and Keck--HIRES spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically $\pm$200 m s$^{-1}$ per 1000 \AA. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, $\alpha$. The spurious deviations in $\alpha$ produced by the model closely match important aspects of the VLT--UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in $\alpha$ from quasar absorption lines.

J. B. Whitmore; M. T. Murphy

2014-11-18

218

X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors  

SciTech Connect

Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super high resolution. One of the major impediments for development of x-ray optics with the required beamline performance comes from the inadequate present level of optical and at-wavelength metrology and insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 µrad have become routine. We present recent results from the ALS of temperature stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

2010-07-09

219

Systematic Exploration of the Neutrino Factory Parameter Space including Errors and Correlations  

E-print Network

We discuss in a systematic way the extraction of neutrino masses, mixing angles and leptonic CP violation at neutrino factories. Compared to previous studies we put a special emphasis on improved statistical methods and on the multidimensional nature of the combined fits of the nu_e -> nu_mu, \\bar nu_e -> \\bar nu_mu appearance and nu_mu -> nu_mu, \\bar nu_mu -> \\bar nu_mu disappearance channels. Uncertainties of all involved parameters and statistical errors are included. We find previously ignored correlations in the multidimensional parameter space, leading to modifications in the physics reach, which amount in some cases to one order of magnitude. Including proper statistical errors we determine for all parameters the improved sensitivity limits for various baselines, beam energies, neutrino fluxes and detector masses. Our results allow a comparison of the physics potential for different choices of baseline and beam energy with regard to all involved parameters. In addition we discuss in more detail the problem of parameter degeneracies in measurements of delta_CP.

M. Freund; P. Huber; M. Lindner

2001-05-08

220

Adaptive subset offset for systematic error reduction in incremental digital image correlation  

NASA Astrophysics Data System (ADS)

Digital image correlation (DIC) relies on a high correlation between the intensities in the reference image and the target image. When decorrelation occurs due to large deformation or viewpoint change, incremental DIC is utilized to update the reference image and use the correspondences in this renewed image as the reference points in subsequent DIC computation. As each updated reference point is derived from previous correlation, its location is generally of sub-pixel accuracy. A conventional subset centered at the point results in subset points at non-integer positions. Therefore, the acquisition of the intensities of the subset demands interpolation, which is proved to introduce additional systematic error. We hereby present adaptive subset offset to slightly translate the subset so that all the subset points fall on integer positions. By this means, interpolation in the updated reference image is totally avoided regardless of the non-integer locations of the reference points. The translation is determined according to the decimal part of the reference point location, and the maximum is half a pixel in each direction. Such small translation has no negative effect on the compatibility of the widely used shape functions, correlation functions and the optimization algorithms. The results of the simulation and the real-world experiments show that adaptive subset offset produces lower measurement error than the conventional method in incremental DIC when applied in both 2D-DIC and 3D-DIC.
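
The offset itself is a one-liner: round the sub-pixel reference point to the nearest integer pixel and shift the subset by the difference, which is bounded by half a pixel per axis. A sketch with assumed names:

```python
import numpy as np

def adaptive_subset_offset(ref_point):
    """Offset that shifts the correlation subset so all of its sample
    points fall on integer pixel positions; the shift is at most half a
    pixel in each direction, so no interpolation is ever required."""
    ref_point = np.asarray(ref_point, dtype=float)
    offset = np.round(ref_point) - ref_point
    center = ref_point + offset          # integer-located subset center
    return offset, center

# e.g. a reference point at (102.37, 85.81) is served by the subset
# centered at (102, 86); intensities are read directly from the pixels
```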

Zhou, Yihao; Sun, Chen; Chen, Jubing

2014-04-01

221

Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska  

NASA Astrophysics Data System (ADS)

Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.

Bonin, J. A.; Chambers, D. P.

2012-12-01

222

Error estimates of the DtN finite element method for the exterior Helmholtz problem  

Microsoft Academic Search

A priori error estimates are established for the DtN (Dirichlet-to-Neumann) finite element method applied to the exterior Helmholtz problem. The error estimates include the effect of truncation of the DtN boundary condition as well as that of the finite element discretization. A property of the Hankel functions which plays an important role in the proof of the error estimates is

Daisuke Koyama

2007-01-01

223

Speech enhancement using a minimum mean-square error log-spectral amplitude estimator  

Microsoft Academic Search

In this correspondence we derive a short-time spectral amplitude (STSA) estimator for speech signals which minimizes the mean-square error of the log-spectra (i.e., the original STSA and its estimator) and examine it in enhancing noisy speech. This estimator is also compared with the corresponding minimum mean-square error STSA estimator derived previously. It was found that the new estimator is very
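
The log-spectral amplitude estimator referred to above has a well-known closed-form gain, G(ξ, γ) = ξ/(1+ξ) · exp(½ E₁(v)) with v = γξ/(1+ξ), where ξ and γ are the a priori and a posteriori SNRs per frequency bin and E₁ is the exponential integral. A direct implementation (how ξ is tracked, e.g. by decision-directed smoothing, is left out of this sketch):

```python
import numpy as np
from scipy.special import exp1

def logmmse_gain(xi, gamma):
    """Log-spectral amplitude MMSE gain:
    G = xi/(1+xi) * exp(0.5 * E1(v)), v = gamma * xi / (1 + xi).
    xi (a priori SNR) and gamma (a posteriori SNR) must be positive."""
    v = gamma * xi / (1.0 + xi)
    return (xi / (1.0 + xi)) * np.exp(0.5 * exp1(v))

# applied per STFT bin: S_hat = logmmse_gain(xi, gamma) * noisy_spectrum
```

Minimizing the error of the log-spectrum rather than the spectrum itself is what gives this gain its characteristically stronger suppression at low SNR, which is the property examined in the correspondence.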

Y. Ephraim; D. Malah

1985-01-01

224

Evaluating concentration estimation errors in ELISA microarray experiments  

SciTech Connect

Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
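
As a small illustration of propagation of error in this setting, the sketch below pushes the uncertainty of a measured response and of the fitted calibration-curve parameters through an inverted standard curve via the delta method. A linear curve is assumed for brevity; ELISA standard curves are usually sigmoidal (e.g. four-parameter logistic), but the propagation logic is the same.

```python
import numpy as np

def concentration_error(y, y_se, slope, intercept, cov):
    """Delta-method propagation of error through a linear standard curve
    y = intercept + slope * c, inverted to predict concentration c.

    y, y_se : measured response and its standard error
    cov     : 2x2 covariance of (intercept, slope) from the curve fit
    """
    c = (y - intercept) / slope
    # gradient of c with respect to (y, intercept, slope)
    g = np.array([1.0 / slope, -1.0 / slope, -c / slope])
    V = np.zeros((3, 3))
    V[0, 0] = y_se**2         # response variance
    V[1:, 1:] = cov           # calibration-parameter covariance
    return c, float(np.sqrt(g @ V @ g))
```

As the abstract notes, this machinery is only trustworthy after design, screening, and normalization have made the replicate data comparable; the propagation step itself is the easy part.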

Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.

2005-01-26

225

Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors  

NASA Astrophysics Data System (ADS)

GNSS Radio Occultation (RO) data are very well suited for climate applications, since they do not require external calibration and only short-term measurement stability over the occultation event duration (1 - 2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05 % and 0.05 K in almost any case (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already allows to see statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, like the influence of the quality control and the high altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity with atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are not negligible anymore. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the night time bending angle bias stays constant over the whole period of 11 years, while the day time bias increases from low to high solar activity. As a result, the difference between night and day time bias increases from -0.05 µrad to -0.4 µrad. This behavior paves the way to correct the (small) solar cycle dependent bias of large ensembles of day time RO profiles.

Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.

2013-05-01

226

Effects of resighting errors on capture-resight estimates for neck-banded Canada geese  

USGS Publications Warehouse

Biologists who study neck-banded Canada Geese (Branta canadensis) have used capture and resighting histories to estimate annual resighting rates, survival rates and the number of marked birds in the population. Resighting errors were associated with 9.4% (n = 155) of the birds from a sample of Canada Geese neck-banded in the Mississippi flyway, 1974-1987, and constituted 3.0% (n = 208) of the resightings. Resighting errors significantly reduced estimated resighting rates and significantly increased estimated numbers of marked geese in the sample. Estimates of survival rates were not significantly affected by resighting errors. Recommendations are offered for using neck-band characters that may reduce resighting errors.

Weiss, N.T.; Samuel, M.D.; Rusch, D.H.; Caswell, F.D.

1991-01-01

227

Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.  

NASA Technical Reports Server (NTRS)

This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

2002-01-01

228

Error estimation and adaptivity for nonlocal damage models  

Microsoft Academic Search

Nonlocal damage models are typically used to model failure of quasi-brittle materials. Due to brittleness, the choice of a particular model or set of parameters can have a crucial influence on the structural response. To assess this influence, it is essential to keep finite element discretization errors under control. If not, the effect of these errors on the result of

Antonio Rodr??guez-Ferran; Antonio Huerta

2000-01-01

229

An estimate of asthma prevalence in Africa: a systematic analysis  

PubMed Central

Aim To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to provide information that will help inform the planning of the public health response to the disease. Methods We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year. Results Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers. Conclusion Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be under-estimated. There is a need for national governments in Africa to consider the implications of this increasing disease burden and to investigate the relative importance of underlying risk factors such as rising urbanization and population aging in their policy and health planning responses to this challenge. PMID:24382846

Adeloye, Davies; Chan, Kit Yee; Rudan, Igor; Campbell, Harry

2013-01-01

230

Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models  

PubMed Central

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

2014-01-01

231

Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.  

PubMed

Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
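
The closing experiment can be mimicked with a short Monte Carlo: perturb a synthetic DEM with multiplicative noise, recompute the volume, and compare. This illustrative simulation (not the paper's landslide model) also makes a key feature of multiplicative errors visible: with zero-mean relative noise the volume estimate stays unbiased on average, but its spread scales with the surface heights rather than staying constant as it would for additive errors.

```python
import numpy as np

def volume_under_multiplicative_noise(heights, cell_area, sigma,
                                      n_trials=2000, seed=0):
    """Monte Carlo propagation of multiplicative measurement errors,
    h_obs = h_true * (1 + e) with e ~ N(0, sigma^2), into a DEM-based
    volume estimate V = sum(h) * cell_area."""
    rng = np.random.default_rng(seed)
    v_true = heights.sum() * cell_area
    v_obs = np.array([
        (heights * (1.0 + sigma * rng.standard_normal(heights.shape))).sum()
        * cell_area
        for _ in range(n_trials)
    ])
    return v_true, v_obs.mean(), v_obs.std(ddof=1)
```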

Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

2013-01-01

232

Systematic errors in regional climate model RegCM over Europe and sensitivity to variations in PBL parameterizations  

NASA Astrophysics Data System (ADS)

Systematic errors in near-surface temperature (T2m), total cloud cover (CLD), shortwave albedo (ALB) and surface net longwave (SNL) and shortwave energy flux (SNS) are detected in simulations of RegCM on 50 km resolution over the European CORDEX domain when forced with ERA-Interim reanalysis. Simulated T2m is compared to CRU 3.0 and other variables to the GEWEX-SRB 3.0 dataset. Most of the systematic errors found in SNL and SNS are consistent with errors in T2m, CLD and ALB: they include prevailing negative errors in T2m and positive errors in CLD present during most of the year. Errors in T2m and CLD can be associated with the overestimation of SNL and SNS in most simulations. The impact of errors in albedo is primarily confined to north Africa, where e.g. underestimation of albedo in JJA is consistent with associated surface heating and positive SNS and T2m errors. Sensitivity to the choice of the PBL scheme and various parameters in PBL schemes is examined from an ensemble of 20 simulations. The recently implemented prognostic PBL scheme performs over Europe with mixed success when compared to the standard diagnostic scheme, with a general increase of errors in T2m and CLD over all of the domain. Nevertheless, improvements in T2m can be found in e.g. north-eastern Europe during DJF and western Europe during JJA, where substantial warm biases existed in simulations with the diagnostic scheme. The most detectable impact, in terms of the JJA T2m errors over western Europe, comes from the variation in the formulation of mixing length. In order to reduce the above errors, an update of the RegCM albedo values and further work in customizing the PBL scheme is suggested.

Güttler, I.

2012-04-01

233

A measurement of the systematic astrometric error in GeMS and the short-term astrometric precision in ShaneAO  

NASA Astrophysics Data System (ADS)

We measure the long-term systematic component of the astrometric error in the GeMS MCAO system as a function of field radius and Ks magnitude. The experiment uses two epochs of observations of NGC 1851 separated by one month. The systematic component is estimated for each of three field of view cases (15'' radius, 30'' radius, and full field) and each of three distortion correction schemes: 8 DOF/chip + local distortion correction (LDC), 8 DOF/chip with no LDC, and 4 DOF/chip with no LDC. For bright, unsaturated stars with 13 < Ks < 16, the systematic component is < 0.2, 0.3, and 0.4 mas, respectively, for the 15'' radius, 30'' radius, and full field cases, provided that an 8 DOF/chip distortion correction with LDC (for the full-field case) is used to correct distortions. An 8 DOF/chip distortion-correction model always outperforms a 4 DOF/chip model, at all field positions and magnitudes and for all field-of-view cases, indicating the presence of high-order distortion changes. Given the order of the models needed to correct these distortions (~8 DOF/chip or 32 degrees of freedom total), it is expected that at least 25 stars per square arcminute would be needed to keep systematic errors at less than 0.3 milliarcseconds for multi-year programs. We also estimate the short-term astrometric precision of the newly upgraded Shane AO system with undithered M92 observations. Using a 6-parameter linear transformation to register images, the system delivers ~0.3 mas astrometric error over short-term observations of 2-3 minutes.
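
The short-term precision measurement described last rests on a six-parameter linear registration between star lists, which is an ordinary least-squares fit of an affine transform. A compact sketch with hypothetical names, not the authors' pipeline:

```python
import numpy as np

def fit_linear_transform(xy_from, xy_to):
    """Six-parameter linear (affine) transform between two star lists:
    [x', y'] = A @ [x, y] + t, solved by least squares.
    xy_from, xy_to : (n_stars, 2) matched positions in the two frames."""
    n = len(xy_from)
    M = np.hstack([xy_from, np.ones((n, 1))])          # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(M, xy_to, rcond=None)  # (3, 2): A rows and t
    return params

def residual_rms(xy_from, xy_to, params):
    """Per-star residuals after registration; their RMS is one simple
    proxy for the short-term astrometric error quoted above."""
    M = np.hstack([xy_from, np.ones((len(xy_from), 1))])
    r = xy_to - M @ params
    return np.sqrt(np.mean(np.sum(r**2, axis=1)))
```

The contrast drawn in the abstract is that a 6-parameter transform suffices for undithered short-term stacks, whereas multi-epoch work requires the much higher-order (8 DOF/chip plus local) distortion models.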

Ammons, S. M.; Neichel, Benoit; Lu, Jessica; Gavel, Donald T.; Srinath, Srikar; McGurk, Rosalie; Rudy, Alex; Rockosi, Connie; Marois, Christian; Macintosh, Bruce; Savransky, Dmitry; Galicher, Raphael; Bendek, Eduardo; Guyon, Olivier; Marin, Eduardo; Garrel, Vincent; Sivo, Gaetano

2014-08-01

234

Pedigree error due to extra-pair reproduction substantially biases estimates of inbreeding depression.  

PubMed

Understanding the evolutionary dynamics of inbreeding and inbreeding depression requires unbiased estimation of inbreeding depression across diverse mating systems. However, studies estimating inbreeding depression often measure inbreeding with error, for example, based on pedigree data derived from observed parental behavior that ignore paternity error stemming from multiple mating. Such paternity error causes error in estimated coefficients of inbreeding (f) and reproductive success and could bias estimates of inbreeding depression. We used complete "apparent" pedigree data compiled from observed parental behavior and analogous "actual" pedigree data comprising genetic parentage to quantify effects of paternity error stemming from extra-pair reproduction on estimates of f, reproductive success, and inbreeding depression in free-living song sparrows (Melospiza melodia). Paternity error caused widespread error in estimates of f and male reproductive success, causing inbreeding depression in male and female annual and lifetime reproductive success and juvenile male survival to be substantially underestimated. Conversely, inbreeding depression in adult male survival tended to be overestimated when paternity error was ignored. Pedigree error stemming from extra-pair reproduction therefore caused substantial and divergent bias in estimates of inbreeding depression that could bias tests of evolutionary theories regarding inbreeding and inbreeding depression and their links to variation in mating system. PMID:24171712

Reid, Jane M; Keller, Lukas F; Marr, Amy B; Nietlisbach, Pirmin; Sardell, Rebecca J; Arcese, Peter

2014-03-01

235

Interventions to reduce wrong blood in tube errors in transfusion: a systematic review.  

PubMed

This systematic review addresses the issue of wrong blood in tube (WBIT). The objective was to identify interventions that have been implemented and the effectiveness of these interventions to reduce WBIT incidence in red blood cell transfusion. Eligible articles were identified through a comprehensive search of The Cochrane Library, MEDLINE, EMBASE, Cinahl, BNID, and the Transfusion Evidence Library to April 2013. Initial search criteria were wide including primary intervention or observational studies, case reports, expert opinion, and guidelines. There was no restriction by study type, language, or status. Publications before 1995, reviews or reports of a secondary nature, studies of sampling errors outwith transfusion, and articles involving animals were excluded. The primary outcome was a reduction in errors. Study characteristics, outcomes measured, and methodological quality were extracted by 2 authors independently. The principal method of analysis was descriptive. A total of 12,703 references were initially identified. Preliminary secondary screening by 2 reviewers reduced articles for detailed screening to 128 articles. Eleven articles were eventually identified as eligible, resulting in 9 independent studies being included in the review. The overall finding was that all the identified interventions reduced WBIT incidence. Five studies measured the effect of a single intervention, for example, changes to blood sample labeling, weekly feedback, handwritten transfusion requests, and an electronic transfusion system. Four studies reported multiple interventions including education, second check of ID at sampling, and confirmatory sampling. It was not clear which intervention was the most effective. Sustainability of the effectiveness of interventions was also unclear. Targeted interventions, either single or multiple, can lead to a reduction in WBIT; but the sustainability of effectiveness is uncertain. Data on the pre- and postimplementation of interventions need to be collected in future trials to demonstrate effectiveness, and comparative studies are needed of different interventions. PMID:24075096

Cottrell, Susan; Watson, Douglas; Eyre, Toby A; Brunskill, Susan J; Dorée, Carolyn; Murphy, Michael F

2013-10-01

236

A POSTERIORI ERROR ESTIMATES FOR THE CRANK-NICOLSON METHOD  

E-print Network

Keywords: Crank-Nicolson-Galerkin reconstruction, a posteriori error analysis. The first author was partially supported by a `Pythagoras' grant; the third author was supported by the Development Host Site HPMD-CT-2001-00121 and the program Pythagoras of EPEAEK II.

Akrivis, Georgios

237

Autonomous error bounding of position estimates from GPS and Galileo  

E-print Network

In safety-of-life applications of satellite-based navigation, such as the guided approach and landing of an aircraft, the most important question is whether the navigation error is tolerable. Although differentially corrected ...

Temple, Thomas J. (Thomas John)

2006-01-01

238

On error estimator and adaptivity in the meshless Galerkin boundary node method  

NASA Astrophysics Data System (ADS)

In this article, a simple a posteriori error estimator and an effective adaptive refinement process for the meshless Galerkin boundary node method (GBNM) are presented. The error estimator is formulated as the difference between the GBNM solution itself and its L2-orthogonal projection. With the help of a localization technique, the error is estimated by easily computable local error indicators and hence an adaptive algorithm for h-adaptivity is formulated. The convergence of this adaptive algorithm is verified theoretically in Sobolev spaces. Numerical examples involving potential and elasticity problems are also provided to illustrate the performance and usefulness of this adaptive meshless method.

Li, Xiaolin

2012-07-01

239

Aerial measurement error with a dot planimeter: Some experimental estimates  

NASA Technical Reports Server (NTRS)

A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured provides the entire correlation with accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.

Yuill, R. S.

1971-01-01
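
A minimal Python sketch of the kind of simulation the record above describes: lay a square dot grid with a random offset over a known region (a unit disk here, so the true area is pi) and tabulate how the error shrinks as the number of dots grows. All names and parameters are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

def dot_estimate(spacing):
    # dot grid with a random offset; area represented by each dot = spacing**2
    ox, oy = rng.uniform(0, spacing, size=2)
    xs = np.arange(-2, 2, spacing) + ox
    ys = np.arange(-2, 2, spacing) + oy
    X, Y = np.meshgrid(xs, ys)
    inside = X**2 + Y**2 <= 1.0           # unit disk, true area = pi
    return inside.sum() * spacing**2

for spacing in (0.5, 0.2, 0.1, 0.05):
    est = np.array([dot_estimate(spacing) for _ in range(200)])
    print(f"spacing={spacing:5.2f}  mean={est.mean():.4f}  "
          f"max|error|={np.abs(est - np.pi).max():.4f}")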

240

Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors  

NASA Astrophysics Data System (ADS)

Monthly-mean precipitation, mean (TAVG), maximum (TMAX) and minimum (TMIN) surface air temperatures, and cloudiness from the CORDEX-Africa regional climate model (RCM) hindcast experiment are evaluated for model skill and systematic biases. All RCMs simulate basic climatological features of these variables reasonably, but systematic biases also occur across these models. All RCMs show higher fidelity in simulating precipitation for the west part of Africa than for the east part, and for the tropics than for northern Sahara. Interannual variation in the wet season rainfall is better simulated for the western Sahel than for the Ethiopian Highlands. RCM skill is higher for TAVG and TMAX than for TMIN, and regionally, for the subtropics than for the tropics. RCM skill in simulating cloudiness is generally lower than for precipitation or temperatures. For all variables, multi-model ensemble (ENS) generally outperforms individual models included in ENS. An overarching conclusion in this study is that some model biases vary systematically for regions, variables, and metrics, posing difficulties in defining a single representative index to measure model fidelity, especially for constructing ENS. This is an important concern in climate change impact assessment studies because most assessment models are run for specific regions/sectors with forcing data derived from model outputs. Thus, model evaluation and ENS construction must be performed separately for regions, variables, and metrics as required by specific analysis and/or assessments. Evaluations using multiple reference datasets reveal that cross-examination, quality control, and uncertainty estimates of reference data are crucial in model evaluations.

Kim, J.; Waliser, Duane E.; Mattmann, Chris A.; Goodale, Cameron E.; Hart, Andrew F.; Zimdars, Paul A.; Crichton, Daniel J.; Jones, Colin; Nikulin, Grigory; Hewitson, Bruce; Jack, Chris; Lennard, Christopher; Favre, Alice

2014-03-01

241

Cross-Validation, the Jackknife, and the Bootstrap: Excess Error Estimation in Forward Logistic Regression  

Microsoft Academic Search

Given a prediction rule based on a set of patients, what is the probability of incorrectly predicting the outcome of a new patient? Call this probability the true error. An optimistic estimate is the apparent error, or the proportion of incorrect predictions on the original set of patients, and it is the goal of this article to study estimates of

Gail Gong

1986-01-01

242

Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers  

ERIC Educational Resources Information Center

Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

Kim, ChangHwan; Tamborini, Christopher R.

2012-01-01

243

A Non-Orthogonal SVD-based Decomposition for Phase Invariant Error-Related Potential Estimation  

E-print Network

The estimation of the Error Related Potential waveform from a set of trials is a challenging problem. The proposed non-orthogonal SVD-based decomposition of an analytic (Hilbert transform) representation of the observed signal explains a higher portion of the variance of the observed signal with fewer components in the expansion.

Paris-Sud XI, Université de

244

Error Analysis for Silhouette-Based 3D Shape Estimation from Multiple Views  

E-print Network

3D objects are the components of virtual worlds in a variety of 3D multimedia applications. For a minimum variance estimator of silhouette-based 3D shape from multiple views, the individual error variances are required, as is the choice of the number of views.

Niem, Wolfgang

245

An hp-Error Estimate for an Unfitted Discontinuous Galerkin Method  

E-print Network

In this article an unfitted Discontinuous Galerkin method is proposed to discretize elliptic interface problems, and an hp-error estimate is derived.

Massjung, Ralf

246

Adjoint-Based Error Estimation and Mesh Adaptation for Hybridized Discontinuous Galerkin Methods  

E-print Network

Adjoint-based error estimation and mesh adaptation are developed for hybridized discontinuous Galerkin methods, and numerical experiments are presented to investigate the efficiency of the global error estimate. So-called discontinuous Galerkin methods have attracted interest for their popular advantages, such as high-order accuracy. Keywords: discontinuous Galerkin methods.

248

ERROR ESTIMATES FOR THE RUNGE-KUTTA DISCONTINUOUS GALERKIN METHOD FOR THE TRANSPORT EQUATION WITH  

E-print Network

In this paper, we present the first error estimates for the Runge-Kutta discontinuous Galerkin (RKDG) method applied to the transport equation. We consider the RKDG method of order two and take initial data which has compact support and is smooth.

Guzmán, Johnny

249

UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS  

EPA Science Inventory

Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

250

Treatment of systematic errors in the processing of wide-angle sonar sensor data for robotic navigation  

Microsoft Academic Search

A methodology has been developed for the treatment of systematic errors that arise in the processing of sparse sensor data. A detailed application of this methodology to the construction, from wide-angle sonar sensor data, of navigation maps for use in autonomous robotic navigation is presented. In the methodology, a four-valued labeling scheme and a simple logic for label combination are

M. Beckerman; E. M. Oblow

1990-01-01

251

Review Paper: The Effect of Electronic Prescribing on Medication Errors and Adverse Drug Events: A Systematic Review  

Microsoft Academic Search

The objective of this systematic review is to analyse the relative risk reduction on medication error and adverse drug events (ADE) by computerized physician order entry systems (CPOE). We included controlled field studies and pretest-posttest studies, evaluating all types of CPOE systems, drugs and clinical settings. We present the results in evidence tables, calculate the risk ratio with 95% confidence

Elske Ammenwerth; Petra Schnell-Inderst; Christof Machan; Uwe Siebert

2008-01-01

252

The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements  

SciTech Connect

Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.

Anderson, K.K.

1994-05-01

253

Fast estimation of discretization error for FE problems solved by domain decomposition  

NASA Astrophysics Data System (ADS)

This paper presents a strategy for a posteriori error estimation for substructured problems solved by non-overlapping domain decomposition methods. We focus on global estimates of the discretization error obtained through the error in constitutive relation for linear mechanical problems. Our method allows the error estimate to be computed in a fully parallel way for both primal (BDD) and dual (FETI) approaches of non-overlapping domain decomposition, whatever the state (converged or not) of the associated iterative solver. Results obtained on an academic problem show that the strategy we propose is efficient in the sense that correct estimation is obtained with fully parallel computations; they also indicate that the estimation of the discretization error reaches sufficient precision in very few iterations of the domain decomposition solver, which enables highly effective adaptive computational strategies to be considered.

Parret-Fréaud, A.; Rey, C.; Gosselet, P.; Feyel, F.

2010-12-01

254

Effect of errors in genetic and environmental variances on the BLUP estimates of sire breeding values  

E-print Network

This note examines the effect of errors in genetic and environmental variances on the BLUP estimates of sire breeding values, in particular errors over the genetic variance or the heritability value (h2). Because such errors bias the breeding value estimates, it is recommended to use a method for variance components estimation.

Paris-Sud XI, Université de

255

Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery  

NASA Technical Reports Server (NTRS)

Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms in the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshhold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

Martin, D. L.; Perry, M. J.

1994-01-01

256

Estimation of finite population parameters with auxiliary information and response error.  

PubMed

We use a finite population mixed model that accommodates response error in the survey variable of interest and auxiliary information to obtain optimal estimators of population parameters from data collected via simple random sampling. We illustrate the method with the estimation of a regression coefficient and conduct a simulation study to compare the performance of the empirical version of the proposed estimator (obtained by replacing variance components with estimates) with that of the least squares estimator usually employed in such settings. The results suggest that when the auxiliary variable distribution is skewed, the proposed estimator has a smaller mean squared error. PMID:25089123

González, L M; Singer, J M; Stanek, E J

2014-10-01

257

Goal-oriented explicit residual-type error estimates in XFEM  

NASA Astrophysics Data System (ADS)

A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

2013-08-01

258

Effect of calibration errors on Bayesian parameter estimation for gravitational wave signals from inspiral binary systems in the advanced detectors era  

E-print Network

By 2015 the advanced versions of the gravitational-wave detectors Virgo and LIGO will be online. They will collect data in coincidence with enough sensitivity to potentially deliver multiple detections of gravitational waves from inspirals of compact-object binaries. This work is focused on understanding the effects introduced by uncertainties in the calibration of the interferometers. We consider plausible calibration errors based on estimates obtained during LIGO's fifth and Virgo's third science runs, which include frequency-dependent amplitude errors of $\sim$10% and frequency-dependent phase errors of $\sim$3 degrees in each instrument. We quantify the consequences of such errors on estimating the parameters of inspiraling binaries. We find that the systematics introduced by calibration errors on the inferred values of the chirp mass and mass ratio are smaller than 20% of the statistical measurement uncertainties in parameter estimation for 90% of signals in our mock catalog. Meanwhile, the calibration-induced systematics in the inferred sky location of the signal are smaller than $\sim$50% of the statistical uncertainty. We thus conclude that calibration-induced errors at this level are not a significant detriment to accurate parameter estimation.

Salvatore Vitale; Walter Del Pozzo; Tjonnie G. F. Li; Chris Van Den Broeck; Ilya Mandel; Ben Aylott; John Veitch

2012-03-08

259

DETECTABILITY AND ERROR ESTIMATION IN ORBITAL FITS OF RESONANT EXTRASOLAR PLANETS  

SciTech Connect

We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than ~4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist, but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and number of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.

Giuppone, C. A.; Beauge, C. [Observatorio Astronomico, Universidad Nacional de Cordoba, Cordoba (Argentina); Tadeu dos Santos, M.; Ferraz-Mello, S.; Michtchenko, T. A. [Instituto de Astronomia, Geofisica e Ciencias Atmosfericas, Universidade de Sao Paulo, Sao Paulo (Brazil)

2009-07-10

260

Error Estimating Codes for Insertion and Deletion  

E-print Network

Insertion and deletion errors occur with lower probability than bit-flipping errors. Existing error estimating code (EEC) schemes estimate the fraction of bits in the packet flipped during the transmission (i.e., the BER). Our idEEC design can build upon any existing EEC scheme.

Huang, Jiwei; Lall, Ashwin

261

Estimating and bounding aggregations in databases with referential integrity errors  

Microsoft Academic Search

Database integration builds on tables coming from multiple databases by creating a single view of all these data. Each database has different tables, columns with similar content across databases and different referential integrity constraints. Thus, a query in an integrated database is likely to involve tables and columns with referential integrity errors. In a data warehouse environment, even though the

Javier García-garcía; Carlos Ordonez

2008-01-01

262

Estimating Precipitation Errors Using Spaceborne Surface Soil Moisture Retrievals  

Technology Transfer Automated Retrieval System (TEKTRAN)

Limitations in the availability of ground-based rain gauge data currently hamper our ability to quantify errors in global precipitation products over data-poor areas of the world. Over land, these limitations may be eased by approaches based on interpreting the degree of dynamic consistency existin...

263

Error Estimates Derived from the Data for Least-Squares Spline Fitting  

SciTech Connect

The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.

Jerome Blair

2007-06-25
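
The F-error idea in the record above, estimating the signal-dependent part of the error from the difference between two spline fits with different mesh sizes, can be sketched in Python as follows. The signal, noise level, and knot counts are hypothetical; this illustrates the idea, not the paper's exact algorithm.

import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 3 * t)
data = signal + rng.normal(0, 0.1, t.size)

def spline_fit(n_interior_knots):
    knots = np.linspace(0, 1, n_interior_knots + 2)[1:-1]  # interior knots only
    return LSQUnivariateSpline(t, data, knots, k=3)(t)

# difference of two fits with different mesh sizes as a proxy for the F-error
f_error_proxy = spline_fit(8) - spline_fit(16)
print("estimated max F-error:", np.abs(f_error_proxy).max())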

264

Estimating satellite salinity errors for assimilation of Aquarius and SMOS data into climate models  

NASA Astrophysics Data System (ADS)

Constraining dynamical systems with new information from ocean measurements, including observations of sea surface salinity (SSS) from Aquarius and SMOS, requires careful consideration of data errors that are used to determine the importance of constraints in the optimization. Here such errors are derived by comparing satellite SSS observations from Aquarius and SMOS with ocean model output and in situ data. The associated data error variance maps have a complex spatial pattern, ranging from less than 0.05 in the open ocean to 1-2 (units of salinity variance) along the coasts and high latitude regions. Comparing the data-model misfits to the data errors indicates that the Aquarius and SMOS constraints could potentially affect estimated SSS values in several ocean regions, including most tropical latitudes. In reference to the Aquarius error budget, derived errors are less than the total allocation errors for the Aquarius mission accuracy requirements in low and midlatitudes, but exceed allocation errors in high latitudes.

Vinogradova, Nadya T.; Ponte, Rui M.; Fukumori, Ichiro; Wang, Ou

2014-08-01

265

Space-Time Error Representation and Estimation in Navier-Stokes Calculations  

NASA Technical Reports Server (NTRS)

The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

Barth, Timothy J.

2006-01-01

266

Multiplicative errors in the galaxy power spectrum: self-calibration of unknown photometric systematics for precision cosmology  

NASA Astrophysics Data System (ADS)

We develop a general method to `self-calibrate' observations of galaxy clustering with respect to systematics associated with photometric calibration errors. We first point out the danger posed by the multiplicative effect of calibration errors, where large-angle error propagates to small scales and may be significant even if the large-scale information is cleaned or not used in the cosmological analysis. We then propose a method to measure the arbitrary large-scale calibration errors and use these measurements to correct the small-scale (high-multipole) power which is most useful for constraining the majority of cosmological parameters. We demonstrate the effectiveness of our approach on synthetic examples and briefly discuss how it may be applied to real data.

Shafer, Daniel L.; Huterer, Dragan

2015-03-01

267

Errors in Electrocardiographic Parameter Estimation from Standard Leadsets  

E-print Network

Electrocardiographic parameter estimation from standard leadsets is examined, for example for ST segment deviations in patients who underwent coronary angioplasty (PTCA). Key words: electrocardiography, body surface potential mapping, angioplasty.

MacLeod, Rob S.

268

Estimation of dynamic alignment errors in shipboard fire-control systems  

Microsoft Academic Search

A problem in fire control systems is the estimation of the relative alignment between remotely located target sensor and weapon coordinate frames. This paper describes and analyzes a shipboard system concept for estimating initial misalignments and compensating for the ship's dynamic bending and flexure. The concept uses miniature strapdown inertial sensor assemblies at remote sensor/weapon stations to monitor instantaneous differences in rotational

B. H. Browne; D. H. Lackowski

1976-01-01

269

A posteriori error estimates for the Johnson–Nédélec FEM–BEM coupling  

PubMed Central

Only very recently, Sayas [The validity of Johnson–Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM J Numer Anal 2009;47:3451–63] proved that the Johnson–Nédélec one-equation approach from [On the coupling of boundary integral and finite element methods. Math Comput 1980;35:1063–79] provides a stable coupling of finite element method (FEM) and boundary element method (BEM). In our work, we now adapt the analytical results for different a posteriori error estimates developed for the symmetric FEM–BEM coupling to the Johnson–Nédélec coupling. More precisely, we analyze the weighted-residual error estimator, the two-level error estimator, and different versions of (h-h/2)-based error estimators. In numerical experiments, we use these estimators to steer h-adaptive algorithms, and compare the effectivity of the different approaches. PMID:22347772

Aurada, M.; Feischl, M.; Karkulik, M.; Praetorius, D.

2012-01-01

270

STATISTICAL MODEL OF SYSTEMATIC ERRORS: AN ASSESSMENT OF THE BA-CU AND CU-Y PHASE DIAGRAM  

E-print Network

Phase diagram assessments are affected by systematic experimental errors. The application of a method devised in mathematical statistics, i.e., the estimation of variance components, allows us to treat this problem. In the present work, the use of this method is demonstrated in an assessment of the Ba-Cu and Cu-Y phase diagrams, including the quality of the fit. Keywords: random factor, mixed model, estimation of variance components, maximum likelihood.

Rudnyi, Evgenii B.

271

The bootstrap method in space physics: Error estimation for the minimum variance analysis  

NASA Astrophysics Data System (ADS)

The minimum variance analysis technique introduced by Sonnerup and Cahill (1967) is a useful tool in space physics. However, the statistical errors appearing in this method are difficult to estimate accurately because of the complicated form of the eigenvalue decomposition. To deal with this problem, this paper introduces the bootstrap method (Efron, 1979), which replaces analytical solutions with repeated simple calculations. We apply this method to the estimation of the statistical errors in the minimum variance direction and the average component in the minimum variance direction, and show that this method accurately estimates the errors.

Kawano, H.; Higuchi, T.

1995-02-01
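
The bootstrap recipe in the record above replaces analytic error formulas with repeated resampling. A minimal Python sketch on synthetic field samples (the anisotropic variances, sample sizes, and resample counts are made up for illustration):

import numpy as np

rng = np.random.default_rng(2)
B = rng.normal(0, [3.0, 2.0, 0.3], size=(400, 3))    # smallest variance along z

def min_var_direction(X):
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))   # ascending eigenvalues
    return V[:, 0]                                   # minimum variance direction

n0 = min_var_direction(B)
angles = []
for _ in range(1000):                                # bootstrap resamples
    idx = rng.integers(0, len(B), len(B))
    n = min_var_direction(B[idx])
    n = n if n @ n0 > 0 else -n                      # fix eigenvector sign
    angles.append(np.degrees(np.arccos(np.clip(n @ n0, -1.0, 1.0))))
print(f"95% of resampled directions lie within {np.percentile(angles, 95):.2f} deg")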

272

A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint  

NASA Technical Reports Server (NTRS)

This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

Barth, Timothy

2004-01-01

273

Error estimation and adaptive mesh refinement for parallel analysis of shell structures  

NASA Technical Reports Server (NTRS)

The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and they can drive adaptive mesh refinement, which we demonstrate for two-dimensional plane stress problems. The parallelizing of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

1994-01-01

274

Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems  

NASA Technical Reports Server (NTRS)

Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed and techniques for reducing or eliminating this distortion are described.

Harwit, M.

1977-01-01

275

The estimation error covariance matrix for the ideal state reconstructor with measurement noise  

NASA Technical Reports Server (NTRS)

A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.

Polites, Michael E.

1988-01-01
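
The qualitative conclusion of the record above, that more measurements give a better estimator, can be illustrated with the standard weighted least-squares error covariance P = (H^T R^-1 H)^-1. This is a generic textbook expression, not the Ideal State Reconstructor itself, and the system below is hypothetical.

import numpy as np

rng = np.random.default_rng(3)
sigma = 0.5                               # measurement noise std

def estimator_cov(m):
    H = rng.normal(size=(m, 2))           # m measurements of a 2-state system
    R_inv = np.eye(m) / sigma**2
    return np.linalg.inv(H.T @ R_inv @ H)

for m in (2, 5, 20, 100):
    print(f"m={m:4d}  trace(P)={np.trace(estimator_cov(m)):.4f}")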

276

Error Rate Based SNR Estimation in the Railway Satellite Communication Environment  

Microsoft Academic Search

The potential of an SNR estimation technique for use in links subject to multipath propagation and periodic shadowing is investigated. The proposed SNR estimator relies on the bit error rate of un-coded pilot sequences. The estimator's performance in the presence of Ricean fading, periodic shadowing and rain attenuation is analyzed by means of simulations. The proposed technique is shown to

Barry G. Evans

2008-01-01
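
A sketch of the basic idea, estimating SNR by inverting the error-rate curve of an uncoded pilot, using the textbook BPSK/AWGN relation BER = Q(sqrt(2*SNR)). The fading and shadowing of the actual study are omitted, and all parameters are illustrative.

import numpy as np
from scipy.special import erfcinv

rng = np.random.default_rng(4)
snr = 10 ** (6.0 / 10)                     # true SNR: 6 dB

# simulate an uncoded BPSK pilot over AWGN
n_pilot = 200_000
bits = rng.integers(0, 2, n_pilot)
rx = (2 * bits - 1) + rng.normal(0, np.sqrt(1 / (2 * snr)), n_pilot)
ber = np.mean((rx > 0).astype(int) != bits)

# invert BER = Q(sqrt(2*SNR)) using Qinv(p) = sqrt(2)*erfcinv(2p)
snr_est = erfcinv(2 * ber) ** 2
print(f"measured BER={ber:.2e}, estimated SNR={10*np.log10(snr_est):.2f} dB")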

277

Mesoscale predictability and background error covariance estimation through ensemble forecasting  

E-print Network

Over the past decade, ensemble forecasting has emerged as a powerful tool for numerical weather prediction. Not only does it produce the best estimate of the state of the atmosphere, it also could quantify the uncertainties associated with the best...

Ham, Joy L

2002-01-01

278

Dust-Induced Systematic Errors in Ultraviolet-Derived Star Formation Rates  

E-print Network

Rest-frame far-ultraviolet (FUV) luminosities form the `backbone' of our understanding of star formation at all cosmic epochs. These luminosities are typically corrected for dust by assuming that the tight relationship between the UV spectral slopes and the FUV attenuations of starburst galaxies applies for all star-forming galaxies. Data from seven independent UV experiments demonstrates that quiescent, `normal' star-forming galaxies deviate substantially from the starburst galaxy spectral slope-attenuation correlation, in the sense that normal galaxies are redder than starbursts. Spatially resolved data for the Large Magellanic Cloud suggests that dust geometry and properties, coupled with a small contribution from older stellar populations, cause deviations from the starburst galaxy spectral slope-attenuation correlation. Folding in data for starbursts and ultra-luminous infrared galaxies, it is clear that neither rest-frame UV-optical colors nor UV/H-alpha significantly help in constraining the UV attenuation. These results argue that the estimation of SF rates from rest-frame UV and optical data alone is subject to large (factors of at least a few) systematic uncertainties because of dust, which cannot be reliably corrected for using only UV/optical diagnostics.

Eric F. Bell

2002-05-24

279

On the error incurred using the bootstrap variance estimate when constructing confidence intervals for quantiles  

Microsoft Academic Search

We show that the coverage error of confidence intervals and level error of hypothesis tests for population quantiles constructed using the bootstrap estimate of sample quantile variance is of precise order n^{-1/2} in both one- and two-sided cases. This contrasts markedly with more classical problems, where the error is of order n^{-1/2} in the one-sided case, but n^{-1} in the

Peter Hall; Michael A. Martin

1991-01-01
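
The construction analyzed in the record above, a normal-theory confidence interval built from the bootstrap estimate of the sample quantile's variance, looks like this in Python (synthetic data; the quantile level and sample size are arbitrary):

import numpy as np

rng = np.random.default_rng(5)
x = rng.exponential(size=200)
q = 0.9
qhat = np.quantile(x, q)

# bootstrap estimate of the sample quantile's standard error
boot = np.array([np.quantile(rng.choice(x, x.size, replace=True), q)
                 for _ in range(2000)])
se = boot.std(ddof=1)
print(f"90th percentile: {qhat:.3f} +/- {1.96 * se:.3f} (normal-theory 95% CI)")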

280

On residual-based a posteriori error estimation in hp-FEM  

Microsoft Academic Search

A family of residual-based error indicators, indexed by a parameter in [0,1], for the hp-version of the finite element method is presented and analyzed. Upper and lower bounds for the error indicators are established. To do so, the well-known Clement/Scott-Zhang interpolation operator is generalized to the hp-context and new polynomial inverse estimates are presented. An hp-adaptive strategy is proposed. Numerical examples illustrate the performance of the error indicators.

Jens Markus Melenk; Barbara I. Wohlmuth

2001-01-01

281

A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems  

NASA Technical Reports Server (NTRS)

This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

Larson, Mats G.; Barth, Timothy J.

1999-01-01

282

Multiclass Bayes error estimation by a feature space sampling technique  

NASA Technical Reports Server (NTRS)

A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported and 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.

Mobasseri, B. G.; Mcgillem, C. D.

1979-01-01
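
A Monte Carlo stand-in for the quantity computed in the record above (the paper uses a combined analytical and numerical integration; here the minimum probability of error of a made-up 3-class, 4-feature Gaussian problem is estimated by classifying samples with the true-statistics Bayes rule):

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(6)
means = [np.zeros(4), np.full(4, 1.5), np.array([3.0, 0.0, 0.0, 0.0])]
covs = [np.eye(4), 2 * np.eye(4), np.eye(4)]
priors = np.array([0.4, 0.3, 0.3])

n = 100_000
labels = rng.choice(3, size=n, p=priors)
X = np.empty((n, 4))
for k in range(3):
    idx = labels == k
    X[idx] = rng.multivariate_normal(means[k], covs[k], size=idx.sum())

# Bayes rule with known class statistics: maximize prior * likelihood
post = np.stack([priors[k] * multivariate_normal(means[k], covs[k]).pdf(X)
                 for k in range(3)], axis=1)
print(f"estimated Bayes error: {np.mean(post.argmax(axis=1) != labels):.4f}")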

283

How well can we estimate error variance of satellite precipitation data around the world?  

NASA Astrophysics Data System (ADS)

Providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating the squared-difference prediction of satellite precipitation (hereafter used synonymously with "error variance") using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. Building on a suite of recent studies that have developed error variance models, the goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and seasons are considered as the governing factors to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested on the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. The regression approach yielded good performance skill with high correlation between simulated and observed error variances. The correlation ranged from 0.46 to 0.98 during the independent validation period. In most cases (~85% of the scenarios), the correlation was higher than 0.72. The error variance models also captured the spatial distribution of observed error variance adequately for all study regions while producing unbiased residual error. The approach is promising for regions where missed precipitation is not a common occurrence in satellite precipitation estimation. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, the quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.

Gebregiorgis, Abebe S.; Hossain, Faisal

2015-03-01
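
A compact sketch of the regression step described above: fit a nonlinear (here power-law) model of error variance as a function of satellite rain rate and score it by correlation. The functional form, data, and parameters are hypothetical stand-ins for the record's model.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
rate = rng.uniform(0.1, 30, 300)                   # satellite rain rate (mm/h)
obs_var = 0.5 * rate ** 1.4 * rng.lognormal(0, 0.2, rate.size)

def model(r, a, b):
    return a * r ** b                              # power-law error variance

(a, b), _ = curve_fit(model, rate, obs_var, p0=(1.0, 1.0))
corr = np.corrcoef(model(rate, a, b), obs_var)[0, 1]
print(f"fitted a={a:.2f}, b={b:.2f}, correlation={corr:.2f}")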

284

RELATING ERROR BOUNDS FOR MAXIMUM CONCENTRATION ESTIMATES TO DIFFUSION METEOROLOGY UNCERTAINTY (JOURNAL VERSION)  

EPA Science Inventory

The paper relates the magnitude of the error bounds of data, used as inputs to a Gaussian dispersion model, to the magnitude of the error bounds of the model output. The research addresses the uncertainty in estimating the maximum concentrations from elevated buoyant sources duri...

285

Estimation of the Mutation Rate during Error-prone Polymerase Chain Reaction  

E-print Network

Error-prone polymerase chain reaction (PCR) is widely used to introduce point mutations during in vitro evolution; the key step of in vitro evolution is mutagenesis, commonly performed by error-prone PCR (Leung et al.). This work concerns estimation of the mutation rate during error-prone PCR.

Sun, Fengzhu - Sun, Fengzhu

286

Influence functions and goal-oriented error estimation for finite element analysis of shell structures  

Microsoft Academic Search

SUMMARY In this paper, we first present a consistent procedure to establish influence functions for the finite element analysis of shell structures, where the influence function can be for any linear quantity of engineering interest. We then design some goal-oriented error measures that take into account the cancellation effect of errors over the domain to overcome the issue of over-estimation.

Thomas Grätsch; Klaus-Jürgen Bathe

2005-01-01

287

A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings  

ERIC Educational Resources Information Center

The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error

Lee, Guemin; Lewis, Daniel M.

2008-01-01

288

Errors and parameter estimation in precipitation-runoff modeling 2. Case study.  

USGS Publications Warehouse

A case study is presented which illustrates some of the error analysis, sensitivity analysis, and parameter estimation procedures reviewed in the first part of this paper. It is shown that those procedures, most of which come from statistical nonlinear regression theory, are invaluable in interpreting errors in precipitation-runoff modeling and in identifying appropriate calibration strategies. -Author

Troutman, B.M.

1985-01-01

289

MODIS Cloud Optical Property Retrieval Uncertainties Derived from Pixel-Level Radiometric Error Estimates  

NASA Technical Reports Server (NTRS)

MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters due to pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.

Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong

2011-01-01

290

Estimating smooth distribution function in the presence of heteroscedastic measurement errors  

PubMed Central

Measurement error occurs in many biomedical fields. The challenges arise when errors are heteroscedastic since we literally have only one observation for each error distribution. This paper concerns the estimation of smooth distribution function when data are contaminated with heteroscedastic errors. We study two types of methods to recover the unknown distribution function: a Fourier-type deconvolution method and a simulation extrapolation (SIMEX) method. The asymptotics of the two estimators are explored and the asymptotic pointwise confidence bands of the SIMEX estimator are obtained. The finite sample performances of the two estimators are evaluated through a simulation study. Finally, we illustrate the methods with medical rehabilitation data from a neuro-muscular electrical stimulation experiment. PMID:20160998

Wang, Xiao-Feng; Fan, Zhaozhi; Wang, Bin

2009-01-01
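
Of the two methods in the record, SIMEX is the easier to sketch: re-add noise at several inflation levels lambda, track how the naive estimate of F(x0) degrades, and extrapolate back to lambda = -1 (the error-free case). Everything below (sample size, noise law, quadratic extrapolant) is an illustrative assumption, not the paper's estimator.

import numpy as np

rng = np.random.default_rng(8)
n = 2000
truth = rng.normal(0, 1, n)                 # latent values
sd = rng.uniform(0.2, 0.8, n)               # known heteroscedastic error sds
w = truth + rng.normal(0, sd)               # contaminated observations
x0 = 0.5                                    # evaluate F at this point

lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = [np.mean([np.mean(w + rng.normal(0, np.sqrt(lam) * sd) <= x0)
                for _ in range(50)]) for lam in lams]

coef = np.polyfit(lams, est, 2)             # quadratic extrapolant in lambda
print("SIMEX estimate of F(0.5):", np.polyval(coef, -1.0))
print("naive estimate:", np.mean(w <= x0), " true value: 0.6915")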

291

Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.  

ERIC Educational Resources Information Center

Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

Olejnik, Stephen F.; Algina, James

1987-01-01

292

PRACTICAL ERROR ESTIMATES FOR REYNOLDS' LUBRICATION APPROXIMATION AND ITS HIGHER ORDER CORRECTIONS  

E-print Network

Reynolds' lubrication approximation is used extensively to study flows in thin geometries; its higher order corrections form an asymptotic series rather than a convergent series. Key words: incompressible flow, lubrication theory.

Wilkening, Jon

293

Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling  

E-print Network

A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model ...

Locatelli, R.

294

Spatio-temporal Error on the Discharge Estimates for the SWOT Mission  

NASA Astrophysics Data System (ADS)

The Surface Water and Ocean Topography (SWOT) mission measures two key quantities over rivers: water surface elevation and slope. Water surface elevation from SWOT will have a vertical accuracy, when averaged over approximately one square kilometer, on the order of centimeters. Over reaches from 1-10 km long, SWOT slope measurements will be accurate to microradians. Elevation (depth) and slope offer the potential to produce discharge as a derived quantity. Estimates of instantaneous and temporally integrated discharge from SWOT data will also contain a certain degree of error. Two primary sources of measurement error exist. The first is the temporal sub-sampling of water elevations. For example, SWOT will sample some locations twice in the 21-day repeat cycle. If these two overpasses occurred during flood stage, an estimate of monthly discharge based on these observations would be much higher than the true value. Likewise, if estimating maximum or minimum monthly discharge, in some cases, SWOT may miss those events completely. The second source of measurement error results from the instrument's capability to accurately measure the magnitude of the water surface elevation. How this error affects discharge estimates depends on errors in the model used to derive discharge from water surface elevation. We present a global distribution of estimated relative errors in mean annual discharge based on a power law relationship between stage and discharge. Additionally, relative errors in integrated and average instantaneous monthly discharge associated with temporal sub-sampling over the proposed orbital tracks are presented for several river basins.

Biancamaria, S.; Alsdorf, D. E.; Andreadis, K. M.; Clark, E.; Durand, M.; Lettenmaier, D. P.; Mognard, N. M.; Oudin, Y.; Rodriguez, E.

2008-12-01
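
The first error source above, temporal sub-sampling, is easy to illustrate: build a synthetic daily hydrograph, sample it only on 21-day-repeat overpass days, and compare monthly means. The hydrograph shape, overpass phase, and month lengths below are all hypothetical.

import numpy as np

rng = np.random.default_rng(9)
days = np.arange(365)
q = (800 + 500 * np.sin(2 * np.pi * (days - 120) / 365)
     + rng.gamma(2.0, 60.0, days.size))    # seasonal cycle plus flood noise

overpass = days[days % 21 == 5]            # one overpass per 21-day cycle
errors = []
for month in range(12):
    in_month = (days >= 30 * month) & (days < 30 * (month + 1))
    sampled = np.intersect1d(days[in_month], overpass)
    if sampled.size:
        truth = q[in_month].mean()
        errors.append(abs(q[sampled].mean() - truth) / truth)
print(f"median relative monthly-discharge error: {100 * np.median(errors):.1f}%")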

295

Error estimates for abstract linear–quadratic optimal control problems using proper orthogonal decomposition  

Microsoft Academic Search

In this paper we investigate POD discretizations of abstract linear–quadratic optimal control problems with control constraints. We apply the discrete technique developed by Hinze (Comput. Optim. Appl. 30:45–61, 2005) and prove error estimates for the corresponding discrete controls, where we combine error estimates for the state and the adjoint system from Kunisch and Volkwein (Numer. Math. 90:117–148, 2001; SIAM J.

M. Hinze; S. Volkwein

2008-01-01

296

Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.  

SciTech Connect

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

2006-10-01

297

Sub-pixel accuracy motion estimation using linear approximate model of the error criterion function  

Microsoft Academic Search

Sub-pixel accuracy motion estimation accounts for a significant portion of the computational complexity of video coding. The error criterion function of motion estimation is well represented by a mathematical expression such as a quadratic or linear model around the optimal point. Pre-computed error criterion values computed at full-pixel accuracy can be used to derive the motion vector

David Nyeongkyu Kwon; Pan Agathoklis; Peter Driessen

2005-01-01
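
The essence of the record above is that a local model of the error criterion function lets full-pixel costs yield a sub-pixel motion vector without further interpolation and search. The classic quadratic-model version (a generic sketch, not the paper's exact linear-model algorithm) fits a parabola through the best cost and its two neighbours:

def subpixel_offset(c_m1, c_0, c_p1):
    # vertex of the parabola through costs at integer offsets -1, 0, +1
    denom = c_m1 - 2 * c_0 + c_p1
    return 0.0 if denom == 0 else 0.5 * (c_m1 - c_p1) / denom

# costs from a quadratic error surface whose true minimum is at +0.3 px
true_dx = 0.3
costs = [(d - true_dx) ** 2 for d in (-1, 0, 1)]
print("recovered sub-pixel offset:", subpixel_offset(*costs))  # 0.3 exactly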

298

Error estimates for MAC-like approximations to the linear Navier-Stokes equations  

Microsoft Academic Search

Summary: In this paper a priori error estimates are derived for the discretization error which results when the linear Navier-Stokes equations are solved by a method which closely resembles the MAC-method of Harlow and Welch. General boundary conditions are permitted and the estimates are in terms of the discrete L2 norm. A solvability result is given which also applies to a

T. A. Porsching

1978-01-01

299

Energy Aware Sensor Group Scheduling to Minimise Estimated Error from Noisy Sensor Measurements  

Microsoft Academic Search

\\u000a In wireless sensor network applications, sensor measurements are corrupted by noise resulting from harsh environmental conditions,\\u000a hardware and transmission errors. Minimising the impact of noise in an energy constrained sensor network is a challenging\\u000a task. We study the problem of estimating environmental phenomena (e.g., temperature, humidity, pressure) based on noisy sensor\\u000a measurements to minimise the estimation error. An environmental phenomenon

Siddeswara Mayura Guru; Suhinthan Maheswararajah

300

Identification of Errors in 3D Building Models by a Robust Camera Pose Estimation  

NASA Astrophysics Data System (ADS)

This paper presents a method for identification of errors in 3D building models which result from an inaccurate creation process. Error detection is carried out within the camera pose estimation. As observations, parameters of the building corners and of the line segments detected in the image are used, and conditions for the coplanarity of corresponding edges are defined. For the estimation, the uncertainty of the 3D building models and image features is taken into account.

Iwaszczuk, D.; Stilla, U.

2014-08-01

301

Anova-type consistent estimators of variance components in unbalanced multi-way error components models  

Microsoft Academic Search

This paper introduces three new Anova-type consistent estimators of variance components for use in multi-way unbalanced error components models, with possibly non-normal errors and endogenous regressors. They are easy to compute and are proved to be consistent under mild regularity conditions. For the first time proofs of consistency for Anova estimators are offered under such a general class of models,

Giovanni S. F. Bruno

2010-01-01

302

Error Estimates in Horocycle Averages Asymptotics: Challenges from String Theory  

E-print Network

There is an intriguing connection between the dynamics of the horocycle flow in the modular surface $SL_{2}(\pmb{Z}) \backslash SL_{2}(\pmb{R})$ and the Riemann hypothesis. It appears in the error term for the asymptotic of the horocycle average of a modular function of rapid decay. We study whether similar results occur for a broader class of modular functions, including functions of polynomial growth, and of exponential growth at the cusp. Hints on their long horocycle average are derived by translating the horocycle flow dynamical problem in string theory language. Results are then proved by designing an unfolding trick involving a Theta series, related to the spectral Eisenstein series by Mellin integral transform. We discuss how the string theory point of view leads to an interesting open question, regarding the behavior of long horocycle averages of a certain class of automorphic forms of exponential growth at the cusp.

Matteo A. Cardella

2011-05-12

303

Asymptotic estimation error growth applied to monitoring. [Water pollution monitoring]  

Microsoft Academic Search

Problems in the design of optimal measurement systems for monitoring pollutant levels in a confined aquifer are discussed. It is assumed that a two-dimensional region Z exists within an aquifer into which a collection of stochastic and deterministic point sources are injecting a pollutant species. It is required to design a monitoring system whereby best estimates of the pollutant concentration

Pimentel

1978-01-01

304

Distributed bounded-error parameter and state estimation  

E-print Network

Contrary to centralized estimation, where all data are collected at a central processing unit, here each node processes its own data. Early applications were mainly military (e.g., surveillance); now, many civilian applications (environment monitoring, home automation, traffic control) may benefit from distributed estimation. Contrary to TOA, TDOA or AOA data, readings of signal strength (RSS) at a given sensor do not require multiple antennas (as AOA does).

Paris-Sud XI, Université de

305

Gap filling strategies and error in estimating annual soil respiration  

Technology Transfer Automated Retrieval System (TEKTRAN)

Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...

306

Strong spatial Lp error estimates in computing the  

E-print Network

Phase-field model of phase transition in a solidification process, following Chen [1994]: $u_t - \Delta u + (u^3 - u)/\epsilon^2 = w$. Phase-field models of phase separation and geometric motions are treated in Rubinstein et al. [1989] and Evans et al.; a real-life dendrite lab picture is shown alongside a phase-field simulation.

Lakkis, Omar

307

Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates  

SciTech Connect

We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with $N_h$ triangles, the error is proportional to $N_h^{-1}$ and the gradient of error is proportional to $N_h^{-1/2}$, which are optimal asymptotics. The methodology is verified with numerical experiments.

Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory

2009-01-01

308

Explicit a posteriori error estimates for eigenvalue analysis of heterogeneous elastic structures.  

SciTech Connect

An a posteriori error estimator is developed for the eigenvalue analysis of three-dimensional heterogeneous elastic structures. It constitutes an extension of a well-known explicit estimator to heterogeneous structures. We prove that our estimates are independent of the variations in material properties and independent of the polynomial degree of finite elements. Finally, we study numerically the effectivity of this estimator on several model problems.

Walsh, Timothy Francis; Reese, Garth M.; Hetmaniuk, Ulrich L.

2005-07-01

309

Identification and Estimation of Nonlinear Models Using Two Samples with Nonclassical Measurement Errors  

PubMed Central

This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest — the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates — is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach. PMID:20495685

Carroll, Raymond J.; Chen, Xiaohong; Hu, Yingyao

2010-01-01

310

On the estimation of radar rainfall error variance  

Microsoft Academic Search

One of the major problems in radar rainfall (RR) estimation is the lack of accurate reference data on area-averaged rainfall. Radar–raingauge (R–G) comparisons are commonly used to assess and to validate the radar algorithms, but large differences of the spatial resolution between raingauge and radar measurements prevent any straightforward interpretation of the results. We assume that the R–G difference variance

Grzegorz J. Ciach; Witold F. Krajewski

1999-01-01
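
The abstract is truncated, but the variance-separation idea it sets up can be illustrated: if radar and gauge errors are mutually independent, the variance of the radar-gauge difference splits into the two error variances, so the radar error variance follows by subtracting an independently known gauge error variance. A hedged sketch on synthetic data (all numbers invented; this shows the general identity, not necessarily Ciach and Krajewski's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "true" areal rainfall and two error-corrupted observations of it.
truth = rng.gamma(shape=2.0, scale=3.0, size=5_000)     # mm/h
radar = truth + rng.normal(0.0, 1.5, truth.size)        # radar estimate
gauge = truth + rng.normal(0.0, 0.8, truth.size)        # point gauge

# If radar and gauge errors are mutually independent,
#   Var(R - G) = Var(radar error) + Var(gauge error),
# so the radar error variance follows by subtracting an independent
# estimate of the gauge (representativeness) error variance.
var_rg = np.var(radar - gauge, ddof=1)
var_gauge_err = 0.8**2          # assumed known from gauge-pair studies
var_radar_err = var_rg - var_gauge_err

print(f"Var(R-G) = {var_rg:.2f}, inferred radar error variance = {var_radar_err:.2f}")
```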

311

A non-orthogonal SVD-based decomposition for phase invariant error-related potential estimation.  

PubMed

The estimation of the Error Related Potential from a set of trials is a challenging problem. Indeed, the Error Related Potential is of low amplitude compared to the ongoing electroencephalographic activity. In addition, simple summing over the different trials is prone to errors, since the waveform does not appear at an exact latency with respect to the trigger. In this work, we propose a method to cope with the discrepancy of these latencies of the Error Related Potential waveform and offer a framework in which the estimation of the Error Related Potential waveform reduces to a simple Singular Value Decomposition of an analytic waveform representation of the observed signal. The proposed approach is promising, since we are able to explain a higher portion of the variance of the observed signal with fewer components in the expansion. PMID:22255940

Phlypo, Ronald; Jrad, Nisrine; Rousseau, Sandra; Congedo, Marco

2011-01-01
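
A rough sketch of the core idea, assuming the analytic waveform representation is obtained with a Hilbert transform and the decomposition is an ordinary (orthogonal) SVD; the paper's non-orthogonal variant differs in detail, and all signal parameters below are invented for illustration:

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
n_trials, n_samples, fs = 60, 256, 256.0
t = np.arange(n_samples) / fs

# Synthetic trials: a 6 Hz burst whose latency jitters from trial to trial,
# buried in ongoing-EEG-like noise.
trials = np.empty((n_trials, n_samples))
for k in range(n_trials):
    lag = rng.normal(0.0, 0.02)                      # latency jitter (s)
    trials[k] = (np.exp(-((t - 0.3 - lag) ** 2) / 0.005)
                 * np.cos(2.0 * np.pi * 6.0 * (t - 0.3 - lag))
                 + rng.normal(0.0, 0.5, n_samples))

# Analytic representation: small latency shifts become complex phase
# factors, so a rank-one model can absorb them.
Z = hilbert(trials, axis=1)
U, s, Vh = np.linalg.svd(Z, full_matrices=False)

waveform = Vh[0]                 # ERP waveform estimate, up to phase/scale
print("variance explained by the first component:",
      round(float(s[0] ** 2 / np.sum(s ** 2)), 3))
```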

312

Uncertainty quantification in a chemical system using error estimate-based mesh adaption  

NASA Astrophysics Data System (ADS)

This paper describes a rigorous a posteriori error analysis for the stochastic solution of non-linear uncertain chemical models. The dual-based a posteriori stochastic error analysis extends the methodology developed in the deterministic finite elements context to stochastic discretization frameworks. It requires the resolution of two additional (dual) problems to yield the local error estimate. The stochastic error estimate can then be used to adapt the stochastic discretization. Different anisotropic refinement strategies are proposed, leading to a cost-efficient tool suitable for multi-dimensional problems of moderate stochastic dimension. The adaptive strategies allow both for refinement and coarsening of the stochastic discretization, as needed to satisfy a prescribed error tolerance. The adaptive strategies were successfully tested on a model of hydrogen oxidation under supercritical conditions with 8 random parameters. The proposed methodologies are, however, general enough to be applicable to a wide class of models, such as uncertain fluid flows.

Mathelin, Lionel; Le Maître, Olivier P.

2012-10-01

313

A posteriori error estimators for the discrete ordinates approximation of the one-speed neutron transport equation  

SciTech Connect

When calculating numerical solutions of the neutron transport equation it is important to have a measure of the accuracy of the solution. As the true solution is generally not known, a suitable estimation of the error must be made. The steady-state transport equation possesses discretization errors in all its independent variables: angle, energy and space. In this work only spatial discretization errors are considered. An exact transport solution, in which the degree of regularity of the exact flux across the singular characteristic is controlled, is manufactured to determine the numerical solution's true discretization error. This solution is then projected onto a Legendre polynomial space in order to form an exact solution on the same basis space as the numerical solution, obtained with the Discontinuous Galerkin Finite Element Method (DGFEM), to enable computation of the true error. Over a series of test problems the true error is compared to the error estimated by the Ragusa and Wang (RW), residual source (LER) and cell discontinuity (JD) estimators. The validity and accuracy of the considered estimators are primarily assessed by considering the effectivity index and the global L2 norm of the error. In general, RW excels at approximating the true error distribution but usually under-estimates its magnitude; the LER estimator emulates the true error distribution but frequently over-estimates the magnitude of the true error; the JD estimator poorly captures the true error distribution and generally under-estimates the error about singular characteristics but over-estimates it elsewhere. (authors)

O'Brien, S.; Azmy, Y. Y. [North Carolina State University, Raleigh, NC 27695 (United States)

2013-07-01

314

Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.  

PubMed

Bayesian estimation applied to temperature-based death time estimation was recently introduced as the conditional probability distribution (CPD) method by Biermann and Potente. The CPD method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observe a paradox: in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise the CPD-computed probabilities will decrease. We therefore advise against using CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95%-confidence intervals of the estimates still overlap the true death time interval. PMID:24662512

Hubig, Michael; Muggenthaler, Holger; Mall, Gita

2014-05-01
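
A minimal sketch of the CPD idea and its sensitivity to input error, assuming the death time estimate is modeled as a normal distribution truncated to the true death time interval (the authors' exact formulation may differ). It reproduces the reported paradox: for a no-alibi interval adjacent to the interval boundary, the computed probability grows as the estimate error grows:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cpd(a, b, t1, t2, t_hat, sigma):
    """P(death in [a,b] | death in [t1,t2]) for a normal estimate t_hat +- sigma."""
    num = Phi((b - t_hat) / sigma) - Phi((a - t_hat) / sigma)
    den = Phi((t2 - t_hat) / sigma) - Phi((t1 - t_hat) / sigma)
    return num / den

# True interval: last seen alive 0 h, found dead 20 h (hours post-mortem).
# No-alibi interval of a suspect: 16-18 h, adjacent to the interval boundary.
for delta in (0.0, 1.0, 2.0, 3.0):       # error added to the 10 h estimate
    p = cpd(16, 18, 0, 20, 10 + delta, sigma=2.5)
    print(f"estimate error {delta:+.0f} h -> CPD probability {p:.3f}")
```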

315

Estimates of errors of a gyroscope stabilized platform  

NASA Astrophysics Data System (ADS)

A gyrostabilized platform has a four-frame cardan suspension in which one of the dynamically adjusted gyroscopes placed on the stabilized platform measures the angle of its deviation in the plane of the platform, while the second such gyroscope measures the deviation relative to this plane. The redundant first gyro can be used to correct the system and may also be a closed system itself. This paper studies the errors in the gyrostabilized platform due to the nonperpendicularity of the axes of the platform's cardan suspension, as well as the imbalance of the components and the dynamically adjusted gyroscopes. The cumbersome equations of motion for the system are written, neglecting dry frictional forces in the shafts of the platform suspension, second-order nonlinearities relative to the angular coordinates and their derivatives, as well as terms with periodic coefficients which can affect the dynamics of the platform only in narrow ranges of frequency variations at parametric resonances.

Zbrutskiy, A. V.; Balabanov, I. V.

1984-08-01

316

Corrected-loss estimation for quantile regression with covariate measurement errors  

PubMed Central

We study estimation in quantile regression when covariates are measured with errors. Existing methods require stringent assumptions, such as spherically symmetric joint distribution of the regression and measurement error variables, or linearity of all quantile functions, which restrict model flexibility and complicate computation. In this paper, we develop a new estimation approach based on corrected scores to account for a class of covariate measurement errors in quantile regression. The proposed method is simple to implement. Its validity requires only linearity of the particular quantile function of interest, and it requires no parametric assumptions on the regression error distributions. Finite-sample results demonstrate that the proposed estimators are more efficient than the existing methods in various models considered. PMID:23843665

Wang, Huixia Judy; Stefanski, Leonard A.; Zhu, Zhongyi

2012-01-01

317

Systematic Review and Harmonization of Life Cycle GHG Emission Estimates for Electricity Generation Technologies (Presentation)  

SciTech Connect

This powerpoint presentation to be presented at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses systematic review and harmonization of life cycle GHG emission estimates for electricity generation technologies.

Heath, G.

2012-06-01

318

Efficient Small Area Estimation in the Presence of Measurement Error in Covariates  

E-print Network

for this purpose. In this dissertation, each project describes a model used for small area estimation in which the covariates are measured with error. We applied different methods of bias correction to improve the estimates of the parameter of interest in the small...

Singh, Trijya

2012-10-19

319

Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28  

ERIC Educational Resources Information Center

Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

Guo, Hongwen; Sinharay, Sandip

2011-01-01

320

Reasons for software effort estimation error: impact of respondent role, information  

E-print Network

Extract: recorded fields include a brief description of the estimate and, after project completion, the actual effort and reasons for high or low ... From "Reasons for software effort estimation error: impact of respondent role, information collection approach, and data analysis method", Magne Jørgensen and Kjetil Moløkken-Østvold, IEEE Transactions ...

Bae, Doo-Hwan

321

Toward Understanding and Reducing Errors in Real-Time Estimation of Travel Times  

E-print Network

Extract: ... traveler information to the public. Many states as well as private contractors are providing real-time travel time estimates to commuters to help improve the quality and efficiency of their trips. Accuracy ...

Bertini, Robert L.

322

A posteriori error estimations of a SUPG method for anisotropic diffusion–convection–reaction problems  

Microsoft Academic Search

This Note presents an a posteriori residual error estimator for diffusion–convection–reaction problems approximated by a SUPG scheme on isotropic or anisotropic meshes in R^d, d = 2 or 3. This estimator is based on the jump of the flux and the interior residual of the approximated solution. It is constructed to work on anisotropic meshes which account for the possible anisotropic behavior

Thomas Apel; Serge Nicaise

2007-01-01

323

Error backpropagation neural calibration and Kalman filter position estimation for touch panels  

Microsoft Academic Search

This work develops a methodology and technique for calibration and dynamic touch-position estimation of touch panels using error backpropagation neural networks (EBPNN) and a Kalman filter. A neural-based calibration method is presented to determine the nonlinear mapping relationships between the measured and known touch points, and then calibrate their positions in real time. In order to obtain position estimation

Chih-Chang Lai; Ching-Chih Tsai; Han-Chang Lin

2004-01-01
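
The abstract is truncated before the position estimation part, but the Kalman filter side can be sketched generically: a constant-velocity state model smoothing noisy calibrated touch coordinates. The model structure, tuning constants and data below are assumptions for illustration, not the authors' design:

```python
import numpy as np

def kalman_track(measurements, dt=0.01, q=50.0, r=4.0):
    """Constant-velocity Kalman filter smoothing 2D touch coordinates."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # state: [x, y, vx, vy]
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # we observe x, y only
    Q = q * np.eye(4)                              # process noise (tuning)
    R = r * np.eye(2)                              # measurement noise
    x = np.zeros(4); P = np.eye(4) * 100.0
    out = []
    for z in measurements:
        x = F @ x; P = F @ P @ F.T + Q             # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
        x = x + K @ (np.asarray(z) - H @ x)        # update with measurement
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2].copy())
    return np.array(out)

# Noisy calibrated touch points along a diagonal stroke.
rng = np.random.default_rng(8)
true_path = np.linspace([0.0, 0.0], [100.0, 60.0], 50)
print(kalman_track(true_path + rng.normal(0, 2, true_path.shape))[-1])
```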

324

Evaluation of the systematic error in using 3D dose calculation in scanning beam proton therapy for lung cancer  

PubMed Central

The objective of this study was to evaluate and understand the systematic error between the planned three-dimensional (3D) dose and the delivered dose to patient in scanning beam proton therapy for lung tumors. Single-field and multi-field optimized scanning beam proton therapy plans were generated for 10 patients with stage II–III lung cancer with a mix of tumor motion and size. 3D doses in CT data sets for different respiratory phases and the time weighted average CT, as well as the four-dimensional (4D) doses were computed for both plans. The 3D and 4D dose differences for the targets and different organs at risk were compared using dose volume histogram (DVH) and voxel-based techniques and correlated with the extent of tumor motion. The gross tumor volume (GTV) dose was maintained in all 3D and 4D doses using the internal GTV override technique. The DVH and voxel-based techniques are highly correlated. The mean dose error and the standard deviation of dose error for all target volumes were both less than 1.5% for all but one patient. However, the point dose difference between the 3D and 4D doses was up to 6% for the GTV and greater than 10% for the clinical and planning target volumes. Changes in the 4D and 3D doses were not correlated with tumor motion. The planning technique (single-field or multi-field optimized) did not affect the observed systematic error. In conclusion, the dose error in 3D dose calculation varies from patient to patient and does not correlate with lung tumor motion. Therefore, patient-specific evaluation of the 4D dose is important for scanning beam proton therapy for lung tumors. PMID:25207565

Li, Heng; Liu, Wei; Park, Peter; Matney, Jason; Liao, Zhongxing; Chang, Joe; Zhang, Xiaodong; Li, Yupeng; Zhu, Ronald X

2014-01-01
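
A sketch of the voxel-based and DVH-based comparisons described above, on synthetic dose grids (the dose values and error model are invented; real 3D and 4D doses would come from the treatment planning system):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical 3D and 4D dose grids (Gy) over the target's voxels.
dose_3d = rng.normal(60.0, 1.0, size=(40, 40, 20))
dose_4d = dose_3d + rng.normal(0.0, 0.5, size=dose_3d.shape)   # motion effect

# Voxel-based comparison: relative dose-error statistics.
rel_err = 100.0 * (dose_4d - dose_3d) / dose_3d.mean()
print(f"mean dose error   : {rel_err.mean():+.2f} %")
print(f"std of dose error : {rel_err.std(ddof=1):.2f} %")
print(f"worst single voxel: {np.abs(rel_err).max():.2f} %")

# DVH-based comparison: fraction of voxels at or above each dose level.
levels = np.linspace(55.0, 65.0, 11)
dvh_3d = [(dose_3d >= d).mean() for d in levels]
dvh_4d = [(dose_4d >= d).mean() for d in levels]
print("max DVH difference:", max(abs(a - b) for a, b in zip(dvh_3d, dvh_4d)))
```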

325

Regional estimation of groundwater arsenic concentrations through systematical dynamic-neural modeling  

NASA Astrophysics Data System (ADS)

Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, and over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) approach for effectively estimating regional As-contaminated water quality by using easily measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM combines a neural network, the Nonlinear Autoregressive with eXogenous input (NARX) network, with four statistical techniques: the Gamma test, cross-validation, Bayesian regularization and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (root mean square error (RMSE): 95.11 µg l⁻¹ for training; 106.13 µg l⁻¹ for validation) outperforms the BPNN (RMSE: 121.54 µg l⁻¹ for training; 143.37 µg l⁻¹ for validation). The constructed SDM can provide reliable estimation (R² > 0.89) of As concentration at ungauged sites based merely on three easily measured water quality variables (Alk, Ca²⁺ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 µg l⁻¹) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to regional estimation in study areas of interest and to the estimation of missing, hazardous or costly data to facilitate water resources management.

Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min

2013-08-01

326

Patients' willingness and ability to participate actively in the reduction of clinical errors: a systematic literature review.  

PubMed

This systematic review identifies the factors that both support and deter patients from being willing and able to participate actively in reducing clinical errors. Specifically, we add to our understanding of the safety culture in healthcare by engaging with the call for more focus on the relational and subjective factors which enable patients' participation (Iedema, Jorm, & Lum, 2009; Ovretveit, 2009). A systematic search of six databases, ten journals and seven healthcare organisations' web sites resulted in the identification of 2714 studies of which 68 were included in the review. These studies investigated initiatives involving patients in safety or studies of patients' perspectives of being actively involved in the safety of their care. The factors explored varied considerably depending on the scope, setting and context of the study. Using thematic analysis we synthesized the data to build an explanation of why, when and how patients are likely to engage actively in helping to reduce clinical errors. The findings show that the main factors for engaging patients in their own safety can be summarised in four categories: illness; individual cognitive characteristics; the clinician-patient relationship; and organisational factors. We conclude that illness and patients' perceptions of their role and status as subordinate to that of clinicians are the most important barriers to their involvement in error reduction. In sum, patients' fear of being labelled "difficult" and a consequent desire for clinicians' approbation may cause them to assume a passive role as a means of actively protecting their personal safety. PMID:22541799

Doherty, Carole; Stavropoulou, Charitini

2012-07-01

327

Solving large tomographic linear systems: size reduction and error estimation  

NASA Astrophysics Data System (ADS)

We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.

Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust

2014-10-01
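
A toy version of the pipeline, under several simplifying assumptions: plain k-means stands in for the paper's geographic clustering of ray paths, the noise level is taken as known, and the SNR test is applied per projected datum. With A the sensitivity matrix and d the data vector:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(3)
A = rng.standard_normal((2_000, 300))     # stand-in sensitivity matrix
d = A @ (0.01 * rng.standard_normal(300)) + rng.normal(0.0, 0.1, 2_000)

# 1) Cluster matrix rows; the paper groups geographically close ray
#    paths, here k-means on the rows serves as a stand-in.
n_clusters = 20
_, labels = kmeans2(A, n_clusters, minit="++", seed=3)

# 2) Per cluster: SVD, project data onto the singular directions, and
#    reject projected data whose signal-to-noise ratio is below 1.
noise, snr_min = 0.1, 1.0
blocks, rhs = [], []
for c in range(n_clusters):
    rows = labels == c
    U, s, Vt = np.linalg.svd(A[rows], full_matrices=False)
    proj = U.T @ d[rows]                   # data in the subset eigenbasis
    keep = np.abs(proj) / noise > snr_min  # SNR-based rejection
    blocks.append(np.diag(s[keep]) @ Vt[keep])
    rhs.append(proj[keep])

A_red, d_red = np.vstack(blocks), np.concatenate(rhs)
print(f"rows: {A.shape[0]} -> {A_red.shape[0]}")
```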

328

Robust Henderson III Estimators of Variance Components in the Nested Error Model  

Microsoft Academic Search

This work deals with robust estimation of variance components under a nested error model. Traditional estimation methods include Maximum Likelihood (ML), Restricted Maximum Likelihood (REML) and Henderson method III (H3). However, when outliers are present, these methods deliver estimators with poor properties. Robust modifications of the ML and REML have been proposed in the literature (see for example, Fellner [3],

Betsabé Pérez; Daniel Peña; Isabel Molina

2011-01-01

329

Eigenvector method for maximum-likelihood estimation of phase errors in synthetic-aperture-radar imagery  

Microsoft Academic Search

We develop a maximum-likelihood (ML) algorithm for estimation and correction (autofocus) of phase errors induced in synthetic-aperture-radar (SAR) imagery. Here, M pulse vectors in the range-compressed domain are used as input for simultaneously estimating M−1 phase values across the aperture. The solution involves an eigenvector of the sample covariance matrix of the range-compressed data. The estimator is then used within

Charles V. Jakowatz Jr.; Daniel E. Wahl

1993-01-01
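
A compact numerical sketch of the estimator, assuming the idealized signal model in which each range bin contains one dominant scatterer multiplied by the common aperture phase error; the phase of the principal eigenvector of the sample covariance then recovers that error up to constant and linear terms (which only shift the image). All sizes and noise levels below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
M, N = 64, 512                  # aperture positions (pulses), range bins
phi = np.cumsum(rng.normal(0.0, 0.3, M))          # slowly varying phase error

# Idealized range-compressed data: one dominant scatterer per range bin,
# multiplied across the aperture by the common phase error, plus noise.
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)
G = np.exp(1j * phi)[:, None] * a[None, :]
G += 0.3 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# ML phase estimate: phase of the principal eigenvector of the sample
# covariance matrix of the pulse vectors.
C = G @ G.conj().T / N
eigvals, eigvecs = np.linalg.eigh(C)
phi_hat = np.unwrap(np.angle(eigvecs[:, -1]))     # dominant eigenvector

# Constant and linear phase terms are irrelevant (image shift): remove
# them before comparing with the true phase error.
resid = phi_hat - phi
resid -= np.polyval(np.polyfit(np.arange(M), resid, 1), np.arange(M))
print(f"rms residual phase error: {np.sqrt(np.mean(resid**2)):.3f} rad")
```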

330

Systematic evaluation of autoregressive error models as post-processors for a probabilistic streamflow forecast system  

NASA Astrophysics Data System (ADS)

A post-processor is necessary in a probabilistic streamflow forecast system to account for the hydrologic uncertainty introduced by the hydrological model. In this study, different variants of an autoregressive error model that can be used as post-processors for short- to medium-range streamflow forecasts are evaluated. The deterministic HBV model is used to form the basis for the streamflow forecast. The general structure of the error models used as post-processors is a first-order autoregressive model of the form d_t = α·d_{t-1} + σ·ε_t, where d_t is the model error (observed minus simulated streamflow) at time t, α and σ are the parameters of the error model, and ε_t is the residual error described through a probability distribution. The following aspects are investigated: (1) use of constant parameters α and σ versus state-dependent parameters, which vary depending on the states of temperature, precipitation, snow water equivalent and simulated streamflow; (2) use of a Standard Normal distribution for ε_t versus an empirical distribution function constituted through the normalized residuals of the error model in the calibration period; (3) comparison of two different transformations, logarithmic versus square root, applied to the streamflow data before the error model is applied; the reason for applying a transformation is to make the residuals of the error model homoscedastic over the range of streamflow values of different magnitudes. Through combination of these three characteristics, eight variants of the autoregressive post-processor are generated. These are calibrated and validated in 55 catchments throughout Norway. The discrete ranked probability score with 99 flow percentiles as standardized thresholds is used for evaluation. In addition, a non-parametric bootstrap is used to construct confidence intervals and evaluate the significance of the results. The main findings of the study are: (1) error models with state-dependent parameters perform significantly better than corresponding models with constant parameters; (2) error models using empirical distribution functions perform significantly better than corresponding models using a Standard Normal distribution; (3) for error models with constant parameters, those with logarithmic transformation perform significantly better than those with square root transformation; however, for models with state-dependent parameters this significance disappears and there is no difference in the performance of the logarithmic versus the square root transformation. The explanation is found in the flexibility introduced by the state-dependent parameters, which can account for and alleviate the more non-homoscedastic behaviour found for the square root transformation. The findings are derived from the application of the error models to Norwegian catchments with the HBV model as the deterministic rainfall-runoff model; however, it is anticipated that similar findings can be made in other regions and with other rainfall-runoff models. Thus, the findings provide guidelines on how to construct autoregressive error models as post-processors in probabilistic streamflow forecast systems. In addition, the study gives an example of the application of the bootstrap to test the significance of differences in forecast evaluation measures for continuous probabilistic forecasts.

Morawietz, Martin; Xu, Chong-Yu; Gottschalk, Lars; Tallaksen, Lena

2010-05-01
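
A sketch of the simplest of the eight variants (constant parameters, logarithmic transformation, empirical residual distribution), using α for the autoregressive coefficient and σ for the residual scale as reconstructed above; the function names and ensemble mechanics are illustrative, not the authors' code:

```python
import numpy as np

def fit_ar1_postprocessor(q_obs, q_sim):
    """Constant-parameter AR(1) error model on log-transformed streamflow."""
    d = np.log(q_obs) - np.log(q_sim)            # model errors d_t
    alpha = np.corrcoef(d[1:], d[:-1])[0, 1]     # lag-1 autocorrelation
    resid = d[1:] - alpha * d[:-1]
    sigma = resid.std(ddof=1)
    return alpha, sigma, resid / sigma           # normalized empirical residuals

def forecast_ensemble(q_sim_fc, d_last, alpha, sigma, eps_emp,
                      n_members=500, seed=5):
    """Propagate d_t = alpha * d_{t-1} + sigma * eps_t along a forecast."""
    rng = np.random.default_rng(seed)
    d = np.full(n_members, d_last)
    members = np.empty((len(q_sim_fc), n_members))
    for t, q in enumerate(q_sim_fc):
        # eps_t is resampled from the empirical residuals, not a normal
        d = alpha * d + sigma * rng.choice(eps_emp, n_members)
        members[t] = np.exp(np.log(q) + d)       # back-transform to flow
    return members                               # members per lead time
```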

331

Systematic evaluation of autoregressive error models as post-processors for a probabilistic streamflow forecast system  

NASA Astrophysics Data System (ADS)

In this study, different versions of autoregressive error models are evaluated as post-processors for probabilistic streamflow forecasts. The post-processors account for hydrologic uncertainties that are introduced by the precipitation-runoff model. The post-processors are evaluated with the discrete ranked probability score (DRPS), and a non-parametric bootstrap is applied to investigate the significance of differences in model performance. The results show that differences in performance between most model versions are significant. For these cases it is found that (1) error models with state dependent parameters perform better than those with constant parameters, (2) error models with an empirical distribution for the description of the standardized residuals perform better than those with a normal distribution, and (3) procedures that use a logarithmic transformation of the original streamflow values perform better than those that use a square root transformation.

Morawietz, Martin; Xu, Chong-Yu; Gottschalk, Lars; Tallaksen, Lena M.

2011-09-01

332

Energy norm a posteriori error estimation of hp-adaptive discontinuous Galerkin methods for elliptic problems  

Microsoft Academic Search

In this paper, we develop the a posteriori error estimation of hp-version interior penalty discontinuous Galerkin discretizations of elliptic boundary-value problems. Computable upper and lower bounds on the error measured in terms of a natural (mesh-dependent) energy norm are derived. The bounds are explicit in the local mesh sizes and approximation orders. A series of numerical experiments illustrate the performance

Houston, Paul; Wihler, Thomas P.

333

Are interventions to reduce interruptions and errors during medication administration effective?: a systematic review  

PubMed Central

Background Medication administration errors are frequent and lead to patient harm. Interruptions during medication administration have been implicated as a potential contributory factor. Objective To assess evidence of the effectiveness of interventions aimed at reducing interruptions during medication administration on interruption and medication administration error rates. Methods In September 2012 we searched MEDLINE, EMBASE, CINAHL, PsycINFO, Cochrane Effective Practice and Organisation of Care Group reviews, Google and Google Scholar, and hand searched references of included articles. Intervention studies reporting quantitative data based on direct observations of at least one outcome (interruptions, or medication administration errors) were included. Results Ten studies, eight from North America and two from Europe, met the inclusion criteria. Five measured significant changes in interruption rates pre- and post-intervention: four found a significant reduction and one an increase. Three studies measured changes in medication administration error rates and showed reductions, but all implemented multiple interventions beyond those targeted at reducing interruptions. No study used a controlled pre–post design. Definitions for key outcome indicators were reported in only four studies. Only one study reported κ scores for inter-rater reliability, and none of the multi-ward studies accounted for clustering in their analyses. Conclusions There is weak evidence of the effectiveness of interventions to significantly reduce interruption rates and very limited evidence of their effectiveness to reduce medication administration errors. Policy makers should proceed with great caution in implementing such interventions until controlled trials confirm their value. Research is also required to better understand the complex relationship between interruptions and error to support intervention design. PMID:23980188

Raban, Magdalena Z; Westbrook, Johanna I

2014-01-01

334

Identification of a previously undetected source of systematic error in capillary viscometry measurements.  

PubMed

A new analysis of the factors affecting the driving head within a capillary viscometer has identified a previously undetected source of error due to surface tension. In contrast to the well-known capillary rise correction to the effective driving head, this additional effect requires that a correction be made to the apparent head of liquid. Calculations show that this source of error may be responsible for deviations in experimental results as large as 1%. A new calibration equation which compensates for both surface tension effects is recommended. PMID:18699345

Wedlake, G D; Vera, J H; Ratcliff, G A

1979-01-01

335

On the measurement of structure-borne sound energy flow along pipes. Part 3: Analysis of systematic and random errors  

NASA Astrophysics Data System (ADS)

Errors are analyzed in methods that use finite difference approximations and cross spectral densities of accelerations at closely spaced pipe cross sections to measure structure-borne energy flow in thin-walled circular cylindrical pipes (by separating the contributions of longitudinal, torsional and bending waves). A quantity called the direct field ratio, defined as the ratio between the measured energy flow and the energy flow that would be measured for a freely propagating wave under noise-free conditions, is introduced. It indicates conditions under which the estimates of the cross spectral density are sensitive to several error sources.

Verheij, J. W.; Vanruiten, C. J. M.

1983-04-01

336

On integrating direct methods and isomorphous-replacement techniques: triplet estimation and treatment of errors.  

PubMed

The method of joint probability distribution functions has been generalized in order to include and treat different sources of error. The probability distributions of the isomorphous pairs (E(p), E(d)) and of the two triples (E(ph), E(pk), E(ph+k), E(dh), E(dk), E(dh+k)) are obtained, on the assumption that the lack of isomorphism and the errors in measurement accumulate in the E(d) variables. The conditional distributions of the two-phase and three-phase structure invariants are derived, showing how the reliability of the probabilistic estimates depends on the errors. PMID:11526306

Giacovazzo, C; Siliqi, D; García-Rodríguez, L

2001-09-01

337

Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries  

NASA Technical Reports Server (NTRS)

This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature of its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module is demonstrated using both simple model problems and complex three-dimensional examples using meshes with 10^6 to 10^7 cells.

Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

2002-01-01
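
The tau-extrapolation idea is easy to demonstrate in one dimension: because the same difference operator is used on the fine and coarse grids, applying the coarse operator to the restricted fine-grid solution and subtracting the restricted right-hand side yields a local truncation error estimate. A self-contained 1D Poisson sketch (an illustration of the principle, not the Cartesian solver itself):

```python
import numpy as np

def laplacian_1d(n, h):
    """Standard second-order operator for -u'' on n interior points."""
    return (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

# Fine-grid problem: -u'' = f on (0,1), u(0) = u(1) = 0.
nf = 63                                  # fine interior points
hf = 1.0 / (nf + 1)
xf = np.linspace(hf, 1.0 - hf, nf)
f = np.sin(3 * np.pi * xf) * (3 * np.pi) ** 2
u_f = np.linalg.solve(laplacian_1d(nf, hf), f)

# Coarse grid = every other fine point; injection as the restriction R.
nc = (nf - 1) // 2
R = lambda v: v[1::2]

# tau-extrapolation: the coarse-grid residual of the restricted fine
# solution estimates the local truncation error of the discretization.
tau = laplacian_1d(nc, 2 * hf) @ R(u_f) - R(f)
print("max |tau| (local truncation error estimate):", np.abs(tau).max())
```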

338

Calibration and systematic error analysis for the COBE DMR 4-year sky maps  

SciTech Connect

The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to a mean sensitivity of 26 µK per 7° field of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 µK in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.

Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.

1996-01-04

339

Calibration and Systematic Error Analysis For the COBE-DMR Four-Year Sky Maps  

E-print Network

The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to mean sensitivity 26 uK per 7 deg field of view. The absolute calibration is determined to 0.7% with drifts smaller than 0.2% per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to the Earth's magnetic field, microwave emission from the Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95% confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 uK in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.

A. Kogut; A. J. Banday; C. L. Bennett; K. M. Gorski; G. Hinshaw; P. D. Jackson; P. Keegstra; C. Lineweaver; G. F. Smoot; L. Tenorio; E. L. Wright

1996-01-12

340

Cluster Monte Carlo: scaling of systematic errors in the two-dimensional Ising model  

NASA Astrophysics Data System (ADS)

We present an extensive analysis of systematic deviations in Wolff cluster simulations of the critical Ising model, using random numbers generated by binary shift registers. We investigate how these deviations depend on the lattice size, the shift-register length, and the number of bits correlated by the production rule. They appear to satisfy scaling relations.

Shchur, Lev N.; Blöte, Henk W. J.

1997-05-01

341

Biases in atmospheric CO2 estimates from correlated meteorology modeling errors  

NASA Astrophysics Data System (ADS)

Estimates of CO2 fluxes that are based on atmospheric measurements rely upon a meteorology model to simulate atmospheric transport. These models provide a quantitative link between the surface fluxes and CO2 measurements taken downwind. Errors in the meteorology can therefore cause errors in the estimated CO2 fluxes. Meteorology errors that correlate or covary across time and/or space are particularly worrisome; they can cause biases in modeled atmospheric CO2 that are easily confused with the CO2 signal from surface fluxes, and they are difficult to characterize. In this paper, we leverage an ensemble of global meteorology model outputs combined with a data assimilation system to estimate these biases in modeled atmospheric CO2. In one case study, we estimate the magnitude of month-long CO2 biases relative to CO2 boundary layer enhancements and quantify how that answer changes if we either include or remove error correlations or covariances. In a second case study, we investigate which meteorological conditions are associated with these CO2 biases. In the first case study, we estimate uncertainties of 0.5-7 ppm in monthly-averaged CO2 concentrations, depending upon location (95% confidence interval). These uncertainties correspond to 13-150% of the mean afternoon CO2 boundary layer enhancement at individual observation sites. When we remove error covariances, however, this range drops to 2-22%. Top-down studies that ignore these covariances could therefore underestimate the uncertainties and/or propagate transport errors into the flux estimate. In the second case study, we find that these month-long errors in atmospheric transport are anti-correlated with temperature and planetary boundary layer (PBL) height over terrestrial regions. In marine environments, by contrast, these errors are more strongly associated with weak zonal winds. Many errors, however, are not correlated with a single meteorological parameter, suggesting that a single meteorological proxy is not sufficient to characterize uncertainties in atmospheric CO2. Together, these two case studies provide information to improve the setup of future top-down inverse modeling studies, preventing unforeseen biases in estimated CO2 fluxes.

Miller, S. M.; Hayek, M. N.; Andrews, A. E.; Fung, I.; Liu, J.

2015-03-01

342

Identification of a previously undetected source of systematic error in capillary viscometry measurements  

Microsoft Academic Search

A new analysis of the factors affecting the driving head within a capillary viscometer has identified a previously undetected source of error due to surface tension. In contrast to the well-known capillary rise correction to the effective driving head, this additional effect requires that a correction be made to the apparent head of liquid. Calculations show that this source of

G. D. Wedlake; J. H. Vera; G. A. Ratcliff

1979-01-01

343

Systematic Errors in Stereo PIV When Imaging through a Glass Window  

NASA Technical Reports Server (NTRS)

This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window and where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent performing a lengthier volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.

Green, Richard; McAlister, Kenneth W.

2004-01-01
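
The refraction error has a textbook core: a ray crossing a flat window of thickness t at incidence angle θ is displaced laterally by t·sin θ·(1 − cos θ/(n·cos θ_r)), with θ_r from Snell's law. A small sketch of how this displacement grows with camera angle (the window thickness and refractive index are example values, not from the report):

```python
import numpy as np

def lateral_shift(theta_deg, t_mm, n_glass=1.5):
    """Lateral ray displacement through a flat window of thickness t."""
    ti = np.radians(theta_deg)                   # angle of incidence
    tr = np.arcsin(np.sin(ti) / n_glass)         # Snell's law inside the glass
    return t_mm * np.sin(ti) * (1.0 - np.cos(ti) / (n_glass * np.cos(tr)))

for angle in (10, 20, 30, 40):
    print(f"{angle:2d} deg incidence, 25 mm window -> "
          f"{lateral_shift(angle, 25.0):.2f} mm ray displacement")
```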

344

Systematic error correction of a 3D laser scanning measurement device  

Microsoft Academic Search

Non-contact measurement techniques using laser scanning have the advantage of fast acquiring large numbers of points. However, compared to their contact-based counterparts, these techniques are known to be less accurate. The work presented in this paper aims at improving the accuracy of these techniques through an error correction procedure based on an experimental process that concerns mechanical parts. The influence

A. Isheil; J.-P. Gonnet; D. Joannic; J.-F. Fontaine

2011-01-01

345

Least squares support vector machines for direction of arrival estimation with error control and validation.  

SciTech Connect

The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.

Christodoulou, Christos George (University of New Mexico, Albuquerque, NM); Abdallah, Chaouki T. (University of New Mexico, Albuquerque, NM); Rohwer, Judd Andrew

2003-02-01

346

A Systematic and Efficient Method to Estimate the Vibrational Frequencies of Linear Peptide  

E-print Network

Extract: ... estimate the vibrational frequency sets of linear peptide and protein ions with any amino acid sequence ... of biological molecules such as proteins, nucleic acids, and carbohydrates [9-13]. To test whether the paradigms ...

Kim, Myung Soo

347

A note on bias and mean squared error in steady-state quantile estimation  

NASA Astrophysics Data System (ADS)

When using a batch means methodology for estimation of a nonlinear function of a steady-state mean from the output of simulation experiments, it has been shown that a jackknife estimator may reduce the bias and mean squared error (mse) compared to the classical estimator, whereas the average of the classical estimators from the batches (the batch means estimator) has a worse performance from the point of view of bias and mse. In this paper we show that, under reasonable assumptions, the performance of the jackknife, classical and batch means estimators for the estimation of quantiles of the steady-state distribution exhibit similar properties as in the case of the estimation of a nonlinear function of a steady-state mean. We present some experimental results from the simulation of the waiting time in queue for an M/M/1 system under heavy traffic.

Muñoz, David F.; Ramírez-López, Adán

2013-10-01
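
A sketch of the three quantile estimators compared in the paper, applied to M/M/1 waiting times generated with Lindley's recursion at ρ = 0.9; the batch count, sample size and warm-up length are arbitrary choices for illustration:

```python
import numpy as np

def quantile_estimators(output, n_batches=20, q=0.9):
    """Classical, batch-means and jackknife estimators of a steady-state quantile."""
    batches = np.array_split(np.asarray(output), n_batches)
    theta_all = np.quantile(output, q)                           # classical
    theta_batch = np.mean([np.quantile(b, q) for b in batches])  # batch means
    # Jackknife: recompute the classical estimator leaving one batch out.
    loo = [np.quantile(np.concatenate(batches[:i] + batches[i + 1:]), q)
           for i in range(n_batches)]
    theta_jack = n_batches * theta_all - (n_batches - 1) * np.mean(loo)
    return theta_all, theta_batch, theta_jack

# M/M/1 waiting times in heavy traffic via Lindley's recursion (rho = 0.9).
rng = np.random.default_rng(6)
lam, mu, n = 0.9, 1.0, 200_000
w = np.zeros(n)
for i in range(1, n):
    w[i] = max(0.0, w[i - 1] + rng.exponential(1 / mu) - rng.exponential(1 / lam))
print(quantile_estimators(w[10_000:]))   # drop warm-up, then estimate
```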

348

Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System  

NASA Technical Reports Server (NTRS)

Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities in an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors of up to 40 kt at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

2009-01-01

349

Systematic reduction of sign errors in many-body calculations of atoms and molecules  

SciTech Connect

The self-healing diffusion Monte Carlo algorithm (SHDMC) [Phys. Rev. B 79, 195117 (2009); ibid. 80, 125110 (2009)] is applied to the calculation of ground states of atoms and molecules. By direct comparison with accurate configuration interaction results we show that applying the SHDMC method to the oxygen atom leads to systematic convergence towards the exact ground state wave function. We present results for the small but challenging N2 molecule, where results obtained via the energy minimization method and SHDMC are within the experimental accuracy of 0.08 eV. Moreover, we demonstrate that the algorithm is robust enough to be used for calculations of systems at least as large as C20 starting from a set of random coefficients. SHDMC thus constitutes a practical method for systematically reducing the fermion sign problem in electronic structure calculations.

Bajdich, Michal [ORNL]; Tiago, Murilo L. [ORNL]; Hood, Randolph Q. [Lawrence Livermore National Laboratory (LLNL)]; Kent, Paul R. [ORNL]; Reboredo, Fernando A. [ORNL]

2010-01-01

350

Dust-Induced Systematic Errors in UV-Derived Star Formation Rates  

E-print Network

Rest-frame far-ultraviolet (FUV) luminosities form the `backbone' of our understanding of star formation at all cosmic epochs. FUV luminosities are typically corrected for dust by assuming that extinction indicators which have been calibrated for local starbursting galaxies apply to all star-forming galaxies. I present evidence that `normal' star-forming galaxies have systematically redder UV/optical colors than starbursting galaxies at a given FUV extinction. This is attributed to differences in star/dust geometry, coupled with a small contribution from older stellar populations. Folding in data for starbursts and ultra-luminous infrared galaxies, I conclude that SF rates from rest-frame UV and optical data alone are subject to large (factors of at least a few) systematic uncertainties because of dust, which cannot be reliably corrected for using only UV/optical diagnostics.

Eric F. Bell

2002-07-18

351

Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect  

NASA Technical Reports Server (NTRS)

Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line of sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H_0, as determined from the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

1998-01-01

352

Certainty in Heisenberg's uncertainty principle: Revisiting definitions for estimation errors and disturbance  

NASA Astrophysics Data System (ADS)

We revisit the definitions of error and disturbance recently used in error-disturbance inequalities derived by Ozawa and others by expressing them in the reduced system space. The interpretation of the definitions as mean-squared deviations relies on an implicit assumption that is generally incompatible with the Bell-Kochen-Specker-Spekkens contextuality theorems, and which results in averaging the deviations over a non-positive-semidefinite joint quasiprobability distribution. For unbiased measurements, the error admits a concrete interpretation as the dispersion in the estimation of the mean induced by the measurement ambiguity. We demonstrate how to directly measure not only this dispersion but also every observable moment with the same experimental data, and thus demonstrate that perfect distributional estimations can have nonzero error according to this measure. We conclude that the inequalities using these definitions do not capture the spirit of Heisenberg's eponymous inequality, but do indicate a qualitatively different relationship between dispersion and disturbance that is appropriate for ensembles being probed by all outcomes of an apparatus. To reconnect with the discussion of Heisenberg, we suggest alternative definitions of error and disturbance that are intrinsic to a single apparatus outcome. These definitions naturally involve the retrodictive and interdictive states for that outcome, and produce complementarity and error-disturbance inequalities that have the same form as the traditional Heisenberg relation.

Dressel, Justin; Nori, Franco

2014-02-01

353

Practical error estimation in zoom-in and truncated tomography reconstructions  

SciTech Connect

Synchrotron-based microtomography provides high resolution, but the resolution in large samples is often limited by the detector field of view and the pixel size. For some samples, only a small region of interest is relevant, and local tomography is a powerful approach for retaining high resolution. Two such methods are truncated tomography and zoom-in tomography. In this article we use existing theoretical results to estimate the error present in truncated and zoom-in tomographic reconstructions. These error estimates agree with the errors calculated from exact tomographic reconstructions. We argue in a heuristic manner why zoom-in tomography is superior to truncated tomography in terms of the reconstruction error. However, the theoretical formula is not usable in practice because it requires the complete high-resolution reconstruction to be known. To solve this problem we propose a practical method for estimating the error in zoom-in and truncated tomographies. The results using this estimation method are in very good agreement with our experimental results.

Xiao, Xianghui; De Carlo, Francesco; Stock, Stuart [Advanced Photon Source, X-ray Science Division, Argonne National Laboratory, Argonne, Illinois 60439 (United States); Department of Molecular Pharmacology and Biological Chemistry, Feinberg School of Medicine, Northwestern University, Chicago, Illinois 60611-3008 (United States)]

2007-06-15

354

A posteriori error estimates, stopping criteria, and adaptivity for multiphase compositional Darcy flows in porous media  

NASA Astrophysics Data System (ADS)

In this paper we derive a posteriori error estimates for the compositional model of multiphase Darcy flow in porous media, consisting of a system of strongly coupled nonlinear unsteady partial differential and algebraic equations. We show how to control the dual norm of the residual augmented by a nonconformity evaluation term by fully computable estimators. We then decompose the estimators into space, time, linearization, and algebraic error components. This allows us to formulate criteria for stopping the iterative algebraic solver and the iterative linearization solver when the corresponding error components do not affect significantly the overall error. Moreover, the spatial and temporal error components can be balanced by time step and space mesh adaptation. Our analysis applies to a broad class of standard numerical methods, and is independent of the linearization and of the iterative algebraic solvers employed. We exemplify it for the two-point finite volume method with fully implicit Euler time stepping, the Newton linearization, and the GMRes algebraic solver. Numerical results on two real-life reservoir engineering examples confirm that significant computational gains can be achieved thanks to our adaptive stopping criteria, already on fixed meshes, without any noticeable loss of precision.

Di Pietro, Daniele A.; Flauraud, Eric; Vohralík, Martin; Yousef, Soleiman

2014-11-01
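
The stopping criteria can be sketched structurally: each solver loop terminates once its own error component is a small fraction of the components it cannot reduce. The interface below (the estimator and solver callables, the γ factors) is hypothetical scaffolding to show the control flow, not the paper's algorithm verbatim:

```python
GAMMA_ALG, GAMMA_LIN = 0.1, 0.1      # user-chosen safety factors

def solve_time_step(initial_guess, linearize, algebraic_step, estimators):
    """One time step of Newton + iterative linear solve with adaptive stopping."""
    u = initial_guess
    while True:                              # linearization (Newton) loop
        system = linearize(u)
        while True:                          # algebraic (e.g. GMRes) loop
            u = algebraic_step(system, u)
            eta_sp, eta_tm, eta_lin, eta_alg = estimators(u)
            # Stop the algebraic solver once its error component is
            # negligible next to discretization + linearization errors.
            if eta_alg <= GAMMA_ALG * (eta_sp + eta_tm + eta_lin):
                break
        # Stop Newton once the linearization error is dominated by the
        # space and time discretization components.
        if eta_lin <= GAMMA_LIN * (eta_sp + eta_tm):
            return u, (eta_sp, eta_tm)       # left to drive mesh/step adaptation
```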

355

On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)  

NASA Astrophysics Data System (ADS)

Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an ongoing problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°×2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°×2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.

Huffman, G. J.

2013-12-01

356

Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty  

NASA Astrophysics Data System (ADS)

Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; efforts to reduce errors in fossil fuel emissions are therefore necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal from the atmosphere of approximately half of CO2 emissions.
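A minimal sketch of how temporally correlated random error inflates the uncertainty of a multi-year mean, assuming an AR(1) error structure; the numbers are illustrative, not the paper's estimates.

import numpy as np

def var_of_mean_ar1(sigma, rho, n):
    # Variance of an n-year mean when annual errors follow an AR(1)
    # process with lag-1 autocorrelation rho and marginal std dev sigma.
    k = np.arange(1, n)
    inflation = 1.0 + 2.0 * np.sum((1.0 - k / n) * rho ** k)
    return sigma ** 2 / n * inflation

# Decadal-mean error for an annual 1-sigma error of 0.4 Pg C yr-1:
print(np.sqrt(var_of_mean_ar1(0.4, 0.0, 10)))   # independent annual errors
print(np.sqrt(var_of_mean_ar1(0.4, 0.95, 10)))  # persistent reporting errors

With rho near 1, averaging over a decade buys almost no reduction in uncertainty, which is why persistent national reporting practices matter so much for the fossil fuel term.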

Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

2014-10-01

357

Estimation of Smoothing Error in SBUV Profile and Total Ozone Retrieval  

NASA Technical Reports Server (NTRS)

Data from the Nimbus-4, Nimbus-7 Solar Backscatter Ultra Violet (SBUV) and seven of the NOAA series of SBUV/2 instruments spanning 41 years are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profile and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of an ensemble of ozone profiles measured with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which, in the general case of single-profile retrieval, might be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere, where they can be as large as 15-20%, and decrease rapidly with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5%, and between 10 and 1 hPa they are on the order of 1%. We validate our estimated smoothing errors by comparing the SBUV ozone profiles with other ozone profiling sensors.
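The computation described here follows the standard linear-retrieval expression for the smoothing error covariance, S_s = (A - I) S_a (A - I)^T in Rodgers-style notation; the matrices below are made-up three-layer stand-ins for the SBUV averaging kernels and the MLS/sonde variability covariance.

import numpy as np

def smoothing_error_covariance(A, S_a):
    # S_s = (A - I) S_a (A - I)^T, with A the averaging-kernel matrix and
    # S_a the covariance of the fine-scale ozone variability.
    I = np.eye(A.shape[0])
    return (A - I) @ S_a @ (A - I).T

A = np.array([[0.7, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.5]])           # illustrative averaging kernels
S_a = np.diag([0.04, 0.09, 0.25])         # illustrative variability covariance
S_s = smoothing_error_covariance(A, S_a)
print(np.sqrt(np.diag(S_s)))              # 1-sigma smoothing error per layer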

Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

2011-01-01

358

Reiterative Minimum Mean Square Error Estimator for Direction of Arrival Estimation and Biomedical Functional Brain Imaging  

E-print Network

Two novel approaches are developed for direction-of-arrival (DOA) estimation and functional brain imaging estimation, which are denoted as ReIterative Super-Resolution (RISR) and Source AFFine Image REconstruction (SAFFIRE), ...

Chan, Tsz Ping

2008-07-25

359

A posteriori error estimates for the Electric Field Integral Equation on polyhedra  

E-print Network

We present a residual-based a posteriori error estimate for the Electric Field Integral Equation (EFIE) on a bounded polyhedron. The EFIE is a variational equation formulated in a negative order Sobolev space on the surface of the polyhedron. We express the estimate in terms of square-integrable and thus computable quantities and derive global lower and upper bounds (up to oscillation terms).

Nochetto, Ricardo H

2012-01-01

360

Using Stopping Rules to Bound the Mean Integrated Squared Error in Density Estimation  

Microsoft Academic Search

Suppose $X_1, X_2, \ldots, X_n$ are i.i.d. with unknown density $f$. There is a well-known expression for the asymptotic mean integrated squared error (MISE) in estimating $f$ by a kernel estimate $\hat{f}_n$, under certain conditions on $f$, the kernel and the bandwidth. Suppose that one would like to choose a sample size so that the MISE is smaller than some preassigned positive number

Adam T. Martinsek

1992-01-01

361

The accuracy of incoherent scatter measurements: error estimates valid for high signal levels  

Microsoft Academic Search

We study incoherent scatter data errors with special emphasis on the situation with good signal-to-noise ratio. Because part of the statistical fluctuations in the autocorrelation function estimates arise from the signal itself, there is good reason to believe that these fluctuations may show significant correlations between different lags and/or between different ranges. We derive formulae suitable for the estimation of

A. Huuskonen; M. S. Lehtinen

1996-01-01

362

Multiscale Systematic Error Correction via Wavelet-Based Bandsplitting in Kepler Data  

NASA Astrophysics Data System (ADS)

The previous presearch data conditioning algorithm, PDC-MAP, for the Kepler data processing pipeline performs very well for the majority of targets in the Kepler field of view. However, for an appreciable minority, PDC-MAP has its limitations. To further minimize the number of targets for which PDC-MAP fails to perform admirably, we have developed a new method, called multiscale MAP, or msMAP. Utilizing an overcomplete discrete wavelet transform, the new method divides each light curve into multiple channels, or bands. The light curves in each band are then corrected separately, thereby allowing for a better separation of characteristic signals and improved removal of the systematics.
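A sketch of the band-splitting idea using the overcomplete (stationary) discrete wavelet transform from the PyWavelets package; this is an assumption about tooling for illustration only, not the Kepler pipeline's actual implementation, and the per-band correction step is left as a stub.

import numpy as np
import pywt  # PyWavelets, assumed available

def split_into_bands(flux, wavelet="db4", level=3):
    # Overcomplete (undecimated) DWT; len(flux) must be a multiple of 2**level.
    return pywt.swt(flux, wavelet, level=level)  # list of (approx, detail) pairs

def recombine(coeffs, wavelet="db4"):
    return pywt.iswt(coeffs, wavelet)

rng = np.random.default_rng(0)
flux = np.sin(np.linspace(0.0, 20.0, 1024)) + 0.1 * rng.standard_normal(1024)
bands = split_into_bands(flux)
# ...a systematics model would be fit and removed separately in each band...
print(np.max(np.abs(recombine(bands) - flux)))  # ~0: the transform inverts cleanly

Correcting each band separately is what allows slow systematics and fast stellar variability to be modelled with different priors.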

Stumpe, Martin C.; Smith, Jeffrey C.; Catanzarite, Joseph H.; Van Cleve, Jeffrey E.; Jenkins, Jon M.; Twicken, Joseph D.; Girouard, Forrest R.

2014-01-01

363

Stacked Weak Lensing Mass Calibration: Estimators, Systematics, and Impact on Cosmological Parameter Constraints  

SciTech Connect

When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.

Rozo, Eduardo (U. Chicago, KICP); Wu, Hao-Yi (KIPAC, Menlo Park); Schmidt, Fabian (Caltech)

2011-11-04

364

Error estimations for source inversions in seismology and geodesy  

E-print Network

Rivera, L. (Université de Strasbourg); Duputel, Z. (Université de Strasbourg); Fukahata, Y. (DPRI, Kyoto University, Kyoto, Japan). Source inversion is a widely used tool; it comes in very diverse flavors depending on the nature of the data (e.g. seismological…

Duputel, Zacharie

365

Estimating random errors due to shot noise in backscatter lidar observations  

Microsoft Academic Search

We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both
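For the idealized single-shot photon-counting case, the shot-noise error follows directly from Poisson statistics; a sketch, with made-up count values:

import numpy as np

def shot_noise_error(signal_counts, background_counts):
    # With measured total N = S + B and an independent background estimate
    # of variance B, Var(S_hat) = (S + B) + B, so sigma = sqrt(S + 2B).
    S, B = signal_counts, background_counts
    return np.sqrt(S + 2.0 * B)

S, B = 400.0, 900.0             # daytime case: solar background dominates
sigma = shot_noise_error(S, B)
print(sigma, S / sigma)         # absolute error and the resulting SNR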

Zhaoyan Liu; William Hunt; Mark Vaughan; Chris Hostetler; Matthew McGill; Kathleen Powell; David Winker; Yongxiang Hu

2006-01-01

366

Error estimates for Raviart-Thomas interpolation of any order on anisotropic tetrahedra  

E-print Network

We prove error estimates for Raviart-Thomas interpolation of any order on anisotropic tetrahedra satisfying the maximum angle condition and the regular vertex property. Our techniques are different from those used … for triangles or tetrahedra, where $0 \le j \le k$ and $1 \le p \le \infty$. These results are new even in the two dimensional case.

Duran, Ricardo

367

Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method  

ERIC Educational Resources Information Center

A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

Liu, Yuming; Schulz, E. Matthew; Yu, Lei

2008-01-01

368

Error estimation for reconstruction of neuronal spike firing from fast calcium imaging  

PubMed Central

Calcium imaging is becoming an increasingly popular technology to indirectly measure activity patterns in local neuronal networks. Calcium transients reflect neuronal spike patterns, allowing spike trains to be reconstructed from calcium traces. The key to judging the authenticity of a reconstructed spike train is error estimation. However, due to the lack of an appropriate mathematical model to adequately describe the spike-calcium relationship, little attention has been paid to quantifying the error ranges of reconstructed spike results. By turning attention to the data characteristics close to the reconstruction rather than to a complex mathematical model, we provide an error estimation method for neuronal spiking reconstructed from calcium imaging. The real false-negative and false-positive rates of 10 experimental Ca2+ traces were within the estimated error ranges, confirming that this evaluation method is effective. Estimation performance for the reconstruction of spikes from calcium transients within a neuronal population demonstrated that reconstructed spikes can be reasonably evaluated without real electrical signals. These results suggest that our method might be valuable for quantifying research based on reconstructed neuronal activity, such as work affirming communication between different neurons.

Liu, Xiuli; Lv, Xiaohua; Quan, Tingwei; Zeng, Shaoqun

2015-01-01

369

Estimating MEMS Gyroscope G-Sensitivity Errors in Foot Mounted Navigation  

E-print Network

Jared B. Bancroft. G-sensitivity errors are often overlooked in foot mounted navigation systems. Accelerations of foot mounted IMUs can reach 5 g… Keywords: g-sensitivity; linear acceleration effect on gyros; pedestrian navigation; foot mounted sensors; GPS denied navigation.

Calgary, University of

370

A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series  

ERIC Educational Resources Information Center

Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

2011-01-01

371

The Estimation of Prediction Error: Covariance Penalties and Cross-Validation  

Microsoft Academic Search

Having constructed a data-based estimation rule, perhaps a logistic regression or a classification tree, the statistician would like to know its performance as a predictor of future cases. There are two main theories concerning prediction error: (1) penalty methods such as Cp, AIC, and SURE that depend on the covariance between data points and their corresponding predictions; (2)

Bradley Efron

2004-01-01

372

Stability and error analysis of the polarization estimation inverse problem for solid oxide fuel cells.  

E-print Network

Describing the performance of a solid oxide fuel cell requires the solution of an inverse problem. The polarization at the electrode-electrolyte interfaces of solid oxide fuel cells (SOFC) is investigated physically using Electrochemical

Renaut, Rosemary

373

Most likely paths to error when estimating the mean of a reflected random walk  

E-print Network

Ken R. Duffy; Sean P. Meyn. … Lindley's recursion also governs the evolution of the queue-length at certain single server queues, such as the M/M/1

Duffy, Ken

374

Error Estimation And Accurate Mapping Based ALE Formulation For 3D Simulation Of Friction Stir Welding  

E-print Network

Corresponding author: lionel.fourment@ensmp.fr. An Arbitrary Lagrangian Eulerian (ALE) formulation … The proposed ALE formulation is applied to FSW simulation. Steady state welding

Paris-Sud XI, Université de

375

Mapping the Origins of Time: Scalar Errors in Infant Time Estimation  

ERIC Educational Resources Information Center

Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

2014-01-01

376

An Improved Sequential Procedure for Estimating the Regression Parameter in Regression Models with Symmetric Errors  

Microsoft Academic Search

A sequential procedure for estimating the regression parameter $\beta \in R^k$ in a regression model with symmetric errors is proposed. This procedure is shown to have asymptotically smaller regret than the procedure analyzed by Martinsek when $\boldsymbol{\beta} = \mathbf{0}$, and the same asymptotic regret as that procedure when $\boldsymbol{\beta}$

T. N. Sriram

1992-01-01

377

Quadratic Zeeman effect for hydrogen: A method for rigorous bound-state error estimates  

Microsoft Academic Search

We present a variational method, based on direct minimization of energy, for the calculation of eigenvalues and eigenfunctions of a hydrogen atom in a strong uniform magnetic field in the framework of the nonrelativistic theory (quadratic Zeeman effect). Using semiparabolic coordinates and a harmonic-oscillator basis, we show that it is possible to give rigorous error estimates for both eigenvalues and

G. Fonte; P. Falsaperla; G. Schiffrer; D. Stanzial

1990-01-01

378

Speech enhancement using a minimum mean-square error short-time spectral modulation magnitude estimator  

E-print Network

In this paper we investigate the enhancement of speech by applying MMSE short-time spectral magnitude estimation in the modulation domain … on the quality of enhanced speech, and find that this method works better with speech uncertainty. Finally we

379

Speech Enhancement Based on Minimum Mean-Square Error Estimation and Supergaussian Priors  

Microsoft Academic Search

This paper presents a class of minimum mean-square error (MMSE) estimators for enhancing short-time spectral coefficients of a noisy speech signal. In contrast to most of the presently used methods, we do not assume that the spectral coefficients of the noise or of the clean speech signal obey a (complex) Gaussian probability density. We derive analytical solutions to the problem

Rainer Martin

2005-01-01

380

Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator  

Microsoft Academic Search

This paper focuses on the class of speech enhancement systems which capitalize on the major importance of the short-time spectral amplitude (STSA) of the speech signal in its perception. A system which utilizes a minimum mean-square error (MMSE) STSA estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the
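For reference, the MMSE-STSA gain of Ephraim and Malah can be written compactly with exponentially scaled Bessel functions; a sketch, assuming scipy is available, where xi is the a priori and gamma the a posteriori SNR of one time-frequency bin:

import numpy as np
from scipy.special import i0e, i1e  # exp(-x)*I0(x) and exp(-x)*I1(x)

def mmse_stsa_gain(xi, gamma):
    # Ephraim-Malah (1984) gain; the exp(-v/2) factor of the closed form
    # is absorbed into i0e/i1e to avoid overflow for large v.
    v = xi * gamma / (1.0 + xi)
    return (np.sqrt(np.pi) / 2.0) * (np.sqrt(v) / gamma) * (
        (1.0 + v) * i0e(v / 2.0) + v * i1e(v / 2.0)
    )

print(mmse_stsa_gain(1.5, 2.0))  # multiply the noisy spectral magnitude by this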

Y. Ephraim; D. Malah

1984-01-01

381

A Time-Dependent Born-Oppenheimer Approximation with Exponentially Small Error Estimates  

E-print Network

We construct an exponentially accurate time-dependent Born-Oppenheimer approximation for molecular quantum mechanics. We study … The small parameter that governs the approximation is the usual Born-Oppenheimer expansion parameter $\epsilon$, where $\epsilon^4$ is the ratio of the electron mass divided

382

A Time-Dependent Born-Oppenheimer Approximation with Exponentially Small Error Estimates  

E-print Network

We construct an exponentially accurate time-dependent Born-Oppenheimer approximation for molecular quantum mechanics. We study molecular systems whose … The small parameter that governs the approximation is the usual Born-Oppenheimer expansion parameter $\epsilon$, where $\epsilon^4$

Hagedorn, George A.

383

A Time-Dependent Born-Oppenheimer Approximation with Exponentially Small Error Estimates  

E-print Network

We construct a time-dependent Born-Oppenheimer approximation for molecular quantum mechanics. We study molecular systems whose electron masses … The small parameter that governs the approximation is the usual Born-Oppenheimer expansion parameter $\epsilon$

Joye, Alain

384

A Time-Independent Born-Oppenheimer Approximation with Exponentially Accurate Error Estimates  

E-print Network

We construct quasimodes that are exponentially accurate in the square of the Born-Oppenheimer parameter … The small parameter that governs the approximation is the usual Born-Oppenheimer parameter $\epsilon$, where $\epsilon^4$ is the electron mass

385

A Posteriori Error Estimation and Adaptive Computation of Viscoelastic Fluid Flow  

E-print Network

We consider a posteriori error estimation and adaptive computation of viscoelastic fluid flows governed by differential constitutive laws of Giesekus and Oldroyd-B type. We use … quasi-Newtonian Stokes flows [11] and … a nonlinear three-field problem arising from the Oldroyd-B model.

Ervin, Vincent J.

386

Simple Error Estimators for the Galerkin BEM for some Hypersingular Integral Equation in 2D  

E-print Network

of the boundary of a bounded Lipschitz domain R2 . Here, W denotes the hypersingular integral operator whichSimple Error Estimators for the Galerkin BEM for some Hypersingular Integral Equation in 2D FOR SOME HYPERSINGULAR INTEGRAL EQUATION IN 2D C. ERATH, S. FUNKEN, P. GOLDENITS, AND D. PRAETORIUS

Ulm, Universität

387

Interval Estimation for True Raw and Scale Scores under the Binomial Error Model  

ERIC Educational Resources Information Center

Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…

Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.

2006-01-01

388

A Posteriori Error Estimates and Mesh Adaptation Strategy for Discontinuous Galerkin Methods Applied to Diffusion Problems  

Microsoft Academic Search

A posteriori error estimates for locally mass conservative methods for subsurface flow are presented. These methods are based on discontinuous approximation spaces and are referred to as Discontinuous Galerkin methods. In the case where penalty terms are added to the bilinear form, one obtains the Non-symmetric Interior Penalty Galerkin methods. In a previous work, we proved a priori exponential rates of convergence of the methods applied

Béatrice Rivière; Mary F. Wheeler

389

A Posteriori Error Estimates for a Discontinuous Galerkin Method Applied to  

E-print Network

… These methods are based on discontinuous approximation spaces and are referred to as Discontinuous Galerkin methods. In the case where penalty terms are added … Recently, Baumann and Oden [5] introduced a new Discontinuous Galerkin (DG) method for the solution of pure

Rivière, Béatrice Marie

390

A posteriori error estimator based on gradient recovery by averaging for discontinuous Galerkin methods  

E-print Network

We consider (possibly anisotropic and piecewise constant) diffusion problems in domains of $\mathbb{R}^2$, approximated by a discontinuous Galerkin method … with polynomials of any degree. Inspired by the paper [16], which treats

Paris-Sud XI, Université de

391

A priori error estimate for the Baumann–Oden version of the discontinuous Galerkin method  

Microsoft Academic Search

This work presents an a priori error estimate for hp finite element approximations obtained by the Baumann–Oden version of the Discontinuous Galerkin method. While it is now well known that the method converges at an optimal rate in h, this has not yet been proved or disproved with respect to p. For the Poisson problem and for solutions with regularity

Serge Prudhomme; Frédéric Pascal; J. Tinsley Oden; Albert Romkes

2001-01-01

392

POINTWISE ERROR ESTIMATES FOR DISCONTINUOUS GALERKIN METHODS WITH LIFTING OPERATORS FOR  

E-print Network

We consider discontinuous Galerkin methods with lifting operators appearing in their corresponding bilinear forms. Discontinuous Galerkin (DG) methods for elliptic problems have received considerable attention in the last few years

Guzmán, Johnny

393

Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review  

PubMed Central

Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients. PMID:23565739

2013-01-01

394

Error estimates of the finite element method for the exterior Helmholtz problem with a modified DtN boundary condition  

NASA Astrophysics Data System (ADS)

A priori error estimates in the H1- and L2-norms are established for the finite element method applied to the exterior Helmholtz problem, with modified Dirichlet-to-Neumann (MDtN) boundary condition. The error estimates include the effect of truncation of the MDtN boundary condition as well as that of discretization of the finite element method. The error estimate in the L2-norm is sharper than that obtained by the author [D. Koyama, Error estimates of the DtN finite element method for the exterior Helmholtz problem, J. Comput. Appl. Math. 200 (1) (2007) 21-31] for the truncated DtN boundary condition.

Koyama, Daisuke

2009-10-01

395

Variance components in errors-in-variables models: estimability, stability and bias analysis  

NASA Astrophysics Data System (ADS)

Although total least squares has been substantially investigated theoretically and widely applied in practical applications, almost nothing has been done to simultaneously address the estimation of parameters and the errors-in-variables (EIV) stochastic model. We prove that the variance components of the EIV stochastic model are not estimable, if the elements of the random coefficient matrix can be classified into two or more groups of data of the same accuracy. This result of inestimability is surprising as it indicates that we have no way of gaining any knowledge on such an EIV stochastic model. We demonstrate that the linear equations for the estimation of variance components could be ill-conditioned, if the variance components are theoretically estimable. Finally, if the variance components are estimable, we derive the biases of their estimates, which could be significantly amplified due to a large condition number.

Xu, Peiliang; Liu, Jingnan

2014-08-01

396

Estimation via corrected scores in general semiparametric regression models with error-prone covariates  

PubMed Central

This paper considers the problem of estimation in a general semiparametric regression model when error-prone covariates are modeled parametrically while covariates measured without error are modeled nonparametrically. To account for the effects of measurement error, we apply a correction to a criterion function. The specific form of the correction proposed allows Monte Carlo simulations in problems for which the direct calculation of a corrected criterion is difficult. Therefore, in contrast to methods that require solving integral equations of possibly multiple dimensions, as in the case of multiple error-prone covariates, we propose methodology which offers a simple implementation. The resulting methods are functional; they make no assumptions about the distribution of the mismeasured covariates. We utilize profile kernel and backfitting estimation methods and derive the asymptotic distribution of the resulting estimators. Through numerical studies we demonstrate the applicability of proposed methods to Poisson, logistic and multivariate Gaussian partially linear models. We show that the performance of our methods is similar to a computationally demanding alternative. Finally, we demonstrate the practical value of our methods when applied to Nevada Test Site (NTS) Thyroid Disease Study data. PMID:22773940

Maity, Arnab; Apanasovich, Tatiyana V.

2011-01-01

397

Wrinkles in the rare biosphere: Pyrosequencing errors can lead to artificial inflation of diversity estimates  

SciTech Connect

Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.
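A minimal sketch of the kind of stringent read filtering the authors recommend; the thresholds are illustrative placeholders, not the study's exact values, and clustering at no more than 97% identity would follow in a separate step.

def filter_reads(reads, min_mean_qual=25, length_window=(200, 600)):
    # Keep pyrotags with no ambiguous bases, a plausible length, and a
    # high mean quality; each read is a (sequence, phred_scores) pair.
    kept = []
    for seq, quals in reads:
        if "N" in seq:
            continue
        if not length_window[0] <= len(seq) <= length_window[1]:
            continue
        if sum(quals) / len(quals) < min_mean_qual:
            continue
        kept.append((seq, quals))
    return kept

reads = [("ACGT" * 80, [30] * 320), ("ACNT" * 80, [35] * 320)]
print(len(filter_reads(reads)))  # the read containing 'N' is dropped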

Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip

2009-08-01

398

Uniform error estimates in the finite element method for a singularly perturbed reaction-diffusion problem  

NASA Astrophysics Data System (ADS)

Consider the problem $-\epsilon^2 \Delta u + u = f$ with homogeneous Neumann boundary condition in a bounded smooth domain in $\mathbb{R}^N$. The whole range $0 < \epsilon \le 1$ is considered … an estimate showing how the error at each point depends on $h$ and $\epsilon$ is presented. As an application, first order error estimates in $h$, which are uniform with respect to $\epsilon$, are given.

Leykekhman, Dmitriy

2008-03-01

399

Estimation of high return period flood quantiles using additional non-systematic information with upper bounded statistical models  

Microsoft Academic Search

This paper proposes the estimation of high return period quantiles using upper bounded distribution functions with Systematic and additional Non-Systematic information. The aim of the developed methodology is to reduce the estimation uncertainty of these quantiles, treating the upper bound parameter of these distribution functions as a statistical estimator of the Probable Maximum Flood (PMF). Three upper bounded distribution functions,

B. A. Botero; F. Francés

2010-01-01

400

The focus-to-detector distance as a source of systematical errors in the measurement of Chaoul therapy units.  

PubMed

The skin exposure rates measured on 22 Chaoul units in two consecutive years were compared and their variance was analysed. The statistical fluctuation of the ionization method was 3.1%, smaller by a factor of about 2 to 2.5 than the variations due to lack of reproducibility of the Chaoul units. The authors observed systematic errors among exposure rate measurements performed at different focus-to-detector distances. The effective source-to-detector distance differs among ionization chambers: it is the sum of the nominal focus-to-detector distance plus a geometrical constant. For a particular chamber, the geometrical constant depends only to a small extent on the front wall thickness and on the focus-to-detector distance. Sufficient standardization of both the calibration procedure and the construction of ionization chambers may help avoid this effect. PMID:7434399

Zaránd, P

1980-09-01

401

A posteriori estimation and adaptive control of the error in the quantity of interest. Part I: A posteriori estimation of the error in the von Mises stress and the stress intensity factor  

Microsoft Academic Search

In this paper we address the problem of a posteriori estimation of the error in an engineering quantity of interest which is computed from a finite element solution. As an example we consider the plane elasticity problem with the von Mises stress and the stress intensity factor, as the quantities of interest. The estimates of the error in the von

T. Strouboulis; I. Babuška; D. K. Datta; K. Copps; S. K. Gangaraj

2000-01-01

402

Unified Description of Efficiency Correction and Error Estimation for Moments of Conserved Quantities in Heavy-Ion Collisions  

E-print Network

I provide a unified description of efficiency correction and error estimation for moments of conserved quantities in heavy-ion collisions. Moments and cumulants are expressed in terms of the factorial moments, which can be easily corrected for the efficiency effect. By deriving the covariance between factorial moments, one can obtain the general error formula for the efficiency-corrected moments based on the error propagation derived from the Delta theorem. A Skellam-distribution-based Monte Carlo simulation is used to test the Delta theorem and Bootstrap error estimation methods. The statistical errors calculated from the two methods accurately reflect the statistical fluctuations of the efficiency-corrected moments.
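The binomial-efficiency correction of factorial moments can be sketched in a few lines; for a single efficiency eps, the true factorial moments satisfy F_q = f_q / eps^q. The simulation below uses binomial thinning of a Poisson source, so both corrected cumulants should recover the Poisson value (variable names are ours, not the paper's).

import numpy as np

def factorial_moments(samples, qmax):
    # f_q = <n(n-1)...(n-q+1)> for q = 1..qmax.
    n = np.asarray(samples, dtype=float)
    out = []
    for q in range(1, qmax + 1):
        prod = np.ones_like(n)
        for j in range(q):
            prod *= n - j
        out.append(prod.mean())
    return np.array(out)

def corrected_mean_variance(samples, eps):
    # F_q_true = f_q_meas / eps**q for a binomial detection efficiency eps.
    f1, f2 = factorial_moments(samples, 2)
    F1, F2 = f1 / eps, f2 / eps ** 2
    return F1, F2 + F1 - F1 ** 2  # mean and variance from factorial moments

rng = np.random.default_rng(1)
produced = rng.poisson(10.0, size=200_000)
measured = rng.binomial(produced, 0.65)  # detector with 65% efficiency
print(corrected_mean_variance(measured, 0.65))  # both ~10 for a Poisson source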

Xiaofeng Luo

2015-04-05

403

Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests.  

PubMed

The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world. PMID:25151444

Strömberg, Sten; Nistor, Mihaela; Liu, Jing

2014-11-01

404

On the error in crop acreage estimation using satellite (LANDSAT) data  

NASA Technical Reports Server (NTRS)

The problem of crop acreage estimation using satellite data is discussed. Bias and variance of a crop proportion estimate in an area segment obtained from the classification of its multispectral sensor data are derived as functions of the means, variances, and covariance of error rates. The linear discriminant analysis and the class proportion estimation for the two class case are extended to include a third class of measurement units, where these units are mixed on ground. Special attention is given to the investigation of mislabeling in training samples and its effect on crop proportion estimation. It is shown that the bias and variance of the estimate of a specific crop acreage proportion increase as the disparity in mislabeling rates between two classes increases. Some interaction is shown to take place, causing the bias and the variance to decrease at first and then to increase, as the mixed unit class varies in size from 0 to 50 percent of the total area segment.

Chhikara, R. (principal investigator)

1983-01-01

405

Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates  

NASA Astrophysics Data System (ADS)

In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

Jakeman, J. D.; Wildey, T.

2015-01-01

406

DTI quality control assessment via error estimation from Monte Carlo simulations  

NASA Astrophysics Data System (ADS)

Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.

Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.

2013-03-01

407

Mass load estimation errors utilizing grab sampling strategies in a karst watershed  

USGS Publications Warehouse

Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
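The estimator being tested pairs sparse grab-sample concentrations with a continuous flow record; a sketch with invented numbers and a deliberately simple load formula (one of several possible choices):

import numpy as np

def load_from_grabs(grab_conc_mg_L, daily_flow_m3_s):
    # Monthly load = mean grab concentration times total flow volume;
    # 1 mg/L * 1 m^3/s = 1 g/s, so each day contributes 86400 g per unit rate.
    grams = np.mean(grab_conc_mg_L) * np.sum(daily_flow_m3_s) * 86400.0
    return grams / 1000.0  # kg per month

conc = [2.1, 1.8, 2.5, 2.0]      # weekly grab samples, mg/L
flow = np.full(30, 0.8)          # continuous daily mean flow, m^3/s
print(load_from_grabs(conc, flow))

A grab taken consistently at a diurnal extreme biases the mean concentration, which is the failure mode the abstract warns about.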

Fogle, A.W.; Taraba, J.L.; Dinger, J.S.

2003-01-01

408

Volumetric apparatus for hydrogen adsorption and diffusion measurements: Sources of systematic error and impact of their experimental resolutions  

SciTech Connect

The development of a volumetric apparatus (also known as a Sieverts’ apparatus) for accurate and reliable hydrogen adsorption measurement is shown. The instrument minimizes the sources of systematic errors which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and thermodynamical properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are represented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of the molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a 4 order-of-magnitude pressure range (from 1 kPa to 8 MPa) and in temperatures ranging between 77 K and 470 K. The acquired results are in good agreement with the values reported in the literature.

Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N.; Cataldi, Ugo (Dipartimento di Fisica, Università della Calabria, Via Ponte P. Bucci, Cubo 31C, 87036 Arcavacata di Rende (CS), Italy); Abate, Salvatore; Desiderio, Giovanni (DeltaE s.r.l., c/o Università della Calabria, and CNR-IPCF LiCryL, Arcavacata di Rende (CS), Italy); Agostino, Raffaele Giuseppe (Dipartimento di Fisica, Università della Calabria; DeltaE s.r.l.; CNR-IPCF LiCryL)

2013-10-15

409

Error estimation for moment analysis in heavy-ion collision experiment  

NASA Astrophysics Data System (ADS)

Higher moments of conserved quantities are predicted to be sensitive to the correlation length and connected to the thermodynamic susceptibility. Thus, higher moments of net-baryon, net-charge and net-strangeness have been extensively studied theoretically and experimentally to explore the phase structure and bulk properties of QCD matter created in heavy-ion collision experiments. As higher moment analysis is a statistics-hungry study, error estimation is crucial for extracting physics information from the limited experimental data. In this paper, we derive the limit distributions and error formulae, based on the delta theorem in statistics, for various order moments used in the experimental data analysis. A Monte Carlo simulation is also applied to test the error formulae.
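As a concrete instance of the delta-theorem error formulae discussed here, the large-sample standard error of the sample variance depends on the fourth central moment; a sketch:

import numpy as np

def variance_std_error(samples):
    # Delta theorem: Var(sigma^2_hat) ~ (mu4 - sigma^4) / n,
    # with mu4 the fourth central moment of the distribution.
    x = np.asarray(samples, dtype=float)
    c = x - x.mean()
    mu2 = np.mean(c ** 2)
    mu4 = np.mean(c ** 4)
    return np.sqrt((mu4 - mu2 ** 2) / x.size)

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=100_000)
print(variance_std_error(x))  # ~ sqrt(2/n) for a unit-variance Gaussian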

Luo, Xiaofeng

2012-02-01

410

A posteriori error estimations of a coupled mixed and standard Galerkin method for second order operators  

NASA Astrophysics Data System (ADS)

In this paper, we consider a discretization method proposed by Wieners and Wohlmuth [The coupling of mixed and conforming finite element discretizations, in: Domain Decomposition Methods, vol. 10, Contemporary Mathematics, vol. 218, American Mathematical Society, Providence RI, 1998, pp. 547-554] (see also [R.D. Lazarov, J. Pasciak, P.S. Vassilevski, Iterative solution of a coupled mixed and standard Galerkin discretization method for elliptic problems, Numer. Linear Algebra Appl. 8 (2001) 13-31]) for second order operators, which is a coupling between a mixed method in a subdomain and a standard Galerkin method in the remaining part of the domain. We perform an a posteriori error analysis of residual type of this method, by combining some arguments from a posteriori error analysis of Galerkin methods and mixed methods. The reliability and efficiency of the estimator are proved. Some numerical tests are presented and confirm the theoretical error bounds.

Creuse, Emmanuel; Nicaise, Serge

2008-03-01

411

Forest canopy height estimation using ICESat/GLAS data and error factor analysis in Hokkaido, Japan  

NASA Astrophysics Data System (ADS)

Spaceborne light detection and ranging (LiDAR) enables us to obtain information about vertical forest structure directly, and it has often been used to measure forest canopy height or above-ground biomass. However, little attention has been given to comparisons of the accuracy of the different estimation methods of canopy height or to the evaluation of the error factors in canopy height estimation. In this study, we tested three methods of estimating canopy height using the Geoscience Laser Altimeter System (GLAS) onboard NASA's Ice, Cloud, and land Elevation Satellite (ICESat), and evaluated several factors that affected accuracy. Our study areas were Tomakomai and Kushiro, two forested areas on Hokkaido in Japan. The accuracy of the canopy height estimates was verified by ground-based measurements. We also conducted a multivariate analysis using quantification theory type I (multiple-regression analysis of qualitative data) and identified the observation conditions that had a large influence on estimation accuracy. The method using the digital elevation model was the most accurate, with a root-mean-square error (RMSE) of 3.2 m. However, GLAS data with a low signal-to-noise ratio (≤10.0) and that taken from September to October 2009 had to be excluded from the analysis because the estimation accuracy of canopy height was remarkably low. After these data were excluded, the multivariate analysis showed that surface slope had the greatest effect on estimation accuracy, and the accuracy dropped the most in steeply sloped areas. We developed a second model with two equations to estimate canopy height depending on the surface slope, which improved estimation accuracy (RMSE = 2.8 m). These results should prove useful and provide practical suggestions for estimating forest canopy height using spaceborne LiDAR.

Hayashi, Masato; Saigusa, Nobuko; Oguma, Hiroyuki; Yamagata, Yoshiki

2013-07-01

412

Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model  

NASA Technical Reports Server (NTRS)

Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

2000-01-01

413

Synthetic Spike-in Standards Improve Run-Specific Systematic Error Analysis for DNA and RNA Sequencing  

PubMed Central

While the importance of random sequencing errors decreases at higher DNA or RNA sequencing depths, systematic sequencing errors (SSEs) dominate at high sequencing depths and can be difficult to distinguish from biological variants. These SSEs can cause base quality scores to underestimate the probability of error at certain genomic positions, resulting in false positive variant calls, particularly in mixtures such as samples with RNA editing, tumors, circulating tumor cells, bacteria, mitochondrial heteroplasmy, or pooled DNA. Most algorithms proposed for correction of SSEs require a data set used to calculate association of SSEs with various features in the reads and sequence context. This data set is typically either from a part of the data set being “recalibrated” (Genome Analysis ToolKit, or GATK) or from a separate data set with special characteristics (SysCall). Here, we combine the advantages of these approaches by adding synthetic RNA spike-in standards to human RNA, and use GATK to recalibrate base quality scores with reads mapped to the spike-in standards. Compared to conventional GATK recalibration that uses reads mapped to the genome, spike-ins improve the accuracy of Illumina base quality scores by a mean of 5 Phred-scaled quality score units, and by as much as 13 units at CpG sites. In addition, since the spike-in data used for recalibration are independent of the genome being sequenced, our method allows run-specific recalibration even for the many species without a comprehensive and accurate SNP database. We also use GATK with the spike-in standards to demonstrate that the Illumina RNA sequencing runs overestimate quality scores for AC, CC, GC, GG, and TC dinucleotides, while SOLiD has less dinucleotide SSEs but more SSEs for certain cycles. We conclude that using these DNA and RNA spike-in standards with GATK improves base quality score recalibration. PMID:22859977

Zook, Justin M.; Samarov, Daniel; McDaniel, Jennifer; Sen, Shurjo K.; Salit, Marc

2012-01-01

414

Mitigating Channel Estimation Error With … (IEEE Transactions on Signal Processing, Vol. 58, No. 1, January 2010)  

E-print Network

Channel estimation error and co-channel interference (CCI) problems are among the main causes of performance degradation in wireless networks. Two main performance criteria are considered, namely, the traditional outage probability and a measure of the reduction in the SNR due to channel estimation error or CCI. Taking into consideration

Liu, K. J. Ray

415

A priori error estimates for semi-discrete discontinuous Galerkin methods solving nonlinear Hamilton-Jacobi equations with smooth solutions  

E-print Network

In this paper, we provide a priori L2 error estimates for the semi-discrete discontinuous Galerkin method [3] and the local discontinuous Galerkin method [22] for one- and two-dimensional nonlinear Hamilton-Jacobi equations with smooth solutions.

Shu, Chi-Wang

416

Comparison of Variance Estimators of the Horvitz-Thompson Estimator for Randomized Variable Probability Systematic Sampling  

EPA Science Inventory

Two large-scale environmental surveys, the National Stream Survey (NSS) and the Environmental Protection Agency's proposed Environmental Monitoring and Assessment Program (EMAP), motivated investigation of estimators of the variance of the Horvitz-Thompson estimator under variabl...

417

A posteriori error estimates for finite element approximations of the Cahn-Hilliard equation and the Hele-Shaw flow  

Microsoft Academic Search

This paper develops a posteriori error estimates of residual type for conforming and mixed finite element approximations of the fourth order Cahn-Hilliard equation $u_t + \Delta\bigl(\epsilon \Delta u - \epsilon^{-1} f(u)\bigr) = 0$. It is shown that the a posteriori error bounds depend on $\epsilon^{-1}$ only in some low polynomial order, instead of exponential order. Using these a posteriori error estimates, we construct an adaptive

Xiaobing Feng; Haijun Wu

2007-01-01

418

Effects of error covariance structure on estimation of model averaging weights and predictive performance  

SciTech Connect

When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
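The weights themselves come from a standard information-criterion transformation; a sketch showing why the covariance choice matters, with invented IC values:

import numpy as np

def model_averaging_weights(ic_values):
    # w_k proportional to exp(-Delta_k / 2), with Delta_k = IC_k - min(IC).
    ic = np.asarray(ic_values, dtype=float)
    w = np.exp(-(ic - ic.min()) / 2.0)
    return w / w.sum()

# Measurement-error covariance only: large IC gaps, winner takes all.
print(model_averaging_weights([1200.0, 1235.0, 1260.0]))
# Total-error covariance: smaller gaps, weight spread across the models.
print(model_averaging_weights([1200.0, 1202.5, 1204.0]))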

Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steven B.

2013-07-23
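A minimal sketch of the weighting calculus the abstract describes, assuming information-criterion values are already in hand: criterion differences map to model-averaging weights, and a Gaussian negative log likelihood can be evaluated under a correlated total-error covariance standing in for C_Ek. All numbers are invented for illustration.

    import numpy as np

    def gaussian_nll(residuals, cov):
        """Negative log likelihood of residuals under N(0, cov); a correlated
        (total-error) covariance plays the role of C_Ek in the abstract."""
        n = len(residuals)
        sign, logdet = np.linalg.slogdet(cov)
        quad = residuals @ np.linalg.solve(cov, residuals)
        return 0.5 * (n * np.log(2 * np.pi) + logdet + quad)

    def ic_weights(ic_values):
        """Model-averaging weights w_k proportional to exp(-0.5 * delta IC_k)."""
        ic = np.asarray(ic_values, dtype=float)
        delta = ic - ic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Same residuals, uncorrelated vs. AR(1)-correlated error covariance.
    r = np.array([0.30, 0.25, 0.20, 0.22])
    C_E = np.eye(4) * 0.04
    C_Ek = 0.04 * np.fromfunction(lambda i, j: 0.8 ** np.abs(i - j), (4, 4))
    print(gaussian_nll(r, C_E), gaussian_nll(r, C_Ek))

    # Hypothetical AICc values for three alternative models.
    print(ic_weights([210.3, 212.1, 218.9]))  # best model no longer gets ~100%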

420

Mapping the Origins of Time: Scalar Errors in Infant Time Estimation  

PubMed Central

Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and 14-month-olds’ responses to the omission of a recurring target, on either a 3- or 5-s cycle. At all ages (a) both fixation and pupil dilation measures were time locked to the periodicity of the test interval, and (b) estimation errors grew linearly with the length of the interval, suggesting that trademark interval timing is in place from 4 months. PMID:24979472

2014-01-01
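The abstract's central quantitative claim, that timing errors grow linearly with interval length, can be illustrated with a toy simulation assuming a constant Weber fraction (the 0.15 value is arbitrary, not from the study):

    import numpy as np

    rng = np.random.default_rng(0)
    weber_fraction = 0.15             # assumed constant coefficient of variation
    intervals = np.array([3.0, 5.0])  # seconds, as in the 3- and 5-s cycles

    for t in intervals:
        # Scalar timing: estimates are noisy with SD proportional to t.
        estimates = rng.normal(loc=t, scale=weber_fraction * t, size=10_000)
        print(f"interval {t:.0f} s: mean {estimates.mean():.2f}, "
              f"SD {estimates.std():.2f} (SD/t = {estimates.std()/t:.3f})")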

421

On the extreme accuracy of maximum entropy spectrum estimation from an error-free autocorrelation function  

Microsoft Academic Search

The Maximum Entropy Method (MEM) is compared to the periodogram method (DFT) for the estimation of line spectra given an error-free autocorrelation function (ACF). In one computer simulation run, a 250-lag ACF was generated as the sum of 63 cosinusoids with given amplitudes, A_i, and wave numbers, f_i. The wave numbers cover a band from 0 to 89.239 cm^-1 with...

Paul F. Fougere; Hanscom AFB

1987-01-01
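A minimal maximum-entropy sketch under the same setup, assuming an error-free ACF is given: the Levinson-Durbin recursion yields AR coefficients whose all-pole spectrum is then evaluated. The toy single-cosinusoid ACF and its small noise floor are invented for illustration.

    import numpy as np

    def levinson_durbin(acf, order):
        """Solve the Yule-Walker equations for AR coefficients from an ACF."""
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = acf[0]
        for k in range(1, order + 1):
            lam = -(acf[1:k + 1][::-1] @ a[:k]) / err
            rev = a[:k + 1][::-1].copy()
            a[:k + 1] += lam * rev
            err *= (1.0 - lam * lam)
        return a, err

    def mem_spectrum(acf, order, nfreq=512):
        """Maximum entropy (AR, all-pole) power spectrum from an ACF."""
        a, err = levinson_durbin(acf, order)
        freqs = np.linspace(0.0, 0.5, nfreq)
        z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(order + 1)))
        return freqs, err / np.abs(z @ a) ** 2

    # Toy ACF of one cosinusoid at f0 = 0.1 cycles/sample plus a noise floor.
    lags = np.arange(64)
    acf = np.cos(2 * np.pi * 0.1 * lags)
    acf[0] += 0.01
    freqs, psd = mem_spectrum(acf, order=16)
    print(freqs[np.argmax(psd)])  # peaks near 0.1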

422

Estimation of Aperture Errors with Direct Interferometer-Output Feedback for Spacecraft Formation Control  

NASA Technical Reports Server (NTRS)

Long-baseline space interferometers involving formation flying of multiple spacecraft hold great promise as future space missions for high-resolution imagery. The major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to control these spacecraft and their optics payloads in the specified configuration accurately. In this paper, we describe our effort toward fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present an estimation procedure that effectively extracts relative x/y translational exit pupil aperture deviations from the raw interferometric image with small estimation errors.

Lu, Hui-Ling; Cheng, Victor H. L.; Leitner, Jesse A.; Carpenter, Kenneth G.

2004-01-01

423

Estimate of precession and polar motion errors from planetary encounter station location solutions  

NASA Technical Reports Server (NTRS)

Jet Propulsion Laboratory Deep Space Station (DSS) location solutions based on two JPL planetary ephemerides, DE 84 and DE 96, at eight planetary encounters were used to obtain weighted least squares estimates of precession and polar motion errors. The solution for precession error in right ascension yields a value of (0.3 ± 0.08) × 10^-5 deg/year. This maps to a right ascension error of (1.3 ± 0.4) × 10^-5 deg at the first Voyager 1979 Jupiter encounter if the current JPL DSS location set is used. Solutions for precession and polar motion using station locations based on DE 84 agree well with the solution using station locations referenced to DE 96. The precession solution removes the apparent drift in station longitude and spin axis distance estimates, while the encounter polar motion solutions consistently decrease the scatter in station spin axis distance estimates.

Pease, G. E.

1978-01-01
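The estimation step described here is a weighted least squares fit; a generic sketch (with a made-up two-parameter drift model, not the actual DSS location solutions) is:

    import numpy as np

    def weighted_least_squares(A, y, sigma):
        """WLS estimate x_hat and its covariance for y = A x + noise(sigma)."""
        W = np.diag(1.0 / sigma**2)           # inverse-variance weights
        cov = np.linalg.inv(A.T @ W @ A)      # parameter covariance
        x_hat = cov @ (A.T @ W @ y)
        return x_hat, cov

    # Toy problem: linear drift in a station coordinate (offset + rate).
    t = np.linspace(0.0, 8.0, 8)                 # eight encounters
    A = np.column_stack([np.ones_like(t), t])
    truth = np.array([0.5, -0.2])
    rng = np.random.default_rng(1)
    sigma = np.full(t.size, 0.3)
    y = A @ truth + rng.normal(0.0, sigma)
    x_hat, cov = weighted_least_squares(A, y, sigma)
    print(x_hat, np.sqrt(np.diag(cov)))          # estimates +/- 1-sigma errors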

424

Blending dynamic and static estimates of error covariance to enable successful ocean data assimilation  

NASA Astrophysics Data System (ADS)

We are interested in global ocean data assimilation for the purpose of initializing climate models. We consider ensemble-based data assimilation in the context of the POP (Parallel Ocean Program) OGCM and ask the specific question of whether there is a state of the modelled ocean circulation that is compatible with observations of the ocean, as represented in the World Ocean Database, when the OGCM is forced by the CORE v2 (Coordinated Ocean Reference Experiment version 2) estimate of the atmosphere. A frequent problem that plagues ensemble-based ocean data assimilation is filter divergence, wherein poor or collapsed ensemble spread is interpreted by the filter as high certainty in the model forecast, forcing it to neglect observations, in turn leading to increased RMS error and an eventual failure of the assimilation. Blending the dynamically estimated, ensemble-based error covariance with a static background estimate of the error covariance allows us to circumvent this problem and enables successful data assimilation. Results from a successful assimilation experiment will be discussed.

Nadiga, B. T.; Casper, W. R.

2012-04-01
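A minimal sketch of the blending idea, assuming a simple convex combination of the ensemble sample covariance with a static background covariance (the weight alpha and all data are invented; the study's actual blending scheme may differ):

    import numpy as np

    def blended_covariance(ensemble, B_static, alpha=0.5):
        """Blend ensemble (flow-dependent) and static background covariances.

        ensemble: (n_members, n_state) array of forecast states. Blending
        guards against filter divergence when the ensemble spread collapses.
        """
        P_ens = np.cov(ensemble, rowvar=False)     # dynamic estimate
        return alpha * P_ens + (1.0 - alpha) * B_static

    # Toy: 10-member ensemble of a 4-variable state, climatological B = I.
    rng = np.random.default_rng(2)
    ens = rng.normal(size=(10, 4)) * 0.1   # collapsed spread
    P = blended_covariance(ens, np.eye(4), alpha=0.6)
    print(np.diag(P))  # variances stay bounded away from zero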

425

Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis  

NASA Technical Reports Server (NTRS)

Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level improving to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.

Abdol-Hamid, Khaled S.; Ghaffari, Farhad

2012-01-01
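The extrapolation from a base grid to an infinite-size grid is conventionally done by Richardson extrapolation; a sketch under an assumed refinement ratio and observed order of convergence (the coefficient values are hypothetical):

    # Richardson extrapolation of a grid-converged quantity, as in deriving
    # an "infinite-size grid" value from a refinement study. r is the grid
    # refinement ratio and p the observed order of convergence.
    def richardson_extrapolate(f_fine, f_coarse, r=2.0, p=2.0):
        f_exact = f_fine + (f_fine - f_coarse) / (r**p - 1.0)
        error_estimate = abs(f_exact - f_fine)  # discretization error, fine grid
        return f_exact, error_estimate

    # Hypothetical normal-force coefficients from coarse and fine grids.
    cn_inf, err = richardson_extrapolate(f_fine=1.23, f_coarse=1.31)
    print(cn_inf, err)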

426

Sampling error study for rainfall estimate by satellite using a stochastic model  

NASA Technical Reports Server (NTRS)

In a parameter study of satellite orbits, sampling errors of area-time averaged rain rate due to temporal sampling by satellites were estimated. The sampling characteristics were studied by accounting for the varying visiting intervals and varying fractions of averaging area on each visit as a function of the latitude of the grid box for a range of satellite orbital parameters. The sampling errors were estimated by a simple model based on a first-order Markov process for the time series of area-averaged rain rates. For a satellite in the nominal Tropical Rainfall Measuring Mission orbit (Thiele, 1987) carrying an ideal scanning microwave radiometer for precipitation measurements, it is found that the sampling error would be about 8 to 12% of the estimated monthly mean rates over a grid box of 5 x 5 degrees. It is suggested that an observation system combining a low-inclination satellite with a sun-synchronous satellite might be the best candidate for making precipitation measurements from space.

Shin, Kyung-Sup; North, Gerald R.

1988-01-01
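A toy version of the first-order Markov sampling-error model: simulate an AR(1) surrogate for area-averaged rain rate, sample it at a fixed revisit interval, and measure the error of the monthly mean. The decorrelation time and revisit interval are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(3)
    n_hours, tau, revisit = 720, 12.0, 12  # one month; 12 h decorrelation; 12 h revisits
    phi = np.exp(-1.0 / tau)               # first-order Markov (AR(1)) coefficient

    def month_of_rain():
        """AR(1) surrogate for hourly area-averaged rain rate (arbitrary units)."""
        x = np.zeros(n_hours)
        for t in range(1, n_hours):
            x[t] = phi * x[t - 1] + rng.normal(0.0, np.sqrt(1 - phi**2))
        return x

    # Compare the true monthly mean with the satellite's intermittent estimate.
    errors = []
    for _ in range(2000):
        x = month_of_rain()
        errors.append(x[::revisit].mean() - x.mean())
    print(np.std(errors))   # sampling error of the monthly mean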

427

Error analysis of leaf area estimates made from allometric regression models  

NASA Technical Reports Server (NTRS)

Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).

Feiveson, A. H.; Chhikara, R. S.

1986-01-01
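The two-stage error structure can be propagated by Monte Carlo; a sketch with an invented allometric regression (coefficients, covariance, and measurement noise are illustrative, not the COVER study's values):

    import numpy as np

    rng = np.random.default_rng(4)

    # Stage 1 (dimension analysis): regression of leaf area on bole diameter,
    # fitted from a few destructively sampled trees. The coefficients and
    # their covariance below are invented.
    beta_hat = np.array([-5.0, 2.4])
    beta_cov = np.array([[0.9, -0.05], [-0.05, 0.01]])

    # Stage 2: diameters measured on all plot trees (with measurement noise).
    diameters = rng.uniform(10.0, 30.0, size=200)

    # Propagate both error sources by Monte Carlo.
    totals = []
    for _ in range(5000):
        b = rng.multivariate_normal(beta_hat, beta_cov)       # stage-1 uncertainty
        d = diameters + rng.normal(0.0, 0.5, diameters.size)  # stage-2 noise
        totals.append(np.sum(b[0] + b[1] * d))
    print(np.mean(totals), np.std(totals))  # plot leaf area +/- propagated error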

428

Sieve Estimation of Constant and Time-Varying Coefficients in Nonlinear Ordinary Differential Equation Models by Considering Both Numerical Error and Measurement Error  

PubMed Central

This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge-Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^(-1/(p∧4)), the numerical error is negligible compared to the measurement error. This result provides theoretical guidance for selecting the step size in numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as in the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics. PMID:21132064

Xue, Hongqi; Miao, Hongyu; Wu, Hulin

2010-01-01
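A minimal numerical-solution-based NLS sketch, using SciPy's Runge-Kutta solver inside a least squares fit of a toy exponential-decay ODE (the model and noise level are invented; the paper's application is HIV viral dynamics):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import least_squares

    # Constant-coefficient toy ODE: x' = -theta * x, observed with noise.
    t_obs = np.linspace(0.0, 5.0, 20)
    theta_true = 0.8
    rng = np.random.default_rng(5)
    y_obs = np.exp(-theta_true * t_obs) + rng.normal(0.0, 0.02, t_obs.size)

    def residuals(theta):
        # The Runge-Kutta solver approximates the ODE solution; its
        # tolerances control the numerical error discussed in the abstract.
        sol = solve_ivp(lambda t, x: -theta[0] * x, (0.0, 5.0), [1.0],
                        t_eval=t_obs, rtol=1e-8, atol=1e-10)
        return sol.y[0] - y_obs

    fit = least_squares(residuals, x0=[0.5])
    print(fit.x)  # estimate of theta, close to 0.8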

429

Evaluation of errors made in solar irradiance estimation due to averaging the Angstrom turbidity coefficient  

NASA Astrophysics Data System (ADS)

Even though the monitoring of solar radiation has seen vast progress in recent years, both in expanding the measurement networks and in increasing data quality, the number of stations is still too small to achieve accurate global coverage. Various models for estimating solar radiation are therefore exploited in many applications. The choice of a model is often limited by the availability of the meteorological parameters required to run it. In many cases the current values of the parameters are replaced with daily, monthly or even yearly average values. This paper evaluates the error made in estimating global solar irradiance when an average value of the Angstrom turbidity coefficient is used instead of its current value. A simple equation relating the relative variation of the global solar irradiance to the relative variation of the Angstrom turbidity coefficient is established. The theoretical result is complemented by a quantitative assessment of the errors made when hourly, daily, monthly or yearly average values of the Angstrom turbidity coefficient are used as input to a parametric solar irradiance model. The study was conducted with data recorded in 2012 at two AERONET stations in Romania. It is shown that the relative errors in estimated global solar irradiance (GHI) due to inadequate treatment of the Angstrom turbidity coefficient may be very high, even exceeding 20%. However, when an hourly or daily average value is used instead of the current value of the Angstrom turbidity coefficient, the relative errors are acceptably small, generally below 5%. All results prove that, in order to correctly reproduce GHI for various particular aerosol loadings of the atmosphere, the parametric models should rely on hourly or daily Angstrom turbidity coefficient values rather than on the more usual monthly or yearly average data, if currently measured data are not available.

Calinoiu, Delia-Gabriela; Stefu, Nicoleta; Paulescu, Marius; Trif-Tordai, Gavrilă; Mares, Oana; Paulescu, Eugenia; Boata, Remus; Pop, Nicolina; Pacurar, Angel

2014-12-01
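The leverage of an averaged turbidity coefficient on estimated irradiance can be illustrated with a toy Bouguer-type attenuation law (the exponential form, air mass, and beta values are assumptions, not the parametric model used in the paper):

    import numpy as np

    # Toy attenuation: direct irradiance ~ exp(-m * beta), with air mass m
    # and Angstrom turbidity coefficient beta.
    m = 1.5
    beta_hourly = np.array([0.05, 0.08, 0.12, 0.20, 0.09, 0.06])
    beta_mean = beta_hourly.mean()

    ghi_true = np.exp(-m * beta_hourly)
    ghi_avg = np.exp(-m * beta_mean)
    rel_err = (ghi_avg - ghi_true) / ghi_true
    print(np.round(100 * rel_err, 1))  # percent error from using the average beta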

430

Frequency Offset Estimation Technique Based on Error Characterization for OFDM Communications on Time-Varying Multipath Fading Channels  

Microsoft Academic Search

A novel frequency offset estimation technique based on maximum-likelihood estimation (MLE) for wireless OFDM communications over a doubly-selective fading channel is proposed. By taking advantage of subcarrier-level differential operation and coherent error characterization, the proposed estimator can effectively overcome frequency-selective fading effects. Frequency error characterization is achieved by pseudo-noise (PN) matched filters (MF) in the frequency direction; thus, the proposed...

Jia-Chin Lin

2006-01-01

431

Optimum data weighting and error calibration for estimation of gravitational parameters  

NASA Technical Reports Server (NTRS)

A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than those of the gravity model.

Lerch, F. J.

1989-01-01
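A simplified stand-in for the calibration idea: scale formal errors until subset-minus-full parameter differences are statistically consistent with their predicted uncertainties (all numbers are invented; the actual method iterates data weights rather than applying a single factor):

    import numpy as np

    def calibration_factor(theta_full, theta_subset, sigma_diff):
        """Scale factor for formal errors so that subset-minus-full parameter
        differences match their predicted uncertainties (reduced chi-square = 1)."""
        chi2 = np.sum(((theta_subset - theta_full) / sigma_diff) ** 2)
        return np.sqrt(chi2 / theta_full.size)

    theta_full = np.array([1.00, -0.50, 0.25])
    theta_sub = np.array([1.08, -0.41, 0.29])
    sigma_diff = np.array([0.02, 0.03, 0.02])
    k = calibration_factor(theta_full, theta_sub, sigma_diff)
    print(k)  # k > 1 means the formal errors were optimistic; inflate by k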

432

Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains  

NASA Technical Reports Server (NTRS)

Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple colocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting an RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple colocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.

Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang

2013-01-01
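A sketch of covariance-based triple colocation on synthetic anomaly series (the error levels, and the sensor labels in the comments, are invented):

    import numpy as np

    def triple_collocation_rmse(x, y, z):
        """Covariance-based triple colocation: error standard deviations of
        three collocated estimates of the same signal with independent errors."""
        c = np.cov(np.vstack([x, y, z]))
        var_x = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
        var_y = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
        var_z = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
        return np.sqrt([var_x, var_y, var_z])

    # Synthetic anomaly time series: common signal plus independent errors.
    rng = np.random.default_rng(6)
    truth = rng.normal(0.0, 1.0, 5000)
    x = truth + rng.normal(0.0, 0.3, truth.size)   # e.g. scatterometer-like
    y = truth + rng.normal(0.0, 0.5, truth.size)   # e.g. radiometer-like
    z = truth + rng.normal(0.0, 0.4, truth.size)   # e.g. model product
    rmse = triple_collocation_rmse(x, y, z)
    print(rmse, rmse / truth.std())  # absolute and fractional (fRMSE-like) values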

433

DTI Quality Control Assessment via Error Estimation From Monte Carlo Simulations.  

PubMed

Diffusion Tensor Imaging (DTI) is currently the state-of-the-art method for characterizing microscopic tissue structure in the white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations, leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte Carlo simulation-based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC. PMID:23833547

Farzinfar, Mahshid; Li, Yin; Verde, Audrey R; Oguz, Ipek; Gerig, Guido; Styner, Martin A

2013-03-13
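The Rician noise model at the heart of such simulations can be reproduced in a few lines: the magnitude of a complex Gaussian-corrupted signal, whose noise floor biases weak diffusion-weighted signals upward (signal levels and sigma are illustrative):

    import numpy as np

    def rician(signal, sigma, rng):
        """Magnitude MR signal under complex Gaussian noise (Rician distributed)."""
        real = signal + rng.normal(0.0, sigma, signal.shape)
        imag = rng.normal(0.0, sigma, signal.shape)
        return np.hypot(real, imag)

    # Monte Carlo: the noise floor biases low diffusion-weighted signals upward.
    rng = np.random.default_rng(7)
    true_signal = np.array([0.05, 0.2, 1.0])   # strongly to weakly attenuated DWIs
    draws = rician(np.tile(true_signal, (20000, 1)), sigma=0.05, rng=rng)
    print(draws.mean(axis=0))  # upward bias is largest at low signal levels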

434

Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation  

NASA Technical Reports Server (NTRS)

A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and of ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. A simple cost function is then used to determine the optimum allocation of the sample across the stages of the design for estimating a regression coefficient.

Kalton, G.

1983-01-01
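The precision loss from clustering is conventionally summarized by a design effect; a worked toy calculation assuming the standard deff = 1 + (m - 1) * rho approximation (the numbers are invented, not from the noise surveys):

    # Design effect for an estimate under two-stage clustering, with m units
    # per cluster and intraclass correlation rho. Standard errors computed
    # from a simple-random-sampling formula are inflated by sqrt(deff).
    m, rho = 20, 0.08                    # e.g. 20 households per survey cluster
    deff = 1 + (m - 1) * rho             # = 2.52
    se_srs = 0.05                        # standard error ignoring clustering
    se_clustered = se_srs * deff ** 0.5  # ~0.079
    print(deff, se_clustered)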

435

A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation  

NASA Technical Reports Server (NTRS)

The author has identified the following significant results. The probability of correct classification of various populations in data was defined as the primary performance index. The multispectral data, being of a multiclass nature as well, required a Bayes error estimation procedure dependent on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear, shift-invariant, multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial, and hence spectral, correlation matrices through the system, was developed.

Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (principal investigators)

1978-01-01
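A Bayes error estimate that depends on class statistics alone can be approximated by Monte Carlo integration; a sketch for Gaussian class-conditional densities (the class means, covariances, and priors are invented):

    import numpy as np
    from scipy.stats import multivariate_normal

    # Monte Carlo estimate of the multiclass Bayes error from class statistics
    # alone (Gaussian class-conditional densities with given priors).
    rng = np.random.default_rng(8)
    means = [np.zeros(2), np.array([2.0, 0.0]), np.array([0.0, 2.5])]
    covs = [np.eye(2), np.eye(2) * 1.5, np.eye(2)]
    priors = np.array([0.5, 0.3, 0.2])

    n, errors = 100_000, 0
    for k, (mu, cov, p) in enumerate(zip(means, covs, priors)):
        x = rng.multivariate_normal(mu, cov, size=int(n * p))
        post = np.column_stack([q * multivariate_normal.pdf(x, m, c)
                                for q, m, c in zip(priors, means, covs)])
        errors += np.sum(post.argmax(axis=1) != k)
    print(errors / sum(int(n * p) for p in priors))  # estimated Bayes error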

436

Statistical and Systematic Errors in the Measurement of Weak-Lensing Minkowski Functionals: Application to the Canada-France-Hawaii Lensing Survey  

NASA Astrophysics Data System (ADS)

The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ~1400 deg^2, will constrain the dark energy equation-of-state parameter with an error of Δw_0 ~ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256^{+0.054}_{-0.046}.

Shirasaki, Masato; Yoshida, Naoki

2014-05-01
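A generic Fisher-analysis sketch of the kind the abstract describes: given derivatives of the observables with respect to the parameters and the covariance of the observables, forecast parameter errors follow from the inverse Fisher matrix (the Jacobian and covariance here are random placeholders, not lensing MFs):

    import numpy as np

    def fisher_matrix(dmu_dtheta, cov):
        """Fisher matrix F = J^T C^{-1} J for a Gaussian likelihood with
        parameter-independent covariance C and mean derivatives J."""
        J = np.atleast_2d(dmu_dtheta)
        return J.T @ np.linalg.solve(cov, J)

    # Toy: two parameters constrained by five (e.g. MF-like) statistics.
    rng = np.random.default_rng(9)
    J = rng.normal(size=(5, 2))             # d(statistic)/d(parameter), assumed known
    C = np.diag(rng.uniform(0.5, 2.0, 5))   # statistical covariance of the statistics
    F = fisher_matrix(J, C)
    sigma = np.sqrt(np.diag(np.linalg.inv(F)))
    print(sigma)  # marginalized 1-sigma forecasts, as for Δw_0 in the abstract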

437

Save now, pay later? Multi-period many-objective groundwater monitoring design given systematic model errors and uncertainty  

NASA Astrophysics Data System (ADS)

This study demonstrates how many-objective long-term groundwater monitoring (LTGM) network design tradeoffs evolve across multiple management periods given systematic model errors (i.e., predictive bias), groundwater flow-and-transport forecasting uncertainties, and contaminant observation uncertainties. Our analysis utilizes the Adaptive Strategies for Sampling in Space and Time (ASSIST) framework, which is composed of three primary components: (1) bias-aware Ensemble Kalman Filtering, (2) many-objective hierarchical Bayesian optimization, and (3) interactive visual analytics for understanding spatiotemporal network design tradeoffs. A physical aquifer experiment is utilized to develop a severely challenging multi-period observation system simulation experiment (OSSE) that reflects the challenges and decisions faced in monitoring contaminated groundwater systems. The experimental aquifer OSSE shows both the influence and the consequences of plume dynamics, as well as alternative cost-savings strategies, in shaping how LTGM many-objective tradeoffs evolve. Our findings highlight the need to move beyond least-cost, purely statistical monitoring frameworks to consider many-objective evaluations of LTGM tradeoffs. The ASSIST framework provides a highly flexible approach for measuring the value of observables that simultaneously improves how the data are used to inform decisions.

Reed, P. M.; Kollat, J. B.

2012-01-01