Note: This page contains sample records for the topic "estimated systematic error" from Science.gov.
While these samples are representative of the content of Science.gov,
they are not comprehensive nor are they the most current set.
We encourage you to perform a real-time search of Science.gov
to obtain the most current and comprehensive results.
Last update: November 12, 2013.
1

Systematic errors in ground heat flux estimation and their correction  

NASA Astrophysics Data System (ADS)

Incoming radiation forcing at the land surface is partitioned among the components of the surface energy balance in varying proportions depending on the time scale of the forcing. Based on a land-atmosphere analytic continuum model, a numerical land surface model, and field observations, we show that high-frequency fluctuations in incoming radiation (with periods shorter than 6 h, for example due to intermittent clouds) are preferentially partitioned toward ground heat flux. These higher frequencies are concentrated in the 0-1 cm surface soil layer. Consequently, measurements taken even a few centimeters deep in the soil profile miss part of the surface soil heat flux signal. The attenuation of the high-frequency soil heat flux spectrum through the soil profile leads to systematic errors in both measurements and modeling unless the soil is sampled very finely near the surface (0-1 cm). Calorimetric measurement techniques introduce a systematic error in the form of an artificial band-pass filter if the temperature probes are not placed at appropriate depths. In addition, the temporal calculation of the change in the heat storage term of the calorimetric method can further distort the reconstruction of the surface soil heat flux signal. A correction methodology is introduced that offers practical applications as well as insights into the estimation of surface soil heat flux and the closure of the surface energy balance based on field measurements.
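
A minimal sketch of the attenuation at work here, assuming standard heat-conduction theory: a harmonic forcing component of period T decays with depth z as exp(-z/d), where d = sqrt(κT/π) is the damping depth. The diffusivity value below is illustrative, not taken from the paper.

```python
import numpy as np

# A harmonic forcing component of period T (s) in soil with thermal
# diffusivity kappa (m^2/s) decays with depth z as exp(-z / d), where
# d = sqrt(kappa * T / pi) is the damping depth.
kappa = 5e-7  # m^2/s, illustrative value for a moist soil


def surviving_fraction(z_m, period_s):
    """Fraction of a harmonic's amplitude remaining at depth z_m (m)."""
    d = np.sqrt(kappa * period_s / np.pi)
    return np.exp(-z_m / d)


for hours in (24.0, 6.0, 1.0):
    frac = surviving_fraction(0.05, hours * 3600.0)  # probe at 5 cm depth
    print(f"period {hours:4.1f} h: {frac:.2f} of the amplitude reaches 5 cm")
```

Run as-is, the high-frequency (1-6 h) components are attenuated far more strongly at 5 cm depth than the diurnal component, which is the mechanism behind the systematic measurement error described in the abstract.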

Gentine, P.; Entekhabi, D.; Heusinkveld, B.

2012-09-01

2

Systematic errors in radar wind estimation: Implications for comparative measurements  

Microsoft Academic Search

SA, IDI, and DBS horizontal wind estimators are examined and demonstrated to share common biases under inhomogeneous flow field conditions. The biases can reach magnitudes of several tens of meters per second at mesospheric heights, contaminate gravity wave flow field measurements, cause distortions in measured flow field spectra, and possibly account for large wind measurement errors evidenced in recent experimental studies…

Erhan Kudeki; Prabhat K. Rastogi; Fahri Sürücü

1993-01-01

3

Systematic Errors in Radar Wind Estimation: Implications for Comparative Measurements  

NASA Astrophysics Data System (ADS)

SA, IDI, and DBS horizontal wind estimators are examined and demonstrated to share common biases under inhomogeneous flow field conditions. The biases can reach magnitudes of several tens of meters per second at mesospheric heights, contaminate gravity wave flow field measurements, cause distortions in measured flow field spectra, and possibly account for large wind measurement errors evidenced in recent experimental studies including the AIDA'89 effort. However, being a linear response to gravity wave flow field fluctuations, the biases are expected to cancel out in long-term wind and tidal statistics compiled with radar wind data.

Kudeki, Erhan; Rastogi, Prabhat K.; Sürücü, Fahri

1993-03-01

4

Estimates of the systematic errors of the optoelectronic measuring equipment of navigation systems  

NASA Astrophysics Data System (ADS)

A method is proposed for monitoring and estimating the systematic errors of measurements conducted by the optoelectronic measuring package of astronavigation systems. The principle of the method, which takes advantage of the high stability of the relative positions of navigation stars, is discussed. By using a priori information about the positions of navigation stars, methods for monitoring and estimating systematic errors can also be developed for other types of systems that rely on measuring the angular coordinates of two or more stars.

Goliakov, A. D.; Serogodskii, V. V.

1990-08-01

5

Systematic error revisited  

SciTech Connect

The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.

Glosup, J.G.; Axelrod, M.C.

1996-08-05

6

Systematic Procedural Error.  

National Technical Information Service (NTIS)

Even when executing routine procedures with relatively simple devices, people make nonrandom errors. Consequences range from the trivial to the fatal, with Navy personnel often operating at the more extreme end of this range. This problem has received sur...

M. D. Byrne

2006-01-01

7

Estimation of systematic errors in UHE CR energy reconstruction for ANITA-3 experiment  

NASA Astrophysics Data System (ADS)

The third mission of the balloon-borne ANtarctic Impulsive Transient Antenna (ANITA-3), scheduled for December 2013, will be optimized for the measurement of impulsive radio signals from Ultra-High Energy Cosmic Rays (UHE CR), i.e., charged particles with energies above 10^19 eV, in addition to the neutrinos ANITA was originally designed for. The event reconstruction algorithm for UHE CR relies on the detection of radio-frequency (RF) emissions in the range 200-1200 MHz produced by the charged component of Extensive Air Showers initiated by these particles. The UHE CR energy reconstruction method for ANITA is subject to systematic uncertainties introduced by the models used in Monte Carlo simulations of the RF emission. The present study evaluates these systematic uncertainties by comparing the outputs of two RF simulation codes, CoREAS and ZHAireS, for different event statistics and propagating the differences in the outputs through the energy reconstruction method.

Bugaev, Viatcheslav; Rauch, Brian; Binns, Robert; Israel, Martin; Belov, Konstantin; Wissel, Stephanie; Romero-Wolf, Andres

2013-04-01

8

Effect of systematic and random errors on the confidence of cutting tool wear estimates  

NASA Astrophysics Data System (ADS)

The principal requirements for increasing the confidence level of estimates of comparative cutting tool wear are discussed. Particular attention is given to the justified selection of the minimum sufficient sample size for statistically estimating the wear resistance of cutting tools with a specified level of confidence.

Trakhtenberg, B. F.; Safiulin, G. G.; Tarasov, A. V.

9

Improving sidescan sonar mosaic accuracy by accounting for systematic errors  

Microsoft Academic Search

Estimating the systematic errors associated with sensor geometry during data acquisition and applying the appropriate corrections can improve the accuracy of a sidescan sonar mosaic. We first review existing techniques for assembling mosaics and explain where systematic errors arise within typical sidescan sonar surveys. We then describe a new technique called systematic error reduction and compare its advantages to the…

D. Charlot; R. Schaaf; X. Brossard

2001-01-01

10

Evaluation of Data with Systematic Errors  

SciTech Connect

Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without common errors. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward.
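
A minimal numerical illustration of the quadrature rule and the repetition argument stated above, assuming a measurement model with independent statistical errors plus one fully common error; the sigma values are illustrative.

```python
import numpy as np

# n repeated measurements with independent statistical error sigma_stat and
# one fully common (systematic) error sigma_sys have covariance
#   C_ij = sigma_stat^2 * delta_ij + sigma_sys^2,
# so the variance of the plain average is sigma_stat^2 / n + sigma_sys^2:
# repetition shrinks only the statistical part.
sigma_stat, sigma_sys = 0.10, 0.05

for n in (1, 10, 100):
    C = sigma_stat**2 * np.eye(n) + sigma_sys**2 * np.ones((n, n))
    w = np.full(n, 1.0 / n)       # weights of the plain average
    var_mean = w @ C @ w          # variance of the average
    print(f"n = {n:3d}: sigma of mean = {np.sqrt(var_mean):.4f}")
# The printed values floor at sigma_sys = 0.05, and for n = 1 the two
# components combine in quadrature: sqrt(0.10^2 + 0.05^2) ~= 0.1118.
```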

Froehner, F. H. [Forschungszentrum Karlsruhe (Germany)

2003-11-15

11

Estimating GPS Positional Error  

NSDL National Science Digital Library

After instructing students on basic receiver operation, each student will make many (10-20) position estimates of 3 benchmarks over a week. The different benchmarks will have different views of the sky or vegetation cover. Each student will download their data into a spreadsheet and calculate horizontal and vertical errors, which are collated into a class spreadsheet. The positions are sorted by error and plotted in a cumulative frequency plot. The students are encouraged to discuss the distribution, sources of error, and estimate confidence intervals. This exercise gives the students a gut feeling for confidence intervals and the accuracy of data. Students are asked to compare results from different types of data and benchmarks with different views of the sky. Tags: uses online and/or real-time data; has minimal/no quantitative component; addresses student fear of quantitative aspect and/or inadequate quantitative skills; addresses student misconceptions.
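
A hypothetical sketch of the spreadsheet analysis described above, with synthetic fixes standing in for student data; the error magnitudes are invented for illustration.

```python
import numpy as np

# Synthetic "class spreadsheet": 20 fixes of one benchmark with east/north/up
# errors in metres (magnitudes are invented for illustration).
rng = np.random.default_rng(0)
fixes_enu = rng.normal(0.0, [3.0, 3.0, 6.0], size=(20, 3))

horiz_err = np.hypot(fixes_enu[:, 0], fixes_enu[:, 1])   # horizontal error
vert_err = np.abs(fixes_enu[:, 2])                       # vertical error

# Sort the errors and build the cumulative frequency curve.
err_sorted = np.sort(horiz_err)
cum_freq = np.arange(1, err_sorted.size + 1) / err_sorted.size
p95 = np.interp(0.95, cum_freq, err_sorted)              # empirical 95% bound
print(f"median horizontal error {np.median(horiz_err):.1f} m; "
      f"95% of fixes within {p95:.1f} m")
print(f"median vertical error {np.median(vert_err):.1f} m")
```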

Witte, Bill

12

Bioelectrical impedance analysis to estimate body composition in children and adolescents: a systematic review and evidence appraisal of validity, responsiveness, reliability and measurement error.  

PubMed

Bioelectrical impedance analysis (BIA) is a practical method to estimate percentage body fat (%BF). In this systematic review, we aimed to assess the validity, responsiveness, reliability and measurement error of BIA methods in estimating %BF in children and adolescents. We searched for relevant studies in PubMed, Embase and Cochrane through November 2012. Two reviewers independently screened titles and abstracts for inclusion, extracted data and rated the methodological quality of the included studies. We performed a best-evidence synthesis to synthesize the results, thereby excluding studies of poor quality. We included 50 published studies. Mean differences between BIA and reference methods (gold standard [criterion validity] and convergent measures of body composition [convergent validity]) were considerable and ranged from negative to positive values, resulting in conflicting evidence for criterion validity. We found strong evidence of good reliability, i.e., (intra-class) correlations ≥ 0.82. However, test-retest mean differences ranged from 7.5% to 13.4% of total %BF in the included study samples, indicating considerable measurement error. Our systematic review suggests that BIA is a practical method to estimate %BF in children and adolescents. However, validity and measurement error are not satisfactory. PMID:23848977

Talma, H; Chinapaw, M J M; Bakker, B; Hirasing, R A; Terwee, C B; Altenburg, T M

2013-07-12

13

Simulation of Systematic Errors in Phase-Referenced VLBI Astrometry  

NASA Astrophysics Data System (ADS)

The astrometric accuracy in the relative coordinates of two angularly-close radio sources observed with the phase-referencing VLBI technique is limited by systematic errors. These include geometric errors and atmospheric errors. Based on simulation with the SPRINT software, we evaluate the impact of these errors in the estimated relative source coordinates for standard VLBA observations. Such evaluations are useful to estimate the actual accuracy of phase-referenced VLBI astrometry.

Pradel, N.; Charlot, P.; Lestrade, J.-F.

2005-12-01

14

A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers  

SciTech Connect

The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R. [William B. Hanson Center for Space Sciences, University of Texas at Dallas, 800 W. Campbell Rd. WT15, Richardson, Texas 75080 (United States)

2009-05-15

15

Measuring Systematic Error with Curve Fits  

NASA Astrophysics Data System (ADS)

Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model.1-3 In this paper I give three examples in which my students use popular curve-fitting software and adjust the theoretical model to account for, and even exploit, the presence of systematic errors in measured data.
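
A hedged example in the spirit of the approach described above (not one of the paper's three examples): the fit model is given an extra parameter to absorb a systematic zero offset. It assumes scipy's curve_fit and synthetic free-fall data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic free-fall data whose distance readings carry a 3 cm zero offset.
t = np.linspace(0.1, 1.0, 10)                 # s
d_meas = 0.5 * 9.81 * t**2 + 0.030            # m, with systematic offset


def model(t, g, d0):
    # d0 is the extra parameter that absorbs the systematic zero error.
    return 0.5 * g * t**2 + d0


(g_fit, d0_fit), _ = curve_fit(model, t, d_meas, p0=(10.0, 0.0))
print(f"g = {g_fit:.2f} m/s^2, recovered offset = {100 * d0_fit:.1f} cm")
# Fitting without d0 would bias g; including it recovers both g and the offset.
```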

Rupright, Mark E.

2011-01-01

16

Omega Wind-Error Estimation.  

National Technical Information Service (NTIS)

The estimation of winds and their errors for Omega radiosonde soundings has been addressed by several authors in the past years. Major improvements over an earlier wind error model have been in modeling VLF propagation for estimation of phase variances of...

M. L. Olson R. M. Passi A. Schumann

1978-01-01

17

Measuring Systematic Error with Curve Fits  

ERIC Educational Resources Information Center

Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

Rupright, Mark E.

2011-01-01

18

Assessment of systematic errors in the calibration of SPECT systems for I-131 dosimetry estimations in radio-immunotherapy  

Microsoft Academic Search

In radioimmunotherapy, SPECT has a potentially important role in dosimetry estimates for the critical organ systems and target tumor(s). The ability to quantitate activity within a tomographic plane has been addressed for low photon energies. However, the practical problems in calibrating SPECT systems for measuring activity in volume sources include the uncertainties associated with: a) selecting volumes of interest (VOI)…

L. P. Clarke; C. B. Saw; L. Leong; A. Heal; G. Sfakianakis; F. Ashkar; A. Serafini

1984-01-01

19

Defining random and systematic error in precipitation interpolation  

NASA Astrophysics Data System (ADS)

Variogram-based interpolation methods are widely applied in hydrology. Kriging estimates an expectation value and an associated distribution, while simulations provide a distribution of possible realizations of the random function at the unknown location. The associated error in both cases is random and characterized by the convergence of its sum over time to zero, which is convenient for subsequent hydrological modelling. This study addresses the quantification of a random and a systematic error for the mentioned interpolation methods. First, monthly precipitation observations are fit to a two-parameter theoretical distribution at each observation point. Prior to interpolation, the observations are decomposed into two distribution parameters and their corresponding quantiles. The distribution parameters and the quantiles are interpolated to the unknown location and finally recomposed into precipitation amounts. This method can address two types of errors: a random error, defined by simulating the quantiles together with the expectation values of the parameters, and a systematic error, defined by simulating the parameters together with the expectation values of the quantiles. The defined random error converges over time to zero, while the systematic error does not, creating a bias. With a view to subsequent hydrological modelling, the input uncertainty of the interpolated (areal) precipitation is thus described by a random and a systematic error.
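
A minimal sketch of the decompose-interpolate-recompose idea, assuming a two-parameter gamma distribution for monthly precipitation and simple fixed weights standing in for the variogram-based interpolation; all values are invented.

```python
import numpy as np
from scipy import stats

# Two stations' monthly precipitation records (mm); values are invented.
obs = {
    "A": np.array([35., 50., 20., 80., 45., 60., 30., 55.]),
    "B": np.array([60., 90., 40., 120., 75., 100., 55., 85.]),
}
weights = {"A": 0.7, "B": 0.3}   # stand-in for kriging weights at the target

params, quantiles = {}, {}
for s, x in obs.items():
    a, _, scale = stats.gamma.fit(x, floc=0.0)   # two-parameter gamma fit
    params[s] = (a, scale)
    quantiles[s] = stats.gamma.cdf(x, a, scale=scale)  # non-exceedance probs

# Interpolate parameters and quantiles separately, then recompose.
a_i = sum(weights[s] * params[s][0] for s in obs)
scale_i = sum(weights[s] * params[s][1] for s in obs)
q_i = sum(weights[s] * quantiles[s] for s in obs)
precip_at_target = stats.gamma.ppf(q_i, a_i, scale=scale_i)
print(np.round(precip_at_target, 1))   # recomposed precipitation amounts (mm)
```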

Lebrenz, H.; Bárdossy, A.

2012-04-01

20

Least Absolute Relative Error Estimation  

PubMed Central

The multiplicative regression model, or accelerated failure time model, which becomes a linear regression model after logarithmic transformation, is useful in analyzing data with positive responses, such as stock prices or lifetimes, that are particularly common in economic/financial or biomedical studies. Least squares and least absolute deviation are among the most widely used criteria for statistical estimation in the linear regression model. However, in many practical applications, especially in treating, for example, stock price data, the size of the relative error, rather than that of the error itself, is the central concern of the practitioners. This paper offers an alternative to the traditional estimation methods by minimizing the least absolute relative errors for multiplicative regression models. We prove consistency and asymptotic normality and provide an inference approach via random weighting. We also specify the error distribution under which the proposed least absolute relative error estimation is efficient. Supportive evidence is shown in simulation studies. An application is illustrated with an analysis of stock returns on the Hong Kong Stock Exchange.
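
A hedged sketch of least absolute relative error fitting for a multiplicative model; the criterion below sums the error relative to the observation and relative to the prediction, one common form of the LARE criterion, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Multiplicative model Y = exp(X @ b) * eps with synthetic data.
rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
b_true = np.array([1.0, 0.5])
y = np.exp(X @ b_true) * np.exp(rng.normal(0.0, 0.2, size=n))


def lare(b):
    """Sum of absolute errors relative to observation and to prediction."""
    pred = np.exp(X @ b)
    return np.sum(np.abs((y - pred) / y) + np.abs((y - pred) / pred))


b_hat = minimize(lare, x0=np.zeros(2), method="Nelder-Mead").x
print("LARE estimate:", np.round(b_hat, 3), "  truth:", b_true)
```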

CHEN, Kani; GUO, Shaojun; YING, Zhiliang

2013-01-01

21

Systematic errors in long baseline oscillation experiments  

SciTech Connect

This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

Harris, Deborah A.; /Fermilab

2006-02-01

22

Effects of systematic errors in analyses of nuclear scattering data  

NASA Astrophysics Data System (ADS)

The effects of systematic errors in elastic scattering differential cross-section data upon the assessment of quality fits to that data have been studied. First, to estimate the probability of any unknown systematic errors, select, typical, sets of data have been processed using the method of generalized cross validation; a method based upon the premise that any data set should satisfy an optimal smoothness criterion. Specified systematic errors should also be taken into account when high quality fits to data are sought. We have considered such effects due to the finite angular resolution associated with the data in some quite exceptional, heavy ion scattering data sets. Allowing angle shifting of the measured values gave new data sets that are very smooth. Furthermore, when such allowances for systematic errors are so taken into account, reasonable, but not necessarily statistically significant, fits to the original data sets can become so. Therefore, they can be plausible candidates for the ``physical'' descriptions of the scattering processes. In another case, the S function that provided a statistically significant fit to data, upon allowance for angle variation, became overdetermined. A far simpler S function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed energy inverse scattering study to specify effective, local, Schrödinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions.

Bennett, M. T.; Steward, C.; Amos, K.; Allen, L. J.

1996-08-01

23

Error Approximation and Minimum Phone Error Acoustic Model Estimation  

Microsoft Academic Search

Minimum phone error (MPE) acoustic model parameter estimation involves the calculation of edit distances (errors) between correct and incorrect hypotheses. In the context of large-vocabulary continuous-speech recognition, this error calculation becomes prohibitively expensive, and so errors are approximated. This paper introduces a novel error approximation technique. Analysis shows that this approximation yields a higher correlation to the Levenshtein error metric than a…

Matthew Gibson; Thomas Hain

2010-01-01

24

How Important Is Transient Error in Estimating Reliability? Going Beyond Simulation Studies  

Microsoft Academic Search

This article introduces a procedure for estimating reliability in which equivalent halves of a given test are systematically created and then administered a few days apart so that transient error can be included in the error calculus. The procedure not only estimates complete reliability (taking into account both specific-factor error and transient error) but also can estimate partial reliability (taking…

Gilbert Becker

2000-01-01

25

More on Systematic Error in a Boyle's Law Experiment  

NASA Astrophysics Data System (ADS)

A recent article1 in The Physics Teacher describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.2-7

McCall, Richard P.

2012-01-01

26

Reducing Model Systematic Error through Super Modelling  

NASA Astrophysics Data System (ADS)

Numerical models are key tools in the projection of future climate change. However, state-of-the-art general circulation models (GCMs) exhibit significant systematic errors, and large uncertainty exists in future climate projections because of limitations in parameterization schemes and numerical formulations. The general approach to tackling uncertainty is to use an ensemble of several different GCMs. However, ensemble results may smear out major modes of variability, such as ENSO. Here we take a novel approach and build a super model (i.e., an optimal combination of several models): we coupled two atmospheric GCMs (AGCMs) with one ocean GCM (OGCM). The two AGCMs receive identical boundary conditions from the OGCM, while the OGCM is driven by a weighted flux combination from the AGCMs. The atmospheric models differ in their convection scheme and climate-related parameters. As climate models show large sensitivity to convection schemes and parameterization, this approach may be a good basis for constructing a super model. We performed experiments with a small set of manually chosen coefficients and also with a learning algorithm to adjust the coefficients. The coupling strategy is able to synchronize atmospheric variability of the two AGCMs in the tropics, particularly over the western equatorial Pacific, and produces reasonable climate variability. Different coupling weights were shown to alter the simulated mean climate state. Some improvements were found, suggesting that a refined strategy for choosing weighting coefficients could lead to even better performance.

Shen, Mao-Lin; Keenlyside, Noel; Selten, Frank; Duane, Gregory; Wiegerinck, Wim; Hiemstra, Paul

2013-04-01

27

Adjoint Error Estimation for Linear Advection  

SciTech Connect

An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.

Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

2011-03-30

28

Effects of systematic exposure assessment errors in partially ecologic case-control studies  

Microsoft Academic Search

Background: In ecologic studies, group-level rather than individual-level exposure data are used. When using group-level exposure data, established by sufficiently large samples of individual exposure assessments, the bias of the effect estimate due to sampling errors or random assessment errors at the individual level is generally negligible. In contrast, systematic assessment errors may produce more pronounced errors in the group-level exposure…

Jonas Björk; Ulf Strömberg

2002-01-01

29

Sources of Variability and Systematic Error in Mouse Timing Behavior  

Microsoft Academic Search

In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the…

C. R. Gallistel; Adam King; Robert McDonald

2004-01-01

30

Mean Square Error Properties of Density Estimates  

Microsoft Academic Search

The rate at which the mean square error decreases as sample size increases is evaluated for general $L^1$ kernel estimates and for the Fourier integral estimate for a probability density function. The estimates are then compared on the basis of these rates.

Kathryn Bullock Davis

1975-01-01

31

Event Generator Validation and Systematic Error Evaluation for Oscillation Experiments  

NASA Astrophysics Data System (ADS)

In this document I will describe the validation and tuning of the physics models in the GENIE neutrino event generator and briefly discuss how oscillation experiments make use of this information in the evaluation of model-related systematic errors.

Gallagher, H.

2009-09-01

32

A systematic approach to SER estimation and solutions  

Microsoft Academic Search

This paper describes a method for estimating Soft Error Rate (SER) and a systematic approach to identifying SER solutions. Having a good SER estimate is the first step in identifying whether a problem exists and what measures are necessary to solve it. In this paper, a high-performance processor is used as the base framework for discussion since it…

H. T. Nguyen; Y. Yagil

2003-01-01

33

Estimating error rates in bioactivity databases.  

PubMed

Bioactivity databases are routinely used in drug discovery to look up and, using prediction tools, to predict potential targets for small molecules. These databases are typically manually curated from patents and scientific articles. Apart from errors in the source document, the human factor can cause errors during the extraction process. These errors can lead to wrong decisions in the early drug discovery process. In the current work, we have compared bioactivity data from three large databases (ChEMBL, Liceptor, and WOMBAT) which have curated data from the same source documents. As a result, we are able to report error rate estimates for individual activity parameters and individual bioactivity databases. Small molecule structures have the greatest estimated error rate, followed by target, activity value, and activity type. This order is also reflected in supplier-specific error rate estimates. The results are also useful in identifying data points for recuration. We hope the results will lead to a more widespread awareness among scientists of the frequencies and types of errors in bioactivity data. PMID:24160896

Tiikkainen, Pekka; Bellis, Louisa; Light, Yvonne; Franke, Lutz

2013-10-02

34

Correcting systematic errors in high-sensitivity deuteron polarization measurements  

NASA Astrophysics Data System (ADS)

This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10-6 in a search for an electric dipole moment using a storage ring.

Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva E Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

2012-02-01

35

Error Estimates for Adaptive Finite Element Computations.  

National Technical Information Service (NTIS)

A mathematical theory is developed for a class of a posteriori error estimates of finite-element solutions. It is based on a general formulation of the finite element method in terms of certain bilinear forms on suitable Hilbert spaces. The main theorem g...

I. Babuska W. C. Rheinboldt

1977-01-01

36

Estimation of Error Rates in Discriminant Analysis  

Microsoft Academic Search

Several methods of estimating error rates in Discriminant Analysis are evaluated by sampling methods. Multivariate normal samples are generated on a computer which have various true probabilities of misclassification for different combinations of sample sizes and different numbers of parameters. The two methods in most common use are found to be significantly poorer than some new methods that are proposed.

Peter A. Lachenbruch; M. Ray Mickey

1968-01-01

37

Linear Model Estimation Errors With Estimated Design Parameters.  

National Technical Information Service (NTIS)

The problem of error estimation of parameters b in a linear model, Y = Xb + e, is considered when the elements of the design matrix X are functions of an unknown 'design' parameter vector c. An estimated value ĉ is substituted in X to obtain a derived desig...

R. M. Passi

1994-01-01

38

Systematic error in topography-controlled groundwater models  

Microsoft Academic Search

The topography is often used as a boundary condition in groundwater flow models. This boundary condition is only valid if the groundwater table is a subdued replica of the topography. Since the water table never follows the topography exactly, applying this boundary condition induces a systematic error that overestimates the velocities in the top part of the saturated subsurface.

L. Marklund; A. Wörman

2008-01-01

39

Neutrino Spectrum at the Far Detector Systematic Errors  

Microsoft Academic Search

Neutrino oscillation experiments often employ two identical detectors to minimize errors due to an inadequately known neutrino beam. We examine various systematic effects related to the prediction of the neutrino spectrum at the 'far' detector on the basis of the spectrum observed at the 'near' detector. We propose a novel method for deriving the far-detector spectrum. This…

M. Szleper

40

Systematic error in the measurement of very high solar irradiance  

Microsoft Academic Search

The measurement of very high solar irradiance is required in an increasingly wide variety of technical applications. There is really only one commercial supplier of heat flux sensors that can be used for this purpose. These gages are calibrated using a black body as the radiant source. A systematic error has been detected when these heat flux sensors are later…

J. Ballestrín; S. Ulmer; A. Morales; A. Barnes; L. W. Langley; M. Rodríguez

2003-01-01

41

Geodesy by radio interferometry: Effects of atmospheric modeling errors on estimates of baseline length  

Microsoft Academic Search

Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for ~8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere…

J. L. Davis; T. A. Herring; I. I. Shapiro; A. E. E. Rogers; G. Elgered

1985-01-01

42

Spatial reasoning in the treatment of systematic sensor errors  

SciTech Connect

In processing ultrasonic and visual sensor data acquired by mobile robots, systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing, some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing, vertical edge segments are extracted using a Canny-like algorithm and labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.

Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.

1988-01-01

43

Discrete-error transport equation for error estimation in CFD  

Microsoft Academic Search

With computational fluid dynamics (CFD) becoming more accepted and more widely used in industry for design and analysis, there is increasing demand not just for more accurate solutions, but also for error bounds on the solutions. One major source of error is the grid or mesh. A number of methods have been developed to quantify errors in solutions of partial…

Yuehui Qin

2004-01-01

44

Drawdown Estimation and Quantification of Error  

NASA Astrophysics Data System (ADS)

Drawdowns during aquifer tests can be obscured by barometric pressure changes, tides, regional pumping, and recharge events in the water-level record. These natural and man-induced stresses can create water-level fluctuations that must be removed from observed water levels before pumping-induced drawdowns can be estimated accurately. Simple models have been developed to estimate non-pumping water levels during aquifer tests. Together these models produce what is referred to here as the synthetic water level. The synthetic water level is the sum of individual time-series models of barometric pressure, tidal potential, or background water levels. The amplitude and phase of each time series are adjusted so that synthetic water levels match measured water levels during periods unaffected by an aquifer test. Differences between synthetic and measured water levels are minimized with a sum-of-squares objective function in an Excel spreadsheet. The root-mean-square errors during fitting and prediction periods were compared multiple times at four geographically diverse sites. Prediction error equaled fitting error when fitting periods were at least four times as long as prediction periods.
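
An illustrative sketch of the synthetic-water-level idea under stated assumptions: the background water level is modeled as a scaled barometric series plus one tidal harmonic, with coefficients fit by least squares over a pumping-free period; all series are synthetic stand-ins for field records.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 10.0, 1.0 / 24.0)              # time in days, hourly
baro = 0.30 * np.sin(2 * np.pi * t / 3.0)         # barometric proxy (m)
m2 = 1.9323                                       # M2 tide, cycles per day
wl = 12.0 - 0.18 * baro + 0.05 * np.sin(2 * np.pi * m2 * t + 0.4)
wl += rng.normal(0.0, 0.005, t.size)              # measurement noise

# Columns: barometric response, tidal harmonic (in-phase and quadrature),
# and a constant offset; amplitudes and phases fall out of the least squares.
A = np.column_stack([
    baro,
    np.sin(2 * np.pi * m2 * t),
    np.cos(2 * np.pi * m2 * t),
    np.ones_like(t),
])
coef, *_ = np.linalg.lstsq(A, wl, rcond=None)
synthetic = A @ coef
rmse = np.sqrt(np.mean((wl - synthetic) ** 2))
print(f"fitting RMSE = {100 * rmse:.2f} cm")
# During a pumping test, drawdown = measured water level - synthetic level.
```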

Halford, K. J.

2005-12-01

45

ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES  

SciTech Connect

In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on the application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects are most notable in the case of shallow photometry, for which SFH measurements rely on evolved stars.

Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ 85734 (United States)

2012-05-20

46

Systematic Approach for Decommissioning Planning and Estimating  

SciTech Connect

Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended for developing the decommissioning plan, cost estimate and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically, to support decommissioning fund provisioning, with the plan ready for use when the need arises.

Dam, A. S.

2002-02-26

47

Uncertainty estimation for multiposition form error metrology  

Microsoft Academic Search

We analyze a general multiposition comparator measurement procedure that leads to partial removal of artifact error for a class of problems including roundness metrology, measurement of radial error motions of precision spindles, and figure error metrology of high-accuracy optical components. Using spindle radial error motion as an explicit example, we present a detailed analysis of a complete test with N…

W. Tyler Estler; Chris J. Evans; L. Z. Shao

1997-01-01

48

Quantifying Error in the CMORPH Satellite Precipitation Estimates  

NASA Astrophysics Data System (ADS)

As part of the collaboration between the China Meteorological Administration (CMA) National Meteorological Information Centre (NMIC) and the NOAA Climate Prediction Center (CPC), a new system is being developed to construct an hourly precipitation analysis on a 0.25° lat/lon grid over China by merging information derived from gauge observations and CMORPH satellite precipitation estimates. Foundational to the development of the gauge-satellite merging algorithm is the definition of the systematic and random error inherent in the CMORPH satellite precipitation estimates. In this study, we quantify the CMORPH error structures through comparisons against a gauge-based analysis of hourly precipitation derived from station reports from a dense network over China. First, the systematic error (bias) of the CMORPH satellite estimates is examined with co-located hourly gauge precipitation analyses over 0.25° lat/lon grid boxes with at least one reporting station. The CMORPH bias varies regionally, with overestimates over eastern China, and seasonally, with overestimates during warm seasons and underestimates during cold seasons. The bias is also range dependent: in general, the CMORPH tends to overestimate weak rainfall and underestimate strong rainfall. The bias, when expressed as the ratio between the gauge observations and the CMORPH satellite estimates, increases with rainfall intensity but tends to saturate at a certain level for high rainfall. Based on the above results, a prototype algorithm is developed to remove the CMORPH bias by matching the PDF of the original CMORPH estimates against that of the gauge analysis, using data pairs co-located over grid boxes with at least one reporting gauge over a 30-day period ending at the target date. The spatial domain for collecting the co-located data pairs is expanded so that at least 5000 pairs of data are available to ensure statistical stability. The bias-corrected CMORPH is then compared against the gauge data to quantify the remaining random error. The results showed that the random error in the bias-corrected CMORPH is proportional to the standard deviation of the CMORPH fields (a measure of the smoothness of the target precipitation field) and to the size of the spatial domain over which the data pairs used to construct the PDFs are collected. An empirical equation is then defined to compute the random error in the bias-corrected CMORPH from the CMORPH spatial standard deviation and the size of the data collection domain. An algorithm is being developed to combine the gauge analysis with the bias-corrected CMORPH through the optimal interpolation (OI) technique, using the error statistics defined in this study. In this process, the bias-corrected CMORPH is used as the first guess, while the gauge data are utilized as observations to modify the first guess over regions with gauge network coverage. Detailed results will be reported at the conference.
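
A hedged sketch of the PDF-matching step described above, assuming simple quantile mapping between co-located satellite and gauge samples; the data and the bias model are synthetic, not CMORPH products.

```python
import numpy as np

rng = np.random.default_rng(3)
gauge = rng.gamma(2.0, 3.0, size=5000)                  # gauge rates (mm/h)
sat = 1.4 * gauge**0.8 * rng.lognormal(0.0, 0.2, 5000)  # biased satellite

# Matched quantiles from the co-located pairs define the mapping.
q = np.linspace(0.0, 1.0, 101)
sat_q, gauge_q = np.quantile(sat, q), np.quantile(gauge, q)


def bias_correct(x):
    """Map satellite values onto the gauge distribution (PDF matching)."""
    return np.interp(x, sat_q, gauge_q)


print(f"mean bias before: {np.mean(sat - gauge):+.2f} mm/h")
print(f"mean bias after:  {np.mean(bias_correct(sat) - gauge):+.2f} mm/h")
```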

Xu, B.; Yoo, S.; Xie, P.

2010-12-01

49

Causes of systematic errors in defocused-beam analysis  

NASA Astrophysics Data System (ADS)

Defocused-beam analysis (DEA) has been used to determine the chemical composition of small fragments of lunar regolith rocks. It is shown that the summed X-ray intensity of elements under a defocused probe (or in the scanning region) is determined by the volume (areal) ratios of the constituent phases, not by their weight ratios. The usual homogeneous correction of DEA data, as well as the Albee correction, is shown to lead to a systematic overestimation of the content of the elements making up the low-density phases. The size of these systematic errors must depend directly on the density difference between the phases making up the target. For an appropriate correction of DEA analyses, it is necessary to have additional data on the composition and/or modal content of the phases in the object analyzed.

Nazarov, M. A.

50

Lidar aerosol backscatter measurements - Systematic, modeling, and calibration error considerations  

NASA Astrophysics Data System (ADS)

Sources of systematic, modeling, and calibration errors that affect the interpretation and calibration of lidar aerosol backscatter data are discussed. The treatment pertains primarily to ground-based pulsed CO2 lidars that probe the troposphere and are calibrated using hard calibration targets. However, a large part of the analysis is relevant to other types of lidar system such as lidars operating at other wavelengths; CW focused lidars; airborne or earth-orbiting lidars; lidars measuring other regions of the atmosphere; lidars measuring nonaerosol elastic or inelastic backscatter; and lidars employing other calibration techniques.

Kavaya, M. J.; Menzies, R. T.

1985-11-01

51

Minimizing systematic errors in phytoplankton pigment concentration derived from satellite ocean color measurements  

SciTech Connect

Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.

Martin, D.L.

1992-01-01

52

Error estimation and mesh optimisation using error in constitutive relation for electromagnetic field computation  

Microsoft Academic Search

This paper presents a complete methodology for controlling the quality of electromagnetic field computation using the finite element method. An error estimate is built up using the error in the constitutive relation. It is proved that this estimate relates to the exact error in some cases. The problems of quality control and mesh optimisation are then discussed.

J.-F. Remacle; P. Dular; F. Henrotte; A. Genon; W. Legros

1995-01-01

53

A posteriori pointwise error estimates for the boundary element method  

SciTech Connect

This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

Paulino, G.H. [Cornell Univ., Ithaca, NY (United States). School of Civil and Environmental Engineering; Gray, L.J. [Oak Ridge National Lab., TN (United States); Zarikian, V. [Univ. of Central Florida, Orlando, FL (United States). Dept. of Mathematics

1995-01-01

54

Statistical and systematic errors in redshift-space distortion measurements from large surveys  

NASA Astrophysics Data System (ADS)

We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as volume, galaxy density and mass (i.e., bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(r_p, π) on scales larger than 3 h⁻¹ Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e., mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique. This is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc⁻¹). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach for quickly and accurately predicting the statistical errors on RSD expected from future surveys.

Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.

2012-12-01

55

On the error estimates of correlation functions  

NASA Astrophysics Data System (ADS)

Analytical formulas are derived for ensemble and bootstrap-resampling errors in the two- and three-point correlation functions ξ and ζ. The analytical results agree with numerical simulations. Similar derivations are carried out for sparse-sampling errors as well. The fit errors of the parameters (i.e., the amplitude A of ξ and the constant Q of ζ) in the regression models are also discussed. The interdependence among the counts in different bins reduces the fit errors. If the ensemble errors σ_DD(ens) and σ_DDD(ens) are adopted for the counts, the fit errors of A and Q in each sample are about half of the standard errors obtained from the ensemble of samples. The underestimation of the fit errors due to the bin-bin interdependence is compensated by the overestimation of σ_DD and σ_DDD given by the bootstrap-resampling method. The fit errors of the parameters, given by the bootstrap-resampling errors of the counts, give the correct answers.
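
A minimal sketch of the bootstrap-resampling machinery discussed above, applied to a simple binned statistic rather than the full ξ estimator; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, size=500)      # 1-D stand-in for a catalogue


def binned_statistic(points, bins=10):
    """Normalized counts per bin, a stand-in for a binned estimator."""
    counts, _ = np.histogram(points, bins=bins, range=(0.0, 1.0))
    return counts / points.size


# Resample the catalogue with replacement and measure the per-bin scatter.
resampled = np.array([
    binned_statistic(rng.choice(x, size=x.size, replace=True))
    for _ in range(1000)
])
bootstrap_error = resampled.std(axis=0)   # bootstrap error in each bin
print(np.round(bootstrap_error, 4))
```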

Mo, H. J.; Jing, Y. P.; Boerner, G.

1992-06-01

56

Application of an error statistics estimation method to the PSAS forecast error covariance model  

NASA Astrophysics Data System (ADS)

In atmospheric data assimilation systems, the forecast error covariance model is an important component. However, the parameters required by a forecast error covariance model are difficult to obtain due to the absence of the truth. This study applies an error statistics estimation method to the Physical-space Statistical Analysis System (PSAS) height-wind forecast error covariance model. This method consists of two components: the first computes the error statistics using the National Meteorological Center (NMC) method, a lagged-forecast difference approach, within the framework of the PSAS height-wind forecast error covariance model; the second obtains a calibration formula to rescale the error standard deviations provided by the NMC method. The calibration is against the error statistics estimated by a maximum-likelihood estimation (MLE) with rawinsonde height observed-minus-forecast residuals. A complete set of formulas for estimating the error statistics and for the calibration is applied to a one-month-long dataset generated by a general circulation model of the Global Modeling and Assimilation Office (GMAO), NASA. There is a clear constant relationship between the error statistics estimates of the NMC method and the MLE. The final product provides a full set of 6-hour error statistics required by the PSAS height-wind forecast error covariance model over the globe. The features of these error statistics are examined and discussed.
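
A hedged sketch of the two components described above: an NMC-style error spread from lagged-forecast differences, followed by a rescaling constant standing in for the MLE-based calibration formula; all series and the constant are illustrative, not PSAS quantities.

```python
import numpy as np

rng = np.random.default_rng(5)
truth = np.cumsum(rng.normal(0.0, 1.0, 120))      # synthetic height series
f24 = truth + rng.normal(0.0, 0.8, truth.size)    # 24-h forecasts
f48 = truth + rng.normal(0.0, 1.1, truth.size)    # 48-h forecasts

# NMC method: the spread of lagged-forecast differences valid at the same
# time serves as a proxy for the forecast error standard deviation.
nmc_sigma = np.std(f48 - f24)
calibration = 0.7          # illustrative constant; in PSAS this comes from
                           # an MLE fit to observed-minus-forecast residuals
print(f"NMC sigma = {nmc_sigma:.2f}; "
      f"calibrated sigma = {calibration * nmc_sigma:.2f}")
```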

Yang, R. H.; Guo, J.; Riishojgaard, P.

2006-01-01

57

Systematic errors in free energy perturbation calculations due to a finite sample of configuration space: Sample-size hysteresis  

SciTech Connect

Although the free energy perturbation procedure is exact when an infinite sample of configuration space is used, for finite sample size there is a systematic error resulting in hysteresis for forward and backward simulations. The qualitative behavior of this systematic error is first explored for a Gaussian distribution, then a first-order estimate of the error for any distribution is derived. To first order the error depends only on the fluctuations in the sample of potential energies, ΔE, and the sample size, n, but not on the magnitude of ΔE. The first-order estimate of the systematic sample-size error is used to compare the efficiencies of various computing strategies. It is found that slow-growth, free energy perturbation calculations will always have lower errors from this source than window-growth, free energy perturbation calculations for the same computing effort. The systematic sample-size errors can be entirely eliminated by going to thermodynamic integration rather than free energy perturbation calculations. When ΔE is a very smooth function of the coupling parameter, λ, thermodynamic integration with a relatively small number of windows is the recommended procedure because the time required for equilibration is reduced with a small number of windows. These results give a method of estimating this sample-size hysteresis during the course of a slow-growth, free energy perturbation run. This is important because in these calculations time-lag and sample-size errors can cancel, so that separate methods of estimating and correcting for each are needed. When dynamically modified window procedures are used, it is recommended that the estimated sample-size error be kept constant, not that the magnitude of ΔE be kept constant. Tests on two systems showed a rather small sample-size hysteresis in slow-growth calculations except in the first stages of creating a particle, where both fluctuations and sample-size hysteresis are large.
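
A small numerical illustration of the sample-size bias described above, assuming a Gaussian distribution of ΔE (in units of kT), for which the exact free energy difference is ΔF = μ − σ²/2; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
mu, sigma = 2.0, 2.0            # Delta E ~ N(mu, sigma^2), in units of kT
exact = mu - sigma**2 / 2.0     # exact dF for a Gaussian distribution

for n in (10, 100, 10000):
    samples = rng.normal(mu, sigma, size=(500, n))      # 500 repeated runs
    # Forward free energy perturbation estimate: dF = -ln < exp(-dE) >
    fwd = -np.log(np.mean(np.exp(-samples), axis=1))
    print(f"n = {n:6d}: mean forward dF = {fwd.mean():+.3f} "
          f"(exact {exact:+.3f})")
# The forward estimate is biased high for finite n because rare low-energy
# configurations dominate the exponential average; the backward direction is
# biased the other way, producing the sample-size hysteresis discussed above.
```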

Wood, R.H.; Muehlbauer, W.C.F. (Univ. of Delaware, Newark (United States)); Thompson, P.T. (Swarthmore Coll., PA (United States))

1991-08-22

58

Bias of Nearest Neighbor Error Estimates  

Microsoft Academic Search

The bias of the finite-sample nearest neighbor (NN) error from its asymptotic value is examined. Expressions are obtained which relate the bias of the NN and 2-NN errors to sample size, dimensionality, metric, and distributions. These expressions isolate the effect of sample size from that of the distributions, giving an explicit relation showing how the bias changes as the sample

Keinosuke Fukunaga; Donald M. Hummels

1987-01-01

59

Systematic Review of the Balance Error Scoring System  

PubMed Central

Context: The Balance Error Scoring System (BESS) is commonly used by researchers and clinicians to evaluate balance. A growing number of studies are using the BESS as an outcome measure beyond the scope of its original purpose. Objective: To provide an objective systematic review of the reliability and validity of the BESS. Data Sources: PubMed and CINAHL were searched using "Balance Error Scoring System" from January 1999 through December 2010. Study Selection: Selection was based on establishment of the reliability and validity of the BESS. Research articles were selected if they established reliability or validity (criterion-related or construct) of the BESS, were written in English, and used the BESS as an outcome measure. Abstracts were not considered. Results: Reliability of the total BESS score and individual stances ranged from poor to moderate to good, depending on the type of reliability assessed. The BESS has criterion-related validity with force plate measures; more difficult stances have higher agreement than do easier ones. The BESS is valid to detect balance deficits where large differences exist (concussion or fatigue). It may not be valid when differences are more subtle. Conclusions: Overall, the BESS has moderate to good reliability to assess static balance. Low levels of reliability have been reported by some authors. The BESS correlates with other measures of balance using testing devices. The BESS can detect balance deficits in participants with concussion and fatigue. BESS scores increase with age and with ankle instability and external ankle bracing. BESS scores improve after training.

Bell, David R.; Guskiewicz, Kevin M.; Clark, Micheal A.; Padua, Darin A.

2011-01-01

60

Frequency, type and clinical importance of medication history errors at admission to hospital: a systematic review  

PubMed Central

Background Over a quarter of hospital prescribing errors are attributable to incomplete medication histories being obtained at the time of admission. We undertook a systematic review of studies describing the frequency, type and clinical importance of medication history errors at hospital admission. Methods We searched MEDLINE, EMBASE and CINAHL for articles published from 1966 through April 2005 and bibliographies of papers subsequently retrieved from the search. We reviewed all published studies with quantitative results that compared prescription medication histories obtained by physicians at the time of hospital admission with comprehensive medication histories. Three reviewers independently abstracted data on methodologic features and results. Results We identified 22 studies involving a total of 3755 patients (range 33–1053, median 104). Errors in prescription medication histories occurred in up to 67% of cases: 10%–61% had at least 1 omission error (deletion of a drug used before admission), and 13%–22% had at least 1 commission error (addition of a drug not used before admission); 60%–67% had at least 1 omission or commission error. Only 5 studies (n = 545 patients) explicitly distinguished between unintentional discrepancies and intentional therapeutic changes through discussions with ordering physicians. These studies found that 27%–54% of patients had at least 1 medication history error and that 19%–75% of the discrepancies were unintentional. In 6 of the studies (n = 588 patients), the investigators estimated that 11%–59% of the medication history errors were clinically important. Interpretation Medication history errors at the time of hospital admission are common and potentially clinically important. Improved physician training, accessible community pharmacy databases and closer teamwork between patients, physicians and pharmacists could reduce the frequency of these errors.

Tam, Vincent C.; Knowles, Sandra R.; Cornish, Patricia L.; Fine, Nowell; Marchesano, Romina; Etchells, Edward E.

2005-01-01

61

Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities  

SciTech Connect

Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each of HRA's methods has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology or improper application of techniques can produce invalid HEP estimates, and such erroneous estimation of potential human failure can have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

Auflick, Jack L.

1999-04-21

62

Ain't Behavin': Forecast Errors and Measurement Errors in Early GNP Estimates  

Microsoft Academic Search

Revisions of the early GNP estimates may contain elements of measurement errors as well as forecast errors. These types of error behave differently but need to satisfy a common set of criteria for well-behavedness. This article tests these criteria for U.S. GNP revisions. The tests are similar to tests of rationality and are based on the generalized method of moments

Knut Anton Mork

1987-01-01

63

Estimation of model error variances during data assimilation  

NASA Astrophysics Data System (ADS)

Data assimilation is all about understanding the error characteristics of the data and models that are used in the assimilation process. Reliable error estimates are needed to implement observational quality control, bias correction of observations and model fields, and intelligent data selection. Meaningful covariance specifications are obviously required for the analysis as well, since the impact of any single observation strongly depends on the assumed structure of the background errors. Operational atmospheric data assimilation systems still rely primarily on climatological background error covariances. To obtain error estimates that reflect both the character of the flow and the current state of the observing system, it is necessary to solve three problems: (1) how to account for the short-term evolution of errors in the initial conditions; (2) how to estimate the additional component of error caused by model defects; and (3) how to compute the error reduction in the analysis due to observational information. Various approaches are now available that provide approximate solutions to the first and third of these problems. However, the useful accuracy of these solutions very much depends on the size and character of the model errors and the ability to account for them. Model errors represent the real-world forcing of the error evolution in a data assimilation system. Clearly, meaningful model error estimates and/or statistics must be based on information external to the model itself. The most obvious information source is observational, and since the volume of available geophysical data is growing rapidly, there is some hope that a purely statistical approach to model error estimation can be viable. This requires that the observation errors themselves are well understood and quantifiable. We will discuss some of these challenges and present a new sequential scheme for estimating model error variances from observations in the context of an atmospheric data assimilation system.

Dee, D.

2003-04-01

64

Deconvolution Estimation in Measurement Error Models: The R Package decon.  

PubMed

Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package, decon for R, which contains a collection of functions that use deconvolution kernel methods to deal with measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139
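
The deconvolution kernel idea can be sketched outside R as well; the following Python fragment implements a basic homoscedastic-Gaussian deconvoluting density estimator by direct quadrature (the package itself uses an FFT-based implementation; the kernel choice and bandwidth here are illustrative assumptions):

```python
import numpy as np

def decon_kde(y, sigma_err, x_grid, h):
    """Deconvoluting kernel density estimate for Y = X + eps, with
    homoscedastic Gaussian eps ~ N(0, sigma_err^2).  The kernel has
    Fourier transform (1 - t^2)^3 supported on [-1, 1], a common choice
    that keeps the deconvolution integral finite."""
    t = np.linspace(-1.0, 1.0, 801)
    phi_K = (1.0 - t**2) ** 3                      # kernel Fourier transform
    s = t / h
    phi_eps = np.exp(-0.5 * (sigma_err * s) ** 2)  # error characteristic function
    ecf = np.exp(1j * s[:, None] * y[None, :]).mean(axis=1)  # empirical CF of Y
    weights = phi_K * ecf / phi_eps                # the deconvolution step
    basis = np.exp(-1j * s[:, None] * x_grid[None, :])
    f = np.trapz(weights[:, None] * basis, t, axis=0).real / (2 * np.pi * h)
    return np.clip(f, 0.0, None)

# Usage: recover an N(0,1) density from data contaminated with N(0, 0.5^2) noise.
rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, 500) + rng.normal(0.0, 0.5, 500)
fhat = decon_kde(y, 0.5, np.linspace(-4, 4, 161), h=0.45)
```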

Wang, Xiao-Feng; Wang, Bin

2011-03-01

65

Error Estimation for Reduced-Order Models of Dynamical Systems  

Microsoft Academic Search

The use of reduced-order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors by a combination

Chris Homescu; Linda R. Petzold; Radu Serban

2007-01-01

66

Impact of error estimation on feature selection  

Microsoft Academic Search

Abstract Given a large set of potential features, it is usually necessary to find a small subset with which to classify. The task of finding an optimal feature set is inherently combinatoric and therefore suboptimal algorithms are typically used to find feature sets. If feature selection is based directly on classification error, then a feature-selection algorithm must base its decision

Chao Sima; Sanju Attoor; Ulisses Braga-neto; James Lowey; Edward Suh; Edward R. Dougherty

2005-01-01

67

On Error Correction Models: Specification, Interpretation, Estimation  

Microsoft Academic Search

Error Correction Models (ECMs) have proved a popular organizing principle in applied econometrics, despite the lack of consensus as to exactly what constitutes their defining characteristic, and the rather limited role that has been given to economic theory by their proponents. This paper uses a historical survey of the evolution of ECMs to explain the alternative specifications and interpretations and

George Alogoskoufis; Ron Smith

1991-01-01

68

Projecting the standard error of the Kaplan-Meier estimator.  

PubMed

Clinical studies in which a major objective is to produce Kaplan-Meier estimates of survival probabilities should be designed to produce those estimates with a desired prespecified precision as measured by their standard errors. By considering the Peto and Greenwood formulae for the estimated standard error of the Kaplan-Meier estimate and replacing their constituents with expected values based on the study's design parameters, formulae for projected standard errors can be produced. These formulae are shown, through simulations, to be quite accurate. PMID:11439423
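
For reference, a minimal sketch of the Greenwood accumulation that such projections build on (illustrative, not the paper's formulas; as the closing comment notes, the projection replaces the observed at-risk and event counts with their design-based expectations):

```python
import numpy as np

def km_greenwood(times, events):
    """Kaplan-Meier survival estimate with Greenwood standard errors.
    times: follow-up times; events: 1 = event observed, 0 = censored."""
    times, events = np.asarray(times), np.asarray(events)
    S, gw, rows = 1.0, 0.0, []
    for t in np.unique(times[events == 1]):
        n_risk = int(np.sum(times >= t))
        d = int(np.sum((times == t) & (events == 1)))
        S *= 1.0 - d / n_risk
        if n_risk > d:
            gw += d / (n_risk * (n_risk - d))    # Greenwood accumulator
        rows.append((t, S, S * np.sqrt(gw)))     # (time, S(t), SE[S(t)])
    return rows

# For design purposes, the projection replaces n_risk and d with their
# expected values under the planned accrual, follow-up, and survival
# assumptions, yielding the anticipated SE before any data are collected.
```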

Cantor, A B

2001-07-30

69

Systematic Biases in Human Heading Estimation  

PubMed Central

Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts which predict that estimates should be biased toward the most common straight forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.

Cuturi, Luigi F.; MacNeilage, Paul R.

2013-01-01

70

A Run-time Estimate Method of Measurement Error Variance for Kalman Estimator  

Microsoft Academic Search

In this paper, a method for estimating a control system's measurement error variance for a Kalman estimator at run time is proposed. In general, a control system's measurement error variance is measured with instruments or tuned after implementation of the Kalman estimator, but precisely measuring the measurement error variance is difficult. Moreover, even though we could tune the measurement
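
One common innovation-based scheme for this kind of run-time estimation, sketched in Python under a scalar-measurement assumption (a generic approach, not necessarily the authors' method):

```python
import numpy as np

def estimate_R(innovations, predicted_var, window=50, floor=1e-12):
    """Windowed innovation-based estimate of the measurement-noise
    variance R for a scalar Kalman filter.  Since E[v^2] = H P- H' + R,
    subtracting the filter-predicted part of the innovation variance
    from its empirical value leaves an estimate of R."""
    v = np.asarray(innovations)[-window:]
    hph = np.asarray(predicted_var)[-window:]
    return max(float(np.mean(v**2) - np.mean(hph)), floor)
```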

Jae-gu Kang; Bum-jae You

2007-01-01

71

Systematic and random error in an extended-range forecasting experiment  

SciTech Connect

The Canadian Climate Centre's low-resolution general circulation model was used to perform dynamical extended-range forecasts consisting of a series of six monthly forecasts for each of the eight Januarys from 1979 to 1986. Results are presented in terms of the 500-mb height field for the mean January systematic and random errors and for the efficacy of the January mean forecast. The evolution of error as a function of time and spatial scale was investigated. The equations governing the growth of systematic and random error were derived and evaluated. The mean January systematic error in the 500-mb height field was modest and is generally in non-zonal structures. Systematic-error variance was about an order of magnitude smaller than random-error variance and was concentrated in the larger scales of the flow. The January mean anomaly correlation indicated marginal forecast skill and some connection between the spread of the forecasts and the skill of the average forecast. The budget equation for random error indicated that the interaction between it and the systematic error was small enough that the removal of systematic error by subtraction appeared to be plausible. The nonlinear barotropic source-sink term dominated random error growth early in the forecast, while nonlinear barotropic generation dominated the random error growth otherwise. 18 refs. 15 figs.
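
The systematic/random split used in such analyses can be stated compactly; a minimal sketch, assuming forecasts and verifying analyses on a common grid (shapes and names are illustrative):

```python
import numpy as np

def decompose_error(forecasts, analyses):
    """Split forecast error into a systematic (time-mean) map and a
    random (departure) part; arrays have shape (n_cases, n_points)."""
    err = forecasts - analyses
    systematic = err.mean(axis=0)          # mean error map
    random_part = err - systematic         # departures from the mean error
    return systematic, float((systematic**2).mean()), float(random_part.var())
```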

Boer, G.J. (Canadian Climate Centre, Ontario (Canada))

1993-01-01

72

Results and Error Estimates from GRACE Forward Modeling over Antarctica  

NASA Astrophysics Data System (ADS)

Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.

Bonin, Jennifer; Chambers, Don

2013-04-01

73

Improving SMOS retrieved salinity: characterization of systematic errors in reconstructed and modelled brightness temperature images  

NASA Astrophysics Data System (ADS)

The Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument onboard the Soil Moisture and Ocean Salinity (SMOS) mission was launched on November 2nd, 2009 with the aim of providing, over the oceans, synoptic sea surface salinity (SSS) measurements with spatial and temporal coverage adequate for large-scale oceanographic studies. For each single satellite overpass, SSS is retrieved after collecting, at fixed ground locations, a series of brightness temperatures from successive scenes corresponding to various geometrical and polarization conditions. SSS is inverted through minimization of the difference between reconstructed and modeled brightness temperatures. To meet the challenging mission requirements, retrieved SSS needs to achieve an accuracy of 0.1 psu after averaging over a 10- or 30-day period and 2°x2° or 1°x1° spatial boxes, respectively. It is expected that, at such scales, the high radiometric noise can be reduced to a level such that remaining errors and inconsistencies in the retrieved salinity fields can essentially be related to (1) systematic brightness temperature errors in the antenna reference frame, (2) systematic errors in the Geophysical Model Function (GMF), used to model the observations and retrieve salinity, for specific environmental conditions and/or particular auxiliary parameter values, and (3) errors in the auxiliary datasets used as input to the GMF. The present communication primarily aims at addressing point 1 above, and possibly point 2, for the whole polarimetric information, i.e., from both co-polar and cross-polar measurements. Several factors may potentially produce systematic errors in the antenna reference frame: the unavoidable fact that the antennas are not perfectly identical, the imperfect characterization of the instrument response (e.g., antenna patterns), accounting for receiver temperatures in the reconstruction, calibration using flat sky scenes, and the implementation of ripple reduction algorithms at sharp boundaries such as the Sky-Earth boundary. Data acquired over the ocean rather than over land are preferred for characterizing such errors because the variability of the emissivity sensed over the oceanic domain is an order of magnitude smaller than over land. Nevertheless, characterizing such errors over the ocean is not a trivial task: even if the natural variability is small, it is larger than the errors to be characterized, and the characterization strategy must account for it; otherwise the estimated patterns will vary significantly with the selected dataset. The communication will present results from a systematic error characterization methodology allowing stable error pattern estimates. Particular focus will be given to the critical data selection strategy and the analysis of the X- and Y-pol patterns obtained over a wide range of SMOS subdatasets. The impact of some image reconstruction options will be evaluated. It will be shown how the methodology is also an interesting tool for diagnosing specific error sources. The criticality of an accurate description of Faraday rotation effects will be evidenced, and the latest results on the possibility of inferring such information from the full Stokes vector will be presented.

Gourrion, J.; Guimbard, S.; Sabia, R.; Portabella, M.; Gonzalez, V.; Turiel, A.; Ballabrera, J.; Gabarro, C.; Perez, F.; Martinez, J.

2012-04-01

74

Error Propagation in Software Measurement and Estimation  

Microsoft Academic Search

Generically speaking, software measurement and estimation require the application of an algorithm to one or more input variables (measures), in order to provide one or more output variables (estimates, or metrics) for effort, cost, time, quality or other aspects of the software being developed. Regardless of the estimation model (algorithm) being used, practitioners must face the uncertainty

Luca Santillo

75

Formal Estimation of Errors in Computed Absolute Interaction Energies of Protein-ligand Complexes  

PubMed Central

A largely unsolved problem in computational biochemistry is the accurate prediction of binding affinities of small ligands to protein receptors. We present a detailed analysis of the systematic and random errors present in computational methods through the use of error probability density functions, specifically for computed interaction energies between chemical fragments comprising a protein-ligand complex. An HIV-II protease crystal structure with a bound ligand (indinavir) was chosen as a model protein-ligand complex. The complex was decomposed into twenty-one (21) interacting fragment pairs, which were studied using a number of computational methods. The chemically accurate complete basis set coupled cluster theory (CCSD(T)/CBS) interaction energies were used as reference values to generate our error estimates. In our analysis we observed significant systematic and random errors in most methods, which was surprising especially for parameterized classical and semiempirical quantum mechanical calculations. After propagating these fragment-based error estimates over the entire protein-ligand complex, our total error estimates for many methods are large compared to the experimentally determined free energy of binding. Thus, we conclude that statistical error analysis is a necessary addition to any scoring function attempting to produce reliable binding affinity predictions.
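
Propagating fragment-based error estimates to the whole complex can be sketched as follows, under an added independence assumption for the random components (the per-pair values are invented for illustration, not the paper's data):

```python
import numpy as np

def propagate_fragment_errors(bias, spread):
    """Total error estimate for the complex from per-fragment-pair error
    statistics: systematic components (mean errors) add linearly, random
    components (spreads) add in quadrature under independence."""
    bias, spread = np.asarray(bias), np.asarray(spread)
    return float(bias.sum()), float(np.sqrt((spread**2).sum()))

# e.g. 21 fragment pairs, each assigned a mean error of 0.3 and a spread
# of 0.8 kcal/mol by some method's error probability density function:
total_bias, total_spread = propagate_fragment_errors(np.full(21, 0.3),
                                                     np.full(21, 0.8))
```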

Faver, John C.; Benson, Mark L.; He, Xiao; Roberts, Benjamin P.; Wang, Bing; Marshall, Michael S.; Kennedy, Matthew R.; Sherrill, C. David; Merz, Kenneth M.

2011-01-01

76

Iraq War mortality estimates: A systematic review  

PubMed Central

Background In March 2003, the United States invaded Iraq. The subsequent number, rates, and causes of mortality in Iraq resulting from the war remain unclear, despite intense international attention. Understanding mortality estimates from modern warfare, where the majority of casualties are civilian, is of critical importance for public health and protection afforded under international humanitarian law. We aimed to review the studies, reports and counts on Iraqi deaths since the start of the war and assessed their methodological quality and results. Methods We performed a systematic search of 15 electronic databases from inception to January 2008. In addition, we conducted a non-structured search of 3 other databases, reviewed study reference lists and contacted subject matter experts. We included studies that provided estimates of Iraqi deaths based on primary research over a reported period of time since the invasion. We excluded studies that summarized mortality estimates and combined non-fatal injuries and also studies of specific sub-populations, e.g. under-5 mortality. We calculated crude and cause-specific mortality rates attributable to violence and average deaths per day for each study, where not already provided. Results Thirteen studies met the eligibility criteria. The studies used a wide range of methodologies, varying from sentinel-data collection to population-based surveys. Studies assessed as the highest quality, those using population-based methods, yielded the highest estimates. Average deaths per day ranged from 48 to 759. The cause-specific mortality rates attributable to violence ranged from 0.64 to 10.25 per 1,000 per year. Conclusion Our review indicates that, despite varying estimates, the mortality burden of the war and its sequelae on Iraq is large. The use of established epidemiological methods is rare. This review illustrates the pressing need to promote sound epidemiologic approaches to determining mortality estimates and to establish guidelines for policy-makers, the media and the public on how to interpret these estimates.

Tapp, Christine; Burkle, Frederick M; Wilson, Kumanan; Takaro, Tim; Guyatt, Gordon H; Amad, Hani; Mills, Edward J

2008-01-01

77

COMPARING DIFFERENT GPS DATA PROCESSING TECHNIQUES FOR MODELLING RESIDUAL SYSTEMATIC ERRORS  

Microsoft Academic Search

In the case of traditional GPS data processing algorithms, systematic errors in GPS measurements cannot be eliminated completely, nor accounted for satisfactorily. These systematic errors can have a significant effect on both the ambiguity resolution process and the GPS positioning results. Hence this is a potentially critical problem for high precision GPS positioning applications. It is therefore necessary to develop

Chalermchon Satirapod; Jinling Wang; Chris Rizos

78

Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach  

NASA Astrophysics Data System (ADS)

An approach to improving orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping, and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal, but does not consider interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
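
A stripped-down version of the network idea, with one scalar error parameter per image and the minimum-norm least-squares solution via the pseudoinverse (an illustrative sketch; the paper estimates two parameters per interferogram and adds iterative outlier control):

```python
import numpy as np

def network_orbit_errors(pairs, obs, n_images):
    """Minimum-norm least-squares estimate of one orbit-error parameter
    per image from interferogram-wise observations obs[k] of the
    difference p_i - p_j for pairs[k] = (i, j)."""
    A = np.zeros((len(pairs), n_images))
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = 1.0, -1.0
    # pinv gives the minimum-norm solution of the rank-deficient system,
    # i.e. quasi-absolute errors referred to particular images.
    return np.linalg.pinv(A) @ np.asarray(obs)
```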

Bähr, Hermann; Hanssen, Ramon F.

2012-12-01

79

Application of Bayesian Systematic Error Correction to Kepler Photometry  

NASA Astrophysics Data System (ADS)

In a companion talk (Jenkins et al.), we present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data, in which a subset of intrinsically quiet and highly correlated stars is used to establish the range of "reasonable" robust fit parameters, and hence mitigate the loss of astrophysical signal and noise injection on transit time scales (<3d), which afflict Least Squares (LS) fitting. In this poster, we illustrate the concept in detail by applying MAP to publicly available Kepler data, and give an overview of its application to all Kepler data collected through June 2010. We define the correlation function between normalized, mean-removed light curves and select a subset of highly correlated stars. This ensemble of light curves can then be combined with ancillary engineering data and image motion polynomials to form a design matrix from which the principal components are extracted by reduced-rank SVD decomposition. MAP is then represented in the resulting orthonormal basis, and applied to the set of all light curves. We show that the correlation matrix after treatment is diagonal, and present diagnostics such as correlation coefficient histograms, singular value spectra, and principal component plots. We then show the benefits of MAP applied to variable stars with RR Lyrae, harmonic, chaotic, and eclipsing binary waveforms, and examine the impact of MAP on transit waveforms and detectability. After high-pass filtering the MAP output, we show that MAP does not increase noise on transit time scales, compared to LS. We conclude with a discussion of current work selecting input vectors for the design matrix, representing and numerically solving MAP for non-Gaussian probability distribution functions (PDFs), and suppressing high-frequency noise injection with Lagrange multipliers. Funding for this mission is provided by NASA, Science Mission Directorate.
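
The MAP step can be caricatured as a Gaussian-prior (ridge-like) regression that shrinks the fit toward coefficients derived from the quiet-star ensemble; a minimal sketch, with all shapes, priors, and names assumed for illustration rather than taken from the Kepler pipeline:

```python
import numpy as np

def map_cotrend(flux, basis, prior_mean, prior_var, noise_var):
    """Gaussian MAP fit of basis-vector coefficients: ordinary least
    squares shrunk toward a prior (here taken from the quiet-star
    ensemble), then removal of the fitted trend from the light curve."""
    A = np.asarray(basis)                        # (n_cadences, n_vectors)
    P = np.diag(1.0 / np.asarray(prior_var))     # prior precision
    lhs = A.T @ A / noise_var + P
    rhs = A.T @ np.asarray(flux) / noise_var + P @ np.asarray(prior_mean)
    coeffs = np.linalg.solve(lhs, rhs)
    return flux - A @ coeffs, coeffs             # corrected curve, MAP coefficients
```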

Van Cleve, Jeffrey E.; Jenkins, J. M.; Twicken, J. D.; Smith, J. C.; Fanelli, M. N.

2011-01-01

80

A posteriori error estimate for the mixed finite element method  

Microsoft Academic Search

A computable error bound for mixed finite element methods is established in the model case of the Poisson problem to control the error in the H(div,Ω)×L2(Ω) norm. The reliable and efficient a posteriori error estimate applies, e.g., to Raviart-Thomas, Brezzi-Douglas-Marini, and Brezzi-Douglas-Fortin-Marini elements. 1. Mixed method for the Poisson problem. Mixed finite element methods are well-established in the numerical treatment of

Carsten Carstensen

1997-01-01

81

Nonparametric Item Response Curve Estimation with Correction for Measurement Error  

ERIC Educational Resources Information Center

Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

Guo, Hongwen; Sinharay, Sandip

2011-01-01

82

Bootstrap Estimates of Standard Errors in Generalizability Theory  

Microsoft Academic Search

Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures (which extend the work of Wiley), as well as a proposed set of rules for

Ye Tong; Robert L. Brennan

2007-01-01

83

Instrumental variable estimation in a probit measurement error model  

Microsoft Academic Search

Probit regression is studied when normally distributed covariates are subject to normally distributed measurement errors. Under the assumption that surrogate instrumental variables are available, the parameters in the probit model are shown to be identified. The maximum likelihood estimator and an easily computed two-stage estimator are derived and studied. The two-stage estimator is shown to be asymptotically efficient. Simulation results

J. S. Buzas; L. A. Stefanski

1996-01-01

84

Using numerical weather prediction errors to estimate aerosol heating  

Microsoft Academic Search

The total real response of the atmosphere to aerosols can be predicted by examining numerical models. In this study, the moderate resolution imaging spectroradiometer (MODIS) aerosol optical thickness (AOT) is correlated to model temperature errors to estimate this response for Israel and for Italy. Significant correlations between aerosols and atmospheric numerical model temperature errors are presented. Two main results were

I. Carmona; Y. J. Kaufman; P. Alpert

2008-01-01

85

Energy norm a posteriori error estimation for discontinuous Galerkin methods  

Microsoft Academic Search

In this paper we present a residual-based a posteriori error estimate of a natural mesh dependent energy norm of the error in a family of discontinuous Galerkin approximations of elliptic problems. The theory is developed for an elliptic model problem in two and three spatial dimensions and general nonconvex polygonal domains are allowed. We also present some illustrating numerical examples.

Roland Becker; Peter Hansbo; Mats G. Larson

2003-01-01

86

Estimation of Radar-Rainfall Error Spatial Correlation  

NASA Astrophysics Data System (ADS)

The authors present a study of a theoretical framework to estimate the radar-rainfall error spatial correlation using high-density rain gauge networks and high-quality data. The error is defined as the difference between the radar estimate and the true areal rainfall. Based on the framework of second-order rainfall field characterization, the authors propose a method for error spatial correlation estimation which is an extension of the error variance separation that uses radar data at the rain gauge locations and accounts for gauge spatial representativeness error. To assess the performance of the method, the authors carry out a Monte Carlo simulation experiment, where the method is applied to simulated radar-rainfall fields with known error spatial correlation structure. The results show that the method performs very well. The authors also demonstrate the necessity of considering the gauge representativeness error while estimating the error correlation structure. This requires information from a very dense network, limiting the applicability of the approach. To illustrate the practical value of the method, the authors applied it to the NEXRAD Hourly Digital Product (HDP) and the Iowa Daily Erosion Project's rainfall products. The corresponding dense rain gauge networks are the Oklahoma Micronet and the IIHR network around Iowa City, Iowa.

Mandapaka, P. V.; Krajewski, W. F.; Ciach, G.; Villarini, G.

2006-12-01

87

Estimation of Dynamic Models with Error Components  

Microsoft Academic Search

Observations on N cross-section units at T time points are used to estimate a simple statistical model involving an autoregressive process with an additive term specific to the unit. Different assumptions about the initial conditions are (a) initial state fixed, (b) initial state random, (c) the unobserved individual effect independent of the unobserved dynamic process with the initial value fixed,

T. W. Anderson; Cheng Hsiao

1981-01-01

88

Systematic errors in VLF direction-finding of whistler ducts. I  

Microsoft Academic Search

Systematic errors in the determination of the azimuthal bearings of whistler ducts by current VLF direction finders are evaluated. A simple propagation model is presented for determining the relative field strengths of multiply reflected waves incident at a VLF receiver from an ionospheric source. It is used in the calculation of the multipath error and polarization error (where applicable) in

H. J. Strangeways

1980-01-01

89

Error Estimates for Generalized Barycentric Interpolation  

PubMed Central

We prove the optimal convergence estimate for first order interpolants used in finite element methods based on three major approaches for generalizing barycentric interpolation functions to convex planar polygonal domains. The Wachspress approach explicitly constructs rational functions, the Sibson approach uses Voronoi diagrams on the vertices of the polygon to define the functions, and the Harmonic approach defines the functions as the solution of a PDE. We show that given certain conditions on the geometry of the polygon, each of these constructions can obtain the optimal convergence estimate. In particular, we show that the well-known maximum interior angle condition required for interpolants over triangles is still required for Wachspress functions but not for Sibson functions.

Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

2011-01-01

90

Error Estimates for Generalized Barycentric Interpolation.  

PubMed

We prove the optimal convergence estimate for first order interpolants used in finite element methods based on three major approaches for generalizing barycentric interpolation functions to convex planar polygonal domains. The Wachspress approach explicitly constructs rational functions, the Sibson approach uses Voronoi diagrams on the vertices of the polygon to define the functions, and the Harmonic approach defines the functions as the solution of a PDE. We show that given certain conditions on the geometry of the polygon, each of these constructions can obtain the optimal convergence estimate. In particular, we show that the well-known maximum interior angle condition required for interpolants over triangles is still required for Wachspress functions but not for Sibson functions. PMID:23338826

Gillette, Andrew; Rand, Alexander; Bajaj, Chandrajit

2012-10-01

91

Evaluating concentration estimation errors in ELISA microarray experiments  

PubMed Central

Background Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to estimate a protein's concentration in a sample. Deploying ELISA in a microarray format permits simultaneous estimation of the concentrations of numerous proteins in a small sample. These estimates, however, are uncertain due to processing error and biological variability. Evaluating estimation error is critical to interpreting biological significance and improving the ELISA microarray process. Estimation error evaluation must be automated to realize a reliable high-throughput ELISA microarray system. In this paper, we present a statistical method based on propagation of error to evaluate concentration estimation errors in the ELISA microarray process. Although propagation of error is central to this method and the focus of this paper, it is most effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization, and statistical diagnostics when evaluating ELISA microarray concentration estimation errors. Results We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of concentration estimation errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error. We summarize the results with a simple, three-panel diagnostic visualization featuring a scatterplot of the standard data with logistic standard curve and 95% confidence intervals, an annotated histogram of sample measurements, and a plot of the 95% concentration coefficient of variation, or relative error, as a function of concentration. Conclusions This statistical method should be of value in the rapid evaluation and quality control of high-throughput ELISA microarray analyses. Applying propagation of error to a variety of ELISA microarray concentration estimation models is straightforward. Displaying the results in the three-panel layout succinctly summarizes both the standard and sample data while providing an informative critique of applicability of the fitted model, the uncertainty in concentration estimates, and the quality of both the experiment and the ELISA microarray process.
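
A small sketch of the propagation-of-error step for a logistic standard curve, using a four-parameter logistic form and a numerical derivative (the specific curve form, parameter names, and the delta-method simplification are assumptions for illustration, not the paper's model):

```python
import numpy as np

def conc_from_4pl(y, a, b, c, d):
    """Invert the four-parameter logistic standard curve
    y = d + (a - d) / (1 + (x / c)**b) for the concentration x."""
    return c * ((a - d) / (y - d) - 1.0) ** (1.0 / b)

def conc_relative_error(y, sd_y, a, b, c, d, eps=1e-6):
    """Propagation of error through the inverted curve: a numerical
    derivative dx/dy converts the response SD into a concentration SD,
    reported here as a coefficient of variation (relative error)."""
    dxdy = (conc_from_4pl(y + eps, a, b, c, d)
            - conc_from_4pl(y - eps, a, b, c, d)) / (2.0 * eps)
    return abs(dxdy) * sd_y / conc_from_4pl(y, a, b, c, d)
```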

Daly, Don Simone; White, Amanda M; Varnum, Susan M; Anderson, Kevin K; Zangar, Richard C

2005-01-01

92

Three estimators for the poisson regression model with measurement errors  

Microsoft Academic Search

We consider two consistent estimators for the parameters of the linear predictor in the Poisson regression model, where the\\u000a covariate is measured with errors. The measurement errors are assumed to be normally distributed with known error variance\\u000a ?\\u000a \\u000a u\\u000a \\u000a 2\\u000a . The SQS estimator, based on a conditional mean-variance model, takes the distribution of the latent covariate into account,\\u000a and

Alexander Kukush; Hans Schneeweis; Roland Wolf

2004-01-01

93

Systematic rotation and receiver location error effects on parabolic trough annual performance  

NASA Astrophysics Data System (ADS)

The effects of systematic geometrical design errors and random optical errors on the accuracy and subsequent economic viability of solar parabolic trough concentrating collectors were studied to enable designers to choose and specify necessary design and material constraints. A three-dimensional numerical model of a parabolic trough was analyzed with the inclusion of errors of pointing and mechanical deformation, and data from a typical meteorological year. System errors determined as percentage standard deviations provided the range of a study for systematic rotation and receiver location errors. The two types of errors were found to produce compounded effects. It is concluded that the designer must choose performance levels which take into account existence of errors, must know to what level the errors can be eliminated and at what cost, and should make provisions for monitoring the day-to-day on-line focus of the troughs.

Treadwell, G. W.; Grandjean, N. R.

1981-11-01

94

Error Analysis and Sampling Design for Ocean Flux Estimation  

NASA Astrophysics Data System (ADS)

In this paper we present error analysis and sampling design for estimating flux of heat or other quantities (e.g., nitrate, oxygen) in the ocean using mobile or stationary platforms. Flux estimation requires sampling the current velocity and the flux variable (e.g., temperature for heat flux) along a boundary. When we run autonomous underwater vehicles (AUVs) on a boundary to take spatial samples, the ocean field evolves over time. This non-synoptic sampling leads to an estimation error of the flux. We formulate the estimation error as a function of the spatio-temporal variability of the studied ocean field, the cross-section area of the boundary, the number of deployed vehicles, and the vehicle speed. Based on the error metric, we design AUV sampling strategies for flux estimation. We also compare the flux estimation performance of using AUVs with that of using traditional mooring arrays. As an example, we study heat flux estimation using statistics from Monterey Bay and for various sampling configurations. The sampling requirement is determined by how fast the product of temperature and normal current velocity varies in time and space. We estimate the temporal and spatial scales of temperature and current velocity by measurements from moorings, bottom-mounted stations, ships, and AUVs. It is found that current velocity varies much faster than temperature in both temporal and spatial domains, hence the variability of their product is quite high. The consequence of this variability on heat flux estimation is presented.

Zhang, Y.; Bellingham, J. G.; Davis, R. E.; Chavez, F.

2006-12-01

95

Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors  

NASA Astrophysics Data System (ADS)

This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 standard. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to the manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of Supplement 1 of the GUM.
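
The contrast between the two evaluations can be made concrete; a sketch with invented numbers (the ±a specification and the offset readings are purely illustrative):

```python
import numpy as np

# Type B: a manufacturer's +/- a offset specification with a uniform PDF
# (maximum entropy principle) gives u_B = a / sqrt(3) and implicitly
# assumes a centred, zero-mean error.
a = 0.5e-3                                   # +/- 0.5 mV spec (illustrative)
u_B = a / np.sqrt(3)

# Type A: repeated offset readings under varying ambient conditions; the
# observed distribution need not be centred, so the mean acts as a
# correction and the standard error quantifies the residual uncertainty.
offsets = np.array([0.21, 0.24, 0.20, 0.26, 0.23]) * 1e-3   # invented data
correction = offsets.mean()                  # systematic part to subtract
u_A = offsets.std(ddof=1) / np.sqrt(offsets.size)
```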

Attivissimo, F.; Cataldo, A.; Giaquinto, N.; Savino, M.

2012-03-01

96

Experimental investigation of the systematic error on photomechanic methods induced by camera self-heating.  

PubMed

The systematic error of photomechanic methods caused by self-heating-induced image expansion when using a digital camera was systematically studied, and a new physical model to explain the mechanism has been proposed and verified. The experimental results showed that the thermal expansion of the camera outer case and lens mount, rather than mechanical components within the camera, was the main reason for image expansion. The corresponding systematic errors for both image-analysis- and fringe-analysis-based photomechanic methods were analyzed and measured, and error compensation techniques were then proposed and verified. PMID:23546150

Ma, Qinwei; Ma, Shaopeng

2013-03-25

97

Inverse halftoning and kernel estimation for error diffusion.  

PubMed

Two different approaches to the inverse halftoning of error-diffused images are considered. The first approach uses linear filtering and statistical smoothing to reconstruct a gray-scale image from a given error-diffused image. The second approach can be viewed as a projection operation, where one assumes the error diffusion kernel is known and finds a gray-scale image that will be halftoned into the same binary image. Two projection algorithms, viz., minimum mean square error (MMSE) projection and maximum a posteriori probability (MAP) projection, which differ in the way the inverse quantization step is performed, are developed. Among the filtering and the two projection algorithms, MAP projection provides the best performance for inverse halftoning. Using techniques from adaptive signal processing, we suggest a method for estimating the error diffusion kernel from the given halftone. This means that the projection algorithms can be applied in the inverse halftoning of any error-diffused image without requiring any a priori information on the error diffusion kernel. It is shown that the kernel estimation algorithm combined with MAP projection provides the same performance in inverse halftoning as the case where the error diffusion kernel is known. PMID:18289997

Wong, P W

1995-01-01

98

On the observational estimation of thermal instrumental and refractional errors.  

NASA Astrophysics Data System (ADS)

From astronomical observations, level readings, and temperature determinations with a transit instrument at the Pulkovo observatory, an estimate is obtained of some instrumental and refraction errors caused by an inhomogeneous thermal field around the instrument. The contribution of abnormal refraction ?Vz to the time correction, depending on the inclination of the atmospheric level at the moment of star transit, is estimated.

Gorshkov, V.; Shcherbakova, N.

99

Stop the rise in nursing errors--systematically.  

PubMed

The number of reported nursing errors in hospitals has increased in each of the past 5 years. Information technology can help nurses make better care decisions in today's harried hospital environment. PMID:15127464

Simpson, R L

2000-11-01

100

MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS  

SciTech Connect

Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
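
A minimal sketch of the replicate-randomization idea, assuming Poisson counting statistics and a placeholder analysis function standing in for the full TGS/CTEN reconstruction:

```python
import numpy as np

def monte_carlo_sigma(counts, analyze, n_rep=100, seed=0):
    """Estimate the standard deviation of an assay result by pushing N
    Poisson-randomized replicates of the raw counting data through the
    full analysis and taking the spread of the outcomes."""
    rng = np.random.default_rng(seed)
    results = np.array([analyze(rng.poisson(counts)) for _ in range(n_rep)])
    return results.mean(), results.std(ddof=1)

# 'analyze' stands in for the full reconstruction (e.g. tomographic);
# here a trivial placeholder: sum the counts and apply a calibration.
mean_mass, sigma_mass = monte_carlo_sigma(np.full(64, 400.0),
                                          lambda c: 0.01 * c.sum())
```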

R. ESTEP; ET AL

2000-06-01

101

Multiscale Systematic Error Correction via Wavelet-Based Band Splitting and Bayesian Error Modeling in Kepler Light Curves  

NASA Astrophysics Data System (ADS)

Kepler photometric data contain significant systematic and stochastic errors as they come from the Kepler Spacecraft. The main causes of the systematic errors are changes in the photometer focus due to thermal changes in the instrument, and also residual spacecraft pointing errors. It is the main purpose of the Presearch Data Conditioning (PDC) module of the Kepler Science processing pipeline to remove these systematic errors from the light curves. While PDC has recently seen a dramatic performance improvement by means of a Bayesian approach to systematic error correction and improved discontinuity correction, there is still room for improvement. One problem of the current (Kepler 8.1) implementation of PDC is that injection of high frequency noise can be observed in some light curves. Although this high frequency noise does not negatively impact the general cotrending, an increased noise level can make detection of planet transits or other astrophysical signals more difficult. The origin of this noise injection is that high frequency components of light curves sometimes get included in detrending basis vectors characterizing long term trends. Similarly, small scale features like edges can sometimes get included in basis vectors which otherwise describe low frequency trends. As a side effect of removing the trends, detrending with these basis vectors can then also mistakenly introduce these small scale features into the light curves. A solution to this problem is to perform a separation of scales, such that small scale features and large scale features are described by different basis vectors. We present our new multiscale approach that employs wavelet-based band splitting to decompose small scale from large scale features in the light curves. The PDC Bayesian detrending can then be performed on each band individually to correct small and large scale systematics independently. Funding for the Kepler Mission is provided by the NASA Science Mission Directorate.
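
The band-splitting step can be sketched with a generic discrete wavelet transform (here via the PyWavelets package; the wavelet choice and level count are illustrative, not the PDC implementation):

```python
import numpy as np
import pywt  # PyWavelets

def band_split(flux, wavelet="db4", level=5):
    """Split a light curve into wavelet bands by reconstructing one
    coefficient scale at a time with all other scales zeroed.  The
    returned bands sum back to the input, so large- and small-scale
    systematics can be detrended independently."""
    coeffs = pywt.wavedec(flux, wavelet, level=level)
    bands = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c)
                for j, c in enumerate(coeffs)]
        bands.append(pywt.waverec(keep, wavelet)[: len(flux)])
    return bands  # bands[0] is the coarsest trend, bands[-1] the finest detail
```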

Stumpe, Martin C.; Smith, J. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

2012-05-01

102

Comprehensive Error Estimates for Geophysical Retrievals from Microwave Radiometers  

NASA Astrophysics Data System (ADS)

Currently, more than 20 years of consistently processed and carefully intercalibrated data are available from microwave (MW) radiometers. Both atmospheric and oceanic products can be retrieved from these measurements, including ocean surface wind speed, sea surface temperature, total precipitable water, cloud water, rain, and deep-layer averages of atmospheric temperature. The product retrievals are based on a radiative transfer model (RTM) for the surface and intervening atmosphere. Thus, the accuracy of the retrieved products depends on the accuracy of the RTM, the accuracy of the measured brightness temperatures that serve as inputs to the retrieval algorithm, and the accuracy of any ancillary data used to adjust for unmeasured geophysical conditions. In addition, for products that are averages over time or space, sampling error can become important. We are developing comprehensive systems to assign errors to each geophysical retrieval. These estimates are based both on a formal error analysis that begins with estimates of the error in the inputs to the retrieval algorithms, and on empirical comparisons of retrieved parameters with other estimates of the same parameter from different measurement systems. For deep-layer temperature soundings, we have developed a Monte-Carlo-based analysis that provides error estimates on all relevant space and time scales. A similar system is under development for data from microwave imagers.

Wentz, F. J.; Mears, C. A.; Hilburn, K. A.; Smith, D. K.

2010-12-01

103

Error estimates for Gaussian quadratures of analytic functions  

NASA Astrophysics Data System (ADS)

For analytic functions the remainder term of a Gaussian quadrature formula and its Kronrod extension can be represented as a contour integral with a complex kernel. We study these kernels on elliptic contours with foci at the points ±1 and sum of semi-axes ϱ > 1 for the Chebyshev weight functions of the first, second and third kind, and derive a representation of their difference. Using this representation and following Kronrod's method of obtaining a practical error estimate in numerical integration, we derive new error estimates for Gaussian quadratures.

Milovanovic, Gradimir V.; Spalevic, Miodrag M.; Pranic, Miroslav S.

2009-12-01

104

Error estimates in the measurement of anisotropic magnetic susceptibility  

NASA Astrophysics Data System (ADS)

Hext's (1963) first-order analysis for error estimation in measurements of second-rank tensors is applied to the torque meter and the orthogonal-coil induction instrument, both representatives of the class of instruments that measures differences in magnetic susceptibility. The conventional measurement design for the torque meter is rotatable; an efficient and nearly rotatable design is proposed for the orthogonal-coil instrument. Error estimates are derived for a set of anisotropy parameters, Kmean, H and ?. The applicability of a first-order analysis is discussed and illustrated by simulation studies.

Owens, W. H.

2000-08-01

105

Tolerable systematic errors in Really Large Hadron Collider dipoles  

Microsoft Academic Search

Maximum allowable systematic harmonics for arc dipoles in a Really Large Hadron Collider are derived. The possibility of half cell lengths much greater than 100 meters is justified. A convenient analytical model evaluating horizontal tune shifts is developed, and tested against a sample high field collider.

S. Peggs

1996-01-01

106

A systematic error analysis of robotic manipulators: application to a high performance medical robot  

Microsoft Academic Search

A systematic methodology to calculate the end-effector position and orientation errors of a robotic manipulator is presented. The method treats the physical error sources in a unified manner during the system's design so that the effect they have on the end-effector positioning accuracy can be compared and the dominant sources identified. Based on this methodology, a computer program has been

C. Mavroidis; S. Dubowsky; P. Drouet; J. Hintersteiner; J. Flanz

1997-01-01

107

IMRT optimization including random and systematic geometric errors based on the expectation of TCP and NTCP  

Microsoft Academic Search

The purpose of this work was the development of a probabilistic planning method with biological cost functions that does not require the definition of margins. Geometrical uncertainties were integrated in tumor control probability (TCP) and normal tissue complication probability (NTCP) objective functions for inverse planning. For efficiency reasons random errors were included by blurring the dose distribution and systematic errors

Marnix G. Witte; Joris van der Geer; Christoph Schneider; Joos V. Lebesque; Markus Alber; Marcel van Herk

2007-01-01

108

Systematic Errors in an Isoperibol Solution Calorimeter Measured with Standard Reference Reactions.  

National Technical Information Service (NTIS)

Systematic errors in an isoperibol calorimeter of a widely-used design, amounting to about 0.5 percent of the endothermic enthalpy of solution of SRM 1655 (KCl) in H2O, were found to be the result of errors in heat leak corrections due to inadequate stirr...

M. V. Kilday

1980-01-01

109

Changing Beliefs and Systematic Rational Forecast Errors with Evidence from Foreign Exchange  

Microsoft Academic Search

This paper offers a new interpretation of the appearance, in the early 1980s, of systematic errors in forecasting the dollar exchange rate. Following a change in the process of a fundamental variable, market participants revise their beliefs about the process using Bayes' Rule. Since the market does not immediately recognize the change, forecast errors are, on average, wrong during a

Karen K. Lewis

1989-01-01

110

A SYSTEMATIC LITERATURE REVIEW TO IDENTIFY AND CLASSIFY SOFTWARE REQUIREMENT ERRORS Technical Report MSU-071207  

Microsoft Academic Search

Most software quality research has focused on identifying faults (i.e., information incorrectly recorded in an artifact). Because software still exhibits incorrect behavior, a different approach is needed. This paper presents a systematic literature review to understand whether using information about the source of a fault (i.e., an error) can be helpful. Once the usefulness of errors is established, then

Gursimran Singh Walia; Jeffrey C. Carver

111

Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.  

PubMed

High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest using bias correction methods to address the conceptually similar method-selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price. PMID:23845182
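
A minimal numerical sketch of the weighted-mean idea described above (Python; the toy error matrix and the win-frequency weighting rule are illustrative assumptions, not the authors' exact estimator):

    import numpy as np

    rng = np.random.default_rng(0)
    B, K = 50, 8                      # subsampling iterations, tuning values
    # err[b, k]: test error of tuning value k on subsample b (toy numbers)
    err = rng.uniform(0.1, 0.4, size=(B, K))

    naive = err.min(axis=1).mean()    # best value only -> tuning-biased
    # weight each tuning value by how often it is selected as best
    wins = np.bincount(err.argmin(axis=1), minlength=K)
    w = wins / wins.sum()
    corrected = (err.mean(axis=0) * w).sum()   # weighted mean of errors

    print(f"naive: {naive:.3f}, weighted-mean corrected: {corrected:.3f}")
    # being a weighted mean, 'corrected' is guaranteed to lie between the
    # smallest and largest per-value mean errors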

Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

2013-07-11

112

A methodology for systematic geometric error compensation in five-axis machine tools  

Microsoft Academic Search

To enhance the accuracy, an efficient methodology was developed and described for systematic geometric error correction and compensation in five-axis machine tools. The methodology is capable of compensating the overall effect of all position-dependent and position-independent errors which contribute to the volumetric workspace. It was implemented on a five-axis grinding machine for error compensation and for the check of its

Abdul Wahid Khan; Wuyi Chen

2011-01-01

113

Error propagation and scaling for tropical forest biomass estimates.  

PubMed Central

The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass.
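
A toy Monte Carlo illustration of how such independent error sources combine into a total AGB uncertainty (Python; all error magnitudes are invented placeholders, not the paper's estimates):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    agb_plot = 250.0                      # Mg/ha, nominal plot estimate

    meas  = rng.normal(0.0, 0.05, n)      # (i) tree measurement error, 5%
    allom = rng.normal(0.0, 0.20, n)      # (ii) allometric model error, 20%
    samp  = rng.normal(0.0, 0.10, n)      # (iii) plot sampling error, 10%

    agb = agb_plot * (1 + meas) * (1 + allom) * (1 + samp)
    rel = 100 * agb.std() / agb.mean()
    print(f"mean {agb.mean():.1f} Mg/ha, sd {agb.std():.1f} Mg/ha ({rel:.1f}%)")
    # for small independent relative errors the combined sd approaches the
    # quadrature sum sqrt(0.05**2 + 0.20**2 + 0.10**2) ~ 23%, dominated by
    # the allometric term, consistent with the paper's qualitative finding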

Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando

2004-01-01

114

Error Estimation for the Linearized Auto-Localization Algorithm  

PubMed Central

The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter ? is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
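
A generic sketch of the first-order Taylor propagation the LAL analysis relies on (Python; the map g and the numbers are stand-ins, not the linearized trilateration equations themselves):

    import numpy as np

    def g(x):
        # toy nonlinear map from measured quantities to derived quantities
        return np.array([np.hypot(x[0], x[1]), x[0] * x[2]])

    def jacobian(f, x, h=1e-6):
        fx = f(x)
        J = np.empty((fx.size, x.size))
        for j in range(x.size):
            xp = x.copy(); xp[j] += h
            J[:, j] = (f(xp) - fx) / h   # forward-difference derivative
        return J

    x0 = np.array([3.0, 4.0, 2.0])       # nominal measurements
    Cx = np.diag([0.01, 0.01, 0.04])     # measurement covariance
    J = jacobian(g, x0)
    Cy = J @ Cx @ J.T                    # first-order output covariance
    print(np.sqrt(np.diag(Cy)))          # 1-sigma errors of derived values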

Guevara, Jorge; Jimenez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

2012-01-01

115

A precise error bound for quantum phase estimation.  

PubMed

Quantum phase estimation is one of the key algorithms in the field of quantum computing, but up until now, only approximate expressions have been derived for the probability of error. We revisit these derivations, and find that by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing, where an expected error is calculated. Expressions for two special cases of the formula are also developed, in the limit as the number of qubits in the quantum computer approaches infinity and in the limit as the number of extra qubits added to improve reliability goes to infinity. It is found that this formula is useful in validating computer simulations of the phase estimation procedure and in avoiding the overestimation of the number of qubits required in order to achieve a given reliability. This formula thus brings improved precision in the design of quantum computers. PMID:21573006
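
For comparison, the textbook approximate sizing rule that such an exact error formula refines can be sketched as follows (Python; this is the standard conservative bound, not the exact expression derived in the paper):

    import math

    def qpe_qubits(n_bits: int, eps: float) -> int:
        # qubits needed for n accurate bits with failure probability <= eps
        return n_bits + math.ceil(math.log2(2.0 + 1.0 / (2.0 * eps)))

    for eps in (0.25, 0.1, 0.01):
        print(f"eps={eps}: t={qpe_qubits(8, eps)} qubits")
    # an exact formula avoids the qubit overestimate this bound can produce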

Chappell, James M; Lohe, Max A; von Smekal, Lorenz; Iqbal, Azhar; Abbott, Derek

2011-05-10

116

Development of an integrated system for estimating human error probabilities  

SciTech Connect

This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). This project had as its main objective the development of a Human Reliability Analysis (HRA), knowledge-based expert system that would provide probabilistic estimates for potential human errors within various risk assessments, safety analysis reports, and hazard assessments. HRA identifies where human errors are most likely, estimates the error rate for individual tasks, and highlights the most beneficial areas for system improvements. This project accomplished three major tasks. First, several prominent HRA techniques and associated databases were collected and translated into an electronic format. Next, the project started a knowledge engineering phase where the expertise, i.e., the procedural rules and data, were extracted from those techniques and compiled into various modules. Finally, these modules, rules, and data were combined into a nearly complete HRA expert system.

Auflick, J.L.; Hahn, H.A.; Morzinski, J.A.

1998-12-01

117

DtaRefinery, a Software Tool for Elimination of Systematic Errors from Parent Ion Mass Measurements in Tandem Mass Spectra Data Sets*  

PubMed Central

Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and high throughput MS/MS fragmentation have become widely available in recent years, allowing for significantly better discrimination between true and false MS/MS peptide identifications by the application of a relatively narrow window for maximum allowable deviations of measured parent ion masses. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. Based on our previous studies of systematic biases in mass measurement errors, here, we have designed an algorithm and software tool that eliminates the systematic errors from the peptide ion masses in MS/MS data. We demonstrate that the elimination of the systematic mass measurement errors allows for the use of tighter criteria on the deviation of measured mass from theoretical monoisotopic peptide mass, resulting in a reduction of both false discovery and false negative rates of peptide identification. A software implementation of this algorithm called DtaRefinery reads a set of fragmentation spectra, searches for MS/MS peptide identifications using a FASTA file containing expected protein sequences, fits a regression model that can estimate systematic errors, and then corrects the parent ion mass entries by removing the estimated systematic error components. The output is a new file with fragmentation spectra with updated parent ion masses. The software is freely available.
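
A minimal sketch of the regression-and-subtract idea (Python; the covariate, polynomial model, and synthetic data are illustrative assumptions, as DtaRefinery's actual regression model may differ):

    import numpy as np

    rng = np.random.default_rng(2)
    mz = rng.uniform(300, 1500, 500)                  # parent ion m/z values
    bias_ppm = 2.0 + 0.003 * (mz - 900)               # synthetic systematic error
    err_ppm = bias_ppm + rng.normal(0, 1.0, mz.size)  # observed mass errors

    coef = np.polyfit(mz, err_ppm, deg=2)             # fit systematic component
    corrected = err_ppm - np.polyval(coef, mz)        # subtract it

    print(f"before: mean {err_ppm.mean():+.2f} ppm, sd {err_ppm.std():.2f} ppm")
    print(f"after:  mean {corrected.mean():+.2f} ppm, sd {corrected.std():.2f} ppm")
    # with the systematic component removed, a tighter mass-error window can
    # be applied, cutting both false discoveries and false negatives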

Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

2010-01-01

118

Data Errors and an Error Estimation for Ill-Posed Problems  

Microsoft Academic Search

In this paper we shall discuss the problem of how to use a priori information for constructing regularizing algorithms and error estimation while solving ill-posed problems. We shall consider the following types of a priori information: (1) a compactness of a set of solutions; (2) a sourcewise representation of a solution with a compact operator.

A. G. Yagola; A. S. Leonov; V. N. Titarenko

2002-01-01

119

A posteriori error estimation and adaptive meshing using error in constitutive relation  

Microsoft Academic Search

The paper presents a method to control the quality of finite element solutions using the error in the constitutive relation. An a posteriori estimator is built up. Its construction is general and gives quantitative results about the accuracy of the solution. Problems of quality control and mesh optimisation are also discussed. Several examples are presented. A method used to compute

J.-F. Remacle; P. Dular; A. Genon; W. Legros

1996-01-01

120

Error Estimates for the Approximation of the Effective Hamiltonian  

SciTech Connect

We study approximation schemes for the cell problem arising in homogenization of Hamilton-Jacobi equations. We prove several error estimates concerning the rate of convergence of the approximation scheme to the effective Hamiltonian, both in the optimal control setting and in the calculus of variations setting.

Camilli, Fabio [Univ. dell'Aquila, Dip. di Matematica Pura e Applicata (Italy)], E-mail: camilli@ing.univaq.it; Capuzzo Dolcetta, Italo [Univ. di Roma 'La Sapienza', Dip. di Matematica (Italy)], E-mail: capuzzo@mat.uniroma1.it; Gomes, Diogo A. [Instituto Superior Tecnico, Departamento de Matematica (Portugal)], E-mail: dgomes@math.ist.utl.pt

2008-02-15

121

Condition and Error Estimates in Numerical Matrix Computations  

SciTech Connect

This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

Konstantinov, M. M. [University of Architecture, Civil Engineering and Geodesy, 1046 Sofia (Bulgaria); Petkov, P. H. [Technical University of Sofia, 1000 Sofia (Bulgaria)

2008-10-30

122

A systematic study of source error in source mask optimization  

NASA Astrophysics Data System (ADS)

The Source Mask Optimization (SMO) technique is an advanced RET with the goal of extending optical lithography lifetime by enabling low k1 imaging [1,2]. Most of the literature concerning SMO has so far focused on PV (process variation) band, MEEF and PW (process window) aspects to judge the performance of the optimization, as in traditional OPC [3]. In analogy to the MEEF impact for low k1 imaging, we investigate the source error impact, as SMO sources can have rather complicated forms depending on the degree of freedom allowed during optimization. For this study we use the Tachyon SMO tool on a 22nm metal design test case. Free-form and parametric source solutions are obtained using MEEF and PW requirements as the main criteria. For each type of source, a source perturbation is introduced to study the impact on lithography performance. Based on the findings, we draw conclusions on the choice between free-form and parametric sources and on the importance of source error in the optimization process.

Alleaume, C.; Yesilada, E.; Farys, V.; Depre, L.; Arnoux, V.; Li, Zhipan; Trouiller, Y.; Serebriakov, A.

2010-09-01

123

Background error covariance estimation for atmospheric CO2 data assimilation  

NASA Astrophysics Data System (ADS)

In any data assimilation framework, the background error covariance statistics play the critical role of filtering the observed information and determining the quality of the analysis. For atmospheric CO2 data assimilation, however, the background errors cannot be prescribed via traditional forecast or ensemble-based techniques as these fail to account for the uncertainties in the carbon emissions and uptake, or for the errors associated with the CO2 transport model. We propose an approach where the differences between two modeled CO2 concentration fields, based on different but plausible CO2 flux distributions and atmospheric transport models, are used as a proxy for the statistics of the background errors. The resulting error statistics: (1) vary regionally and seasonally to better capture the uncertainty in the background CO2 field, and (2) have a positive impact on the analysis estimates by allowing observations to adjust predictions over large areas. A state-of-the-art four-dimensional variational (4D-VAR) system developed at the European Centre for Medium-Range Weather Forecasts (ECMWF) is used to illustrate the impact of the proposed approach for characterizing background error statistics on atmospheric CO2 concentration estimates. Observations from the Greenhouse gases Observing SATellite "IBUKI" (GOSAT) are assimilated into the ECMWF 4D-VAR system along with meteorological variables, using both the new error statistics and those based on a traditional forecast-based technique. Evaluation of the four-dimensional CO2 fields against independent CO2 observations confirms that the performance of the data assimilation system improves substantially in the summer, when significant variability and uncertainty in the fluxes are present.

Chatterjee, Abhishek; Engelen, Richard J.; Kawa, Stephan R.; Sweeney, Colm; Michalak, Anna M.

2013-09-01

124

Error estimates and specification parameters for functional renormalization  

NASA Astrophysics Data System (ADS)

We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by truncation depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS-BEC crossover in ultracold atoms. Within a simple truncation, the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.

Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof

2013-07-01

125

Impact of instrument error on the estimated prevalence of overweight and obesity in population-based surveys  

PubMed Central

Background The basis for this study is the fact that instrument error increases the variance of the distribution of body mass index (BMI). Combined with a defined cut-off value this may impact upon the estimated proportion of overweight and obesity. It is important to ensure high quality surveillance data in order to follow trends of estimated prevalence of overweight and obesity. The purpose of the study was to assess the impact of instrument error, due to uncalibrated scales and stadiometers, on prevalence estimates of overweight and obesity. Methods Anthropometric measurements from a nationally representative sample were used; the Norwegian Child Growth study (NCG) of 3474 children. Each of the 127 participating schools received a reference weight and a reference length to determine the correction value. The correction value corresponds to instrument error and is the difference between the true value and the measured, uncorrected weight and height at local scales and stadiometers. Simulations were used to determine the expected implications of instrument errors. To systematically investigate this, the coefficient of variation (CV) of instrument error was used in the simulations and was increased successively. Results Simulations showed that the estimated prevalence of overweight and obesity increased systematically with the size of instrument error when the mean instrument error was zero. The estimated prevalence was 16.4% with no instrument error and was, on average, overestimated by 0.5 percentage points based on the observed variance of instrument error from the NCG study. Further, the estimated prevalence was 16.7% with 1% CV of instrument error, and increased to 17.8%, 19.5% and 21.6% with 2%, 3% and 4% CV of instrument error, respectively. Conclusions Failure to calibrate measuring instruments is likely to lead to overestimation of the prevalence of overweight and obesity in population-based surveys.
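
The mechanism is easy to reproduce in a toy simulation (Python; population parameters and the cut-off are invented for illustration, not the NCG values):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000
    bmi = rng.normal(17.0, 2.5, n)            # "true" BMI values (toy numbers)
    cutoff = 19.0                             # illustrative overweight cut-off

    print(f"CV=0%: prevalence {(bmi > cutoff).mean():.2%}")
    for cv in (0.01, 0.02, 0.03, 0.04):
        noise = rng.normal(0.0, cv * np.abs(bmi))  # zero-mean instrument error
        print(f"CV={cv:.0%}: prevalence {((bmi + noise) > cutoff).mean():.2%}")
    # the cut-off sits in the upper tail, so extra variance pushes more values
    # across it: prevalence rises with the error CV even when the bias is zero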

2013-01-01

126

SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.  

SciTech Connect

Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) independent slide pitch test by use of a non-tilted reference beam, (3) non-tilted reference test combined with tilted sample, (4) penta-prism scanning mode without a reference beam correction, (5) non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

2007-08-25

127

Systematic error reduction: non-tilted reference beam method for long trace profiler  

NASA Astrophysics Data System (ADS)

Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: 1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, 2) independent slide pitch test by use of a non-tilted reference beam, 3) non-tilted reference test combined with tilted sample, 4) penta-prism scanning mode without a reference beam correction, 5) non-tilted reference using a second optical head, and 6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

Qian, Shinan; Qian, Kun; Hong, Yiling; Sheng, Liusi; Ho, Tonglin; Takacs, Peter

2007-09-01

128

Discretization error estimation and exact solution generation using the method of nearby problems.  

SciTech Connect

The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
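
For contrast, the Richardson-extrapolation estimate that MNP avoids requires solutions on two grids (standard formula, supplied for reference: f_h is the solution on spacing h, r the grid refinement ratio, p the formal order of accuracy):

    \bar{f} \approx f_h + \frac{f_h - f_{rh}}{r^p - 1},
    \qquad
    \varepsilon_h = \bar{f} - f_h \approx \frac{f_h - f_{rh}}{r^p - 1},

so the discretization error of the fine-grid solution is inferred from the difference between the two solutions, whereas MNP needs only one additional solve on the same grid.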

Sinclair, Andrew J. (Auburn University Auburn, AL); Raju, Anil (Auburn University Auburn, AL); Kurzen, Matthew J. (Virginia Tech Blacksburg, VA); Roy, Christopher John (Virginia Tech Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech Blacksburg, VA)

2011-10-01

129

Drug treatment of inborn errors of metabolism: a systematic review  

PubMed Central

Background The treatment of inborn errors of metabolism (IEM) has seen significant advances over the last decade. Many medicines have been developed and the survival rates of some patients with IEM have improved. Dosages of drugs used for the treatment of various IEM can be obtained from a range of sources but tend to vary among these sources. Moreover, the published dosages are not usually supported by the level of existing evidence, and they are commonly based on personal experience. Methods A literature search was conducted to identify key material published in English in relation to the dosages of medicines used for specific IEM. Textbooks, peer reviewed articles, papers and other journal items were identified. The PubMed and Embase databases were searched for material published since 1947 and 1974, respectively. The medications found and their respective dosages were graded according to their level of evidence, using the grading system of the Oxford Centre for Evidence-Based Medicine. Results 83 medicines used in various IEM were identified. The dosages of 17 medications (21%) had grade 1 level of evidence, 61 (74%) had grade 4, two medications were at grades 2 and 3 (one each), and three had grade 5. Conclusions To the best of our knowledge, this is the first review to address this matter and the authors hope that it will serve as a quickly accessible reference for medications used in this important clinical field.

Alfadhel, Majid; Al-Thihli, Khalid; Moubayed, Hiba; Eyaid, Wafaa; Al-Jeraisy, Majed

2013-01-01

130

Systematic and random errors between collocated satellite ice water path observations  

NASA Astrophysics Data System (ADS)

There remains large disagreement between ice-water path (IWP) values in observational data sets, largely because the sensors observe different parts of the ice particle size distribution. A detailed comparison of retrieved IWP from satellite observations in the Tropics (±30° latitude) in 2007 was made using collocated measurements. The radio detection and ranging (radar)/light detection and ranging (lidar) (DARDAR) IWP data set, based on combined radar/lidar measurements, is used as a reference because it provides arguably the best estimate of the total column IWP. For each data set, usable IWP dynamic ranges are inferred from this comparison. IWP retrievals based on solar reflectance measurements, in the moderate resolution imaging spectroradiometer (MODIS), advanced very high resolution radiometer-based Climate Monitoring Satellite Applications Facility (CMSAF), and Pathfinder Atmospheres-Extended (PATMOS-x) datasets, were found to be correlated with DARDAR over a large IWP range (~20-7000 g m-2). The random errors of the collocated data sets have a close to lognormal distribution, and the combined random error of MODIS and DARDAR is less than a factor of 2, which also sets the upper limit for MODIS alone. In the same way, the upper limit for the random error of all considered data sets is determined. Data sets based on passive microwave measurements, microwave surface and precipitation products system (MSPPS), microwave integrated retrieval system (MiRS), and collocated microwave only (CMO), are largely correlated with DARDAR for IWP values larger than approximately 700 g m-2. The combined uncertainty between these data sets and DARDAR in this range is slightly less than that between MODIS and DARDAR, but the systematic bias is nearly an order of magnitude larger.

Eliasson, S.; Holl, G.; Buehler, S. A.; Kuhn, T.; Stengel, M.; Iturbide-Sanchez, F.; Johnston, M.

2013-03-01

131

Systematic biases in parameter estimation of binary black-hole mergers  

NASA Astrophysics Data System (ADS)

Parameter estimation of binary black-hole merger events in gravitational-wave data relies on matched-filtering techniques which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing nonspinning numerical-relativity waveforms. We quantify the systematic bias by using a Markov chain Monte Carlo algorithm to sample the posterior distribution function of noise-free data, and compare the offset of the maximum a posteriori waveform parameters (the bias) to the width of the distribution, which we refer to as the statistical error. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios. These biases grow to be comparable to the statistical errors at high ground-based-instrument signal-to-noise ratios (SNR ~ 50), but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors, but for astrophysical black hole mass estimates the absolute biases (of at most a few percent) are still fairly small.

Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

2013-05-01

132

Divergent estimation error in portfolio optimization and in linear regression  

NASA Astrophysics Data System (ADS)

The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges for a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition, it is accompanied by a number of critical phenomena, and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance, and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.

Kondor, I.; Varga-Haszonits, I.

2008-08-01

133

Sensor Analytics: Radioactive gas Concentration Estimation and Error Propagation  

SciTech Connect

This paper develops the mathematical statistics of a radioactive gas quantity measurement and associated error propagation. The probabilistic development is a different approach to deriving attenuation equations and offers easy extensions to more complex gas analysis components through simulation. The mathematical development assumes a sequential process of three components: (I) the collection of an environmental sample, (II) component gas extraction from the sample through the application of gas separation chemistry, and (III) the estimation of radioactivity of component gases.

Anderson, Dale N.; Fagan, Deborah K.; Suarez, Reynold; Hayes, James C.; McIntyre, Justin I.

2007-04-15

134

Errors in Estimating Accruals: Implications for Empirical Research  

Microsoft Academic Search

This paper examines the impact of measuring accruals as the change in successive balance sheet accounts, as opposed to measuring accruals directly from the statement of cash flows. Our primary finding is that studies using a balance sheet approach to test for earnings management are potentially contaminated by measurement error in accruals estimates. In particular, if the partitioning variable used to indicate the presence of earnings

Daniel W. Collins; Paul Hribar

1999-01-01

135

Effects and removal of systematic errors in crosshole georadar attenuation tomography  

Microsoft Academic Search

We have developed an algorithm that allows crosshole georadar amplitude data contaminated with systematic errors to be tomographically inverted. The effects of the errors, which may be due to variable antenna-borehole coupling, the groundwater table, and 3-D heterogeneities in the vicinity of one or more boreholes, are included in a series of transmitter and receiver amplitude-correction factors. Tests with synthetic georadar

Hansruedi Maurer; Martin Musil

2004-01-01

136

Rigorous covariance propagation of geoid errors to geodetic MDT estimates  

NASA Astrophysics Data System (ADS)

The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time the MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is attacked, if at all, empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also taking all correlations into account. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the filter process shall be integrated consistently into the covariance propagation, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
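
The core propagation step can be sketched compactly (Python; the matrices are random stand-ins for the linearized functionals that map spherical-harmonic coefficients, including the filtering, to geostrophic velocities):

    import numpy as np

    rng = np.random.default_rng(4)
    n_coef, n_out = 60, 5
    L = rng.normal(size=(n_coef, n_coef))
    Cx = L @ L.T / n_coef                   # full, correlated coefficient VCM
    A = rng.normal(size=(n_out, n_coef))    # linearized functionals + filter

    Cv_full = A @ Cx @ A.T                  # rigorous: keeps all correlations
    Cv_diag = (A * np.diag(Cx)) @ A.T       # variances only, no covariances
    print(np.sqrt(np.diag(Cv_full)))
    print(np.sqrt(np.diag(Cv_diag)))        # ignoring correlations can badly
                                            # mis-state the derived errors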

Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

2012-04-01

137

Mean-squared error and threshold SNR prediction of maximum-likelihood signal parameter estimation with estimated colored noise covariances  

Microsoft Academic Search

An interval error-based method (MIE) of predicting mean squared error (MSE) performance of maximum-likelihood estimators (MLEs) is extended to the case of signal parameter estimation requiring intermediate estimation of an unknown colored noise covariance matrix; an intermediate step central to adaptive array detection and parameter estimation. The successful application of MIE requires good approximations of two quantities: 1) interval error

Christ D. Richmond

2006-01-01

138

CADNA: a library for estimating round-off error propagation  

NASA Astrophysics Data System (ADS)

The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code. Catalogue identifier: AEAT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 53,420. No. of bytes in distributed program, including test data, etc.: 566,495. Distribution format: tar.gz. Programming language: Fortran. Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM. Operating system: LINUX, UNIX. Classification: 4.14, 6.5, 20. Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time. Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors. The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic. Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays. Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected. References: The CADNA library, URL address: http://www.lip6.fr/cadna. J.-M. Chesneaux, L'arithmétique Stochastique et le Logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995. J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261. J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
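
The principle of Discrete Stochastic Arithmetic can be imitated in a few lines (Python; this only emulates randomized rounding with explicit perturbations and drops the Student-t factor of the full CESTAC estimate, whereas CADNA randomizes the actual rounding mode):

    import math
    import numpy as np

    rng = np.random.default_rng(5)

    def computation():
        # toy computation with catastrophic cancellation; r() emulates a
        # random rounding of each intermediate at machine precision
        r = lambda x: x * (1.0 + rng.uniform(-1.0, 1.0) * 2.0**-52)
        a = r(1.0 + 1e-12)
        return r((a - 1.0) * 1e12)

    runs = np.array([computation() for _ in range(3)])  # CADNA uses 3 runs
    mean, sd = runs.mean(), runs.std(ddof=1)
    digits = math.log10(abs(mean) / sd) if sd > 0 else 15.0
    print(f"result ~ {mean:.6g}, ~{digits:.0f} exact significant digits")
    # a plain float run prints 16 digits; the spread across randomized runs
    # reveals that only a handful of them are actually trustworthy here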

Jézéquel, Fabienne; Chesneaux, Jean-Marie

2008-06-01

139

Progress in radar data quality control and error covariance estimation  

NASA Astrophysics Data System (ADS)

In Y02, a prototype single-Doppler wind retrieval package was developed for real-time applications of level-II Doppler radar data by the Data Assimilation Group at NSSL and CIMMS/OU. A three-step dealiasing technique and multivariate scheme were designed and incorporated into this package. The package was installed on a two-processor PC-based workstation running the Linux operating system to reduce the cost of hardware and software. This system package has been running with real-time level-II data (from KTLX in Oklahoma and from eight radars in New England, see http://gaussian.gcn.ou.edu:8080/cgi-bin/product_ne.pl?KTLX, and http://gaussian.gcn.ou.edu:8080/NewEngland) since June 2002. It produces real-time displays of the retrieved vector winds and makes data files available on-line. The package has been continuously tested and improved since then. A real-time link of Terminal Doppler Weather Radar (TDWR) data from OKC airport was also established for the retrieval package. A new method of statistical analysis of the innovation vector (discrete arrays of observation minus independent analysis values at observation points) was developed and is being used to estimate TDWR radar observation error covariances and retrieval error covariances (for the above package). Some of the detailed techniques developed for radar data quality control and error estimation will be presented at the conference.

Xu, Qin

2003-04-01

140

Iraq War mortality estimates: A systematic review  

Microsoft Academic Search

BACKGROUND: In March 2003, the United States invaded Iraq. The subsequent number, rates, and causes of mortality in Iraq resulting from the war remain unclear, despite intense international attention. Understanding mortality estimates from modern warfare, where the majority of casualties are civilian, is of critical importance for public health and protection afforded under international humanitarian law. We aimed to review

Christine Tapp; Frederick M Burkle Jr; Kumanan Wilson; Tim Takaro; Gordon H Guyatt; Hani Amad; Edward J Mills

2008-01-01

141

Improved soundings and error estimates using AIRS/AMSU data  

NASA Astrophysics Data System (ADS)

AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case-by-case error estimates for retrieved geophysical parameters and for the channel-by-channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described, as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

Susskind, Joel

2006-06-01

142

Moderate alcohol use and reduced mortality risk: Systematic error in prospective studies  

Microsoft Academic Search

The majority of prospective studies on alcohol use and mortality risk indicates that abstainers are at increased risk of mortality from both all causes and coronary heart disease (CHD). This meta-analysis of 54 published studies tested the extent to which a systematic misclassification error was committed by including as 'abstainers' many people who had reduced or stopped drinking, a phenomenon

Kaye Middleton Fillmore; William C. Kerr; Tim Stockwell; Tanya Chikritzhs; Alan Bostrom

2006-01-01

143

Investigation of systematical overlay errors limiting litho process performance of thick implant resists  

Microsoft Academic Search

Tapered resist profiles have been found to cause a detrimental effect on the overlay measurement capability, affecting lithography processes which utilize thick implant resist. Particularly, for resist thicknesses greater than 1.5 µm, the systematic contribution to the overlay error becomes predominant. In CMOS manufacturing, these resist types are being used mainly for high energy well implants. As design rules progressively

Alexandra G. Grandpierre; Roberto Schiwon; Jens-. Bruch; Christoph Nacke; Uwe P. Schroeder

2004-01-01

144

Systematic error sources in a measurement of G using a cryogenic torsion pendulum  

Microsoft Academic Search

This dissertation attempts to explore and quantify systematic errors that arise in a measurement of G (the gravitational constant from Newton's Law of Gravitation) using a cryogenic torsion pendulum. It begins by exploring the techniques frequently used to measure G with a torsion pendulum, features of the particular method used at UC Irvine, and the motivations behind those features. It

William Daniel Cross

2009-01-01

145

Multi-satellite rainfall sampling error estimates - a comparative study  

NASA Astrophysics Data System (ADS)

This study focuses on quantifying sampling-related uncertainty in satellite rainfall estimates. We conduct an observing system simulation experiment to estimate the sampling error for various constellations of low-Earth-orbiting and geostationary satellites. There are two types of microwave instruments currently available: cross-track sounders and conical scanners. We evaluate the differences in sampling uncertainty for various satellite constellations that carry instruments of the common type as well as in combination with geostationary observations. A precise orbital model is used to simulate realistic satellite overpasses with orbital shifts taken into account. With this model we resampled rain gauge time series to simulate satellite rainfall estimates free of retrieval and calibration errors. We concentrate on two regions, Germany and Benin, areas with different precipitation regimes. Our results show that the sampling uncertainty for all satellite constellations does not differ greatly between the areas despite the differences in local precipitation patterns. The addition of 3-hourly geostationary observations provides an equal performance improvement in Germany and Benin, reducing rainfall undersampling by 20-25% of the total rainfall amount. We do not find a significant difference in rainfall sampling between conical imagers and cross-track sounders.
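
A stripped-down version of such a sampling experiment (Python; the revisit pattern and rain statistics are toy assumptions standing in for the orbital model and gauge data):

    import numpy as np

    rng = np.random.default_rng(6)
    minutes = 30 * 24 * 60                        # one month of 1-min gauge data
    rain = rng.exponential(0.02, minutes) * (rng.random(minutes) < 0.05)

    def sampled_total(revisit_min):
        idx = np.arange(0, minutes, revisit_min)  # regular "overpass" times
        return rain[idx].mean() * minutes         # scale mean rate to a month

    truth = rain.sum()
    for revisit in (30, 180, 360):                # GEO-like to LEO-like gaps
        err = 100 * (sampled_total(revisit) - truth) / truth
        print(f"revisit {revisit:4d} min: sampling error {err:+6.1f}%")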

Itkin, M.; Loew, A.

2012-10-01

146

Error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples  

NASA Astrophysics Data System (ADS)

We have developed a fast method that gives high-order error estimates of piecewise smooth functions in high dimensions at low computational cost. The method uses polynomial annihilation to estimate the smoothness of local regions of arbitrary samples in stochastic simulations. We compare the error estimation of this method to Gaussian-process error estimation techniques.

Archibald, Rick

2013-10-01

147

Estimation of the Error in Assessing the Genetically Significant Dose.  

National Technical Information Service (NTIS)

The data required for the calculation of the genetically significant dose (GSD) are subject to errors. The size of the error for the GSD depends on these errors. If, in evaluating the error, the true errors of the individual data are used, the maximum err...

W. Pietzsch

1977-01-01

148

DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets  

SciTech Connect

Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.

Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

2009-12-16

149

Convergence and error estimation in free energy calculations using the weighted histogram analysis method  

PubMed Central

The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this paper, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimal allocation of the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations.
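
A toy version of the coarse-grained estimate described above (Python; force constant, window spacing, and the sampling model are illustrative, with a quadratic reference free energy F(q) = 5 q^2 so the result can be checked):

    import numpy as np

    rng = np.random.default_rng(7)
    k = 50.0                                  # harmonic restraint constant
    centers = np.linspace(-1.0, 1.0, 21)      # umbrella window centers

    # fake "sampling": for F(q) = (a/2) q^2 with a = 10, the biased mean
    # position in window c is k*c/(k + a); add statistical noise
    mean_q = np.array([np.mean(k * c / (k + 10.0) + rng.normal(0, 0.05, 2000))
                       for c in centers])
    mean_force = k * (centers - mean_q)       # mean restraining force
    dFdq = mean_force                         # ~ free-energy gradient at <q>

    # integrate the gradient over the window mean positions (trapezoid rule)
    F = np.concatenate(([0.0],
        np.cumsum(0.5 * (dFdq[1:] + dFdq[:-1]) * np.diff(mean_q))))
    print(F - F.min())                        # recovers ~5 * mean_q**2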

Zhu, Fangqiang; Hummer, Gerhard

2012-01-01

150

Nuclear power plant fault diagnosis using neural networks with error estimation by series association  

Microsoft Academic Search

The accuracy of the diagnosis obtained from a nuclear power plant fault-diagnostic advisor using neural networks is addressed in this paper in order to ensure the credibility of the diagnosis. A new error estimation scheme called error estimation by series association provides a measure of the accuracy associated with the advisor's diagnoses. This error estimation is performed by a secondary

Keehoon Kim; Eric B. Bartlett

1996-01-01

151

Systematic Errors in Resistivity and IP Data Acquisition: Are We Interpreting the Earth or the Instrument?  

NASA Astrophysics Data System (ADS)

For decades, resistivity and induced polarization (IP) measurements have been important tools for near-surface geophysical investigations. Recently, sophisticated, multi-channel, multi-electrode acquisition systems have displaced older, simpler systems, allowing collection of large, complex, three-dimensional data series. Generally, these new digital acquisition systems are better than their analog ancestors at dealing with noise from external sources. However, they are prone to a number of systematic errors. Since these errors are non-random and repeatable, the field geophysicist may be blissfully unaware that while his/her field data may be very precise, they may not be particularly accurate. We have begun the second phase of a research project to improve our understanding of these types of errors. The objective of the research is not to indict any particular manufacturer's instrument but to understand the magnitude of systematic errors in typical, modern data acquisition. One important source of noise results from the tendency for these systems to both send the source current and monitor potentials through common multiplexer circuits and along the same cable bundle. Often, the source current is transmitted at hundreds of volts while the measured potentials are a few tens of millivolts. Thus, even tiny amounts of leakage from the transmitter wires/circuits to the receiver wires/circuits can corrupt or overwhelm the data. For example, in a recent survey, we found that a number of substantial anomalies correlated better to the multi-conductor cable used than to the subsurface. Leakage errors in cables are roughly proportional to the length of the cable and the contact impedance of the electrodes but vary dramatically with the construction and type of wire insulation. Polyvinylchloride (PVC) insulation, the type used in most inexpensive wire and cables, is extremely noisy. Not only does PVC tend to leak current from conductor to conductor, but the leakage currents tend to have large phase shifts/time lags that mimic IP effects. A second source of substantial systematic errors is the tendency of these systems to use the same, simple metal electrodes as current sources for some measurements and as receiver points at other times. Using the electrode as a current source results in the electrode retaining substantial voltage (often hundreds of millivolts) that decays over time. The form of this decay voltage can be fairly complex, making it difficult to remove even with long periods of signal averaging. Finally, there are a number of other, smaller but potentially significant systematic errors, such as errors due to the limited common-mode rejection of the multi-channel receivers and even leakage of potential from receiver to receiver when electrodes are shared between adjacent measurement channels.

La Brecque, D. J.

2006-12-01

152

A Fourier domain model for estimating astrometry errors due to static and quasi-static optical surface errors  

NASA Astrophysics Data System (ADS)

Context. The wavefront aberrations due to optical surface errors in adaptive optics systems and science instruments can be a significant error source for high precision astrometry. Aims: This report derives formulas for evaluating these errors which may be useful in developing astrometry error budgets and optical surface quality specifications. Methods: A Fourier domain approach is used, and the errors on each optical surface are modeled as "phase screens" with stationary statistics at one or several conjugate ranges from the optical system pupil. Three classes of error are considered: (i) errors in initially calibrating the effects of static surface errors; (ii) the effects of beam translation, or "wander," across optical surfaces due to (for example) instrument boresighting error; and (iii) quasistatic surface errors which change from one observation to the next. Results: For each of these effects, we develop formulas describing the position estimation errors in a single observation of a science field, as well as the differential error between two separate observations. Sample numerical results are presented for the three classes of error, including some sample computations for the Thirty Meter Telescope and the NFIRAOS first-light adaptive optics system.

Ellerbroek, B.

2013-04-01

153

Evolution of model systematic errors in the Tropical Atlantic Basin from coupled climate hindcasts  

NASA Astrophysics Data System (ADS)

Significant systematic errors in the tropical Atlantic Ocean are common in state-of-the-art coupled ocean-atmosphere general circulation models. In this study, a set of ensemble hindcasts from the NCEP coupled forecast system (CFS) is used to examine the initial growth of the coupled model bias. These CFS hindcasts are 9-month integrations starting from perturbed real-time oceanic and atmospheric analyses for 1981-2003. The large number of integrations from a variety of initial states covering all months provides a good opportunity to examine how the model systematic errors grow. The monthly climatologies of ensemble hindcasts from various initial months are compared with both observed and analyzed oceanic and atmospheric datasets. Our analyses show that two error patterns are dominant in the hindcasts. One is the warming of the sea surface temperature (SST) in the southeastern tropical Atlantic Ocean. This error grows faster in boreal summer and fall and peaks in November-December at around 2°C in the open ocean. It is caused by an excessive model surface shortwave radiative flux in this region, especially from boreal summer to fall. The excessive radiative forcing is in turn caused by the CFS's inability to reproduce the observed amount of low cloud cover in the southeastern ocean and its seasonal increase. According to a comparison between the seasonal climatologies from the CFS hindcasts and a long-term simulation of the atmospheric model forced with observed SST, the CFS low cloud and radiation errors are inherent to its atmospheric component. On the other hand, the SST error in CFS is a major cause of the model’s southward bias of the intertropical convergence zone (ITCZ) in boreal winter and spring. An analysis of the SST errors of the 6-month ensemble hindcasts by seven coupled models in the Development of a European Multimodel Ensemble System for Seasonal-to-Interannual Prediction project shows that this SST error pattern is common in coupled climate hindcasts. The second error pattern is an excessive deepening of the model thermocline depth to the north of the equator from the western coast toward the central ocean. This error grows fastest in boreal summer. It is forced by an overly strong local anticyclonic surface wind stress curl and is in turn related to the weakened northeast trade winds in summer and fall. The thermocline error in the northwest delays the annual shoaling of the equatorial thermocline in the Gulf of Guinea remotely through the equatorial waveguide.

Huang, Bohua; Hu, Zeng-Zhen; Jha, Bhaskar

2007-06-01

154

Systematic Sampling  

Microsoft Academic Search

This paper gives an account of the results of an investigation into one-dimensional systematic sampling, i.e. the sampling of sequences of quantitative values by the use of sampling points equally spaced along the sequence. New methods, using what are termed partial systematic samples, are evolved for estimating the systematic sampling error from short sections of sequences of completely enumerated numerical

F. Yates

1948-01-01

155

Treatment of systematic errors in the processing of wide angle sonar sensor data for robotic navigation  

SciTech Connect

A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction from wide-angle sonar sensor data of navigation maps for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.
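
One plausible reading of the four-valued labelling scheme and its combination logic is the small truth table sketched below; the paper's exact combination rules may differ, so this is an assumed reconstruction for illustration only:

    UNKNOWN, EMPTY, OCCUPIED, CONFLICT = "unknown", "empty", "occupied", "conflict"

    def combine(old, new):
        """Combine an existing cell label with a new sonar-derived label."""
        if old == UNKNOWN:
            return new                   # no prior evidence: adopt new label
        if new in (UNKNOWN, old):
            return old                   # no new information, or agreement
        return CONFLICT                  # disagreement, e.g. occupied vs. empty

    # Rapid map update as new readings arrive:
    nav_map = {(3, 4): UNKNOWN}
    for reading in (EMPTY, EMPTY, OCCUPIED):
        nav_map[(3, 4)] = combine(nav_map[(3, 4)], reading)
    print(nav_map[(3, 4)])               # -> conflict, flagged for relabelling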

Beckerman, M.; Oblow, E.M.

1988-04-01

156

Evolution of model systematic errors in the Tropical Atlantic Basin from coupled climate hindcasts  

Microsoft Academic Search

Significant systematic errors in the tropical Atlantic Ocean are common in state-of-the-art coupled ocean–atmosphere general circulation models. In this study, a set of ensemble hindcasts from the NCEP coupled forecast system (CFS) is used to examine the initial growth of the coupled model bias. These CFS hindcasts are 9-month integrations starting from perturbed real-time oceanic and atmospheric analyses for 1981–2003.

Bohua Huang; Zeng-Zhen Hu; Bhaskar Jha

2007-01-01

157

Kepler Presearch Data Conditioning II - A Bayesian Approach to Systematic Error Correction  

NASA Astrophysics Data System (ADS)

With the unprecedented photometric precision of the Kepler spacecraft, significant systematic and stochastic errors on transit signal levels are observable in the Kepler photometric data. These errors, which include discontinuities, outliers, systematic trends, and other instrumental signatures, obscure astrophysical signals. The presearch data conditioning (PDC) module of the Kepler data analysis pipeline tries to remove these errors while preserving planet transits and other astrophysically interesting signals. The completely new noise and stellar variability regime observed in Kepler data poses a significant problem to standard cotrending methods. Variable stars are often of particular astrophysical interest, so the preservation of their signals is of significant importance to the astrophysical community. We present a Bayesian maximum a posteriori (MAP) approach, where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set, which is in turn used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a Bayesian prior and a Bayesian posterior probability distribution function (PDF) which, when maximized, finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection that commonly afflicts simple least-squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian prior PDFs are generated from fits to the light-curve distributions themselves.
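
At its core, the fit described here is a regularized least-squares projection onto the cotrending basis vectors, with the prior shrinking the fit coefficients toward the robust-fit ensemble. The sketch below assumes a Gaussian prior for brevity (the pipeline's actual priors are empirical and non-Gaussian), and all names and numbers are illustrative:

    import numpy as np

    def map_cotrend(flux, basis, prior_mean, prior_var, noise_var):
        """MAP fit of cotrending basis vectors to one light curve.

        With a Gaussian prior on the coefficients, maximizing the posterior
        reduces to ridge-style least squares shrunk toward the prior mean.
        """
        B = np.asarray(basis)                    # (n_cadences, n_vectors)
        A = B.T @ B / noise_var + np.diag(1.0 / prior_var)
        b = B.T @ flux / noise_var + prior_mean / prior_var
        coeffs = np.linalg.solve(A, b)
        return flux - B @ coeffs                 # systematics-removed flux

    rng = np.random.default_rng(1)
    basis = rng.normal(size=(1000, 4))           # synthetic basis vectors
    systematics = basis @ np.array([0.5, -0.2, 0.1, 0.0])
    flux = systematics + rng.normal(0.0, 0.05, 1000)
    clean = map_cotrend(flux, basis, prior_mean=np.zeros(4),
                        prior_var=np.full(4, 0.3), noise_var=0.05**2)
    print(clean.std())                           # close to the 0.05 noise floor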

Smith, Jeffrey C.; Stumpe, Martin C.; Van Cleve, Jeffrey E.; Jenkins, Jon M.; Barclay, Thomas S.; Fanelli, Michael N.; Girouard, Forrest R.; Kolodziejczak, Jeffery J.; McCauliff, Sean D.; Morris, Robert L.; Twicken, Joseph D.

2012-09-01

158

A SYSTEMATIC APPROACH TO ESTIMATING UNCERTAINTY IN PRESSURE MEASUREMENT  

Microsoft Academic Search

For any measurement to be meaningful, the result of the measurement must be accompanied by a statement of its uncertainty. The evaluation of uncertainties associated with pressure measurement is a sometimes complex but important task. This paper presents a systematic approach for estimating measurement uncertainty by providing a worked example for the case of pressure measurement by a pneumatic dead-weight

Anil Agarwal

159

Systematic Error in Mechanical Measures of Damage During Four-Point Bending Fatigue of Cortical Bone  

PubMed Central

Accumulation of fatigue microdamage in cortical bone specimens is commonly measured by a modulus or stiffness degradation after normalizing tissue heterogeneity by the initial modulus or stiffness of each specimen measured during a preloading step. In the first experiment, the initial specimen modulus defined using linear elastic beam theory (LEBT) was shown to be nonlinearly dependent on the preload level, which subsequently caused systematic error in the amount and rate of damage accumulation measured by the LEBT modulus degradation. Therefore, the secant modulus is recommended for measurements of the initial specimen modulus during preloading. In the second experiment, different measures of mechanical degradation were directly compared and shown to result in widely varying estimates of damage accumulation during fatigue. After loading to 400,000 cycles, the normalized LEBT modulus decreased by 26% and the creep strain ratio decreased by 58%, but the normalized secant modulus experienced no degradation and histology revealed no significant differences in microcrack density. The LEBT modulus was shown to include the combined effect of both elastic (recovered) and creep (accumulated) strain. Therefore, at minimum, both the secant modulus and creep should be measured throughout a test to most accurately indicate damage accumulation and account for different damage mechanisms. Histology further revealed indentation of tissue adjacent to roller supports, with significant sub-surface damage beneath large indentations, accounting for 22% of the creep strain on average. The indentation of roller supports resulted in inflated measures of the LEBT modulus degradation and creep. The results of this study suggest that investigations of fatigue microdamage in cortical bone should avoid the use of four-point bending unless no other option is available.

Landrigan, Matthew D.; Roeder, Ryan K.

2009-01-01

160

A Posteriori Error Estimation for a Nodal Method in Neutron Transport Calculations  

SciTech Connect

An a posteriori error analysis of the spatial approximation is developed for the one-dimensional Arbitrarily High Order Transport-Nodal method. The error estimator preserves the order of convergence of the method when the mesh size tends to zero with respect to the L^2 norm. It is based on the difference between two discrete solutions that are available from the analysis. The proposed estimator is decomposed into error indicators to allow the quantification of local errors. Some test problems with isotropic scattering are solved to compare the behavior of the true error to that of the estimated error.

Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.

1999-11-03

161

Accurate estimation of phase-shifting error in digital holography  

Microsoft Academic Search

Phase shifting error is a major error source in phase-shifting digital holography. It affects the quality of the reconstructed object and causes errors in its phase and amplitude calculation. This paper presents a simple method to accurately retrieve the actual phase-shifting value used in practical hologram recording. It therefore provides the possibility of completely eliminating the phase-shifting error in digital

Shuqun Zhang

2004-01-01

162

A geometry-based error estimation for cross-ratios  

Microsoft Academic Search

For choosing specific cross-ratios as 2D projective coordinates in various computer vision applications, a reasonable error analysis model is usually required. This investigation adopts the assumption of normal distribution for positioning errors of point features in an image to formulate the error variances of cross-ratios. Based on a geometry-based error analysis, a straightforward way of identifying the cross-ratios with minimum

Jain-shing Liu; Jen-hui Chuang

2002-01-01

163

R-free likelihood-based estimates of errors for phases calculated from atomic models  

Microsoft Academic Search

Reasonable assumptions about the statistical properties of errors in an atomic model lead to the probability distributions for the values of structure-factor phases. These distributions contain some generally unknown parameters reflecting how large the model errors are. These parameters must be determined properly to give realistic estimates of phase errors. Maximum-likelihood- based estimates suggested by Lunin & Urzhumtsev (Acta Cryst.

V. Yu. Lunin; T. P. Skovoroda

1995-01-01

164

Cross-Validation and the Bootstrap: Estimating the Error Rate of a Prediction Rule  

Microsoft Academic Search

A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased, but can be highly variable. This article discusses bootstrap estimates of prediction error, which can be thought

Bradley Efron; Robert Tibshirani

1995-01-01

165

On decoding (31, 16, 7) quadratic residue code up to its error correcting capacity with bit-error probability estimates  

NASA Astrophysics Data System (ADS)

The quadratic residue codes are a class of error-correcting codes with interesting mathematics. Among them, the (31, 16, 7) quadratic residue code is the code with reducible generator polynomial and three-error-correcting capacity. The algebraic decoding algorithm for the (32, 16, 8) quadratic residue code was developed by Reed et al. (1990). In this paper, a simplified decoding algorithm is proposed. The algorithm uses bit-error probability estimates, first developed by Reed in an MIT Lincoln Laboratory report (1959), to cancel the third error and then uses the algebraic decoding algorithm mentioned above to correct the remaining two errors. Simulation results show that this modified decoding algorithm slightly reduces the decoding complexity for correcting the third error while maintaining the same BER performance in additive white Gaussian noise (AWGN). Also, the flowchart of the above decoding algorithm is illustrated with Fig. 1.

Lin, Tsung-Ching; Shih, Pei-Yu; Su, Wen-Ku; Truong, Trieu-Kien

2010-08-01

166

An examination of the Southern California field test for the systematic accumulation of the optical refraction error in geodetic leveling  

NASA Astrophysics Data System (ADS)

Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicated any analysis of the test results. If the fewer than one-third of the sections that failed to meet second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction-contaminated long-sight-length survey and the less contaminated short-sight-length survey.

Castle, Robert O.; Brown, Byron W., Jr.; Gilmore, Thomas D.; Mark, Robert K.; Wilson, R. Clark

1983-11-01

167

Systematic and random error in repeated measurements of temporal and distance parameters of gait after stroke  

Microsoft Academic Search

Objective: To obtain intersession estimates of error for temporal and distance (TD) parameters of gait in a sample of stroke patients undertaking inpatient rehabilitation.Design: Thirty-one stroke patients were measured with an instrumented footswitch system (after a median of 46 days poststroke; interquartile range = 26 to 63) walking over a 10-meter distance a total of four times on 3 consecutive

Matthew D. Evans; Patricia A. Goldie; Keith D. Hill

1997-01-01

168

Investigation of systematical overlay errors limiting litho process performance of thick implant resists  

NASA Astrophysics Data System (ADS)

Tapered resist profiles have been found to cause a detrimental effect on the overlay measurement capability, affecting lithography processes which utilize thick implant resist. Particularly, for resist thicknesses greater than 1.5 µm, the systematical contribution to the overlay error becomes predominant. In CMOS manufacturing, these resist types are being used mainly for high energy well implants. As design rules progressively shrink, the overlay requirements are getting tighter, such that the limits of the process capability are reached. Since the resist thickness cannot be reduced due to the requirements of the implant process, it becomes inevitable to reduce the systematical overlay error for the litho process involving thick resists. The following analysis concentrates on the tapers of overlay marks printed on thick i-line positive resists. Conventionally, overlay between two litho layers is measured from box-in-box marks with respect to a reference layer, where the statistical shift between the boxes is expected to provide the biggest source of residuals. We observed however that an even bigger error could be introduced by an unevenness of the i-line resist tapers, adding asymmetrical chip magnification. The inclination of these tapers depends on the proximity and surface of the surrounding features and stack variations. We show that by adjusting soft and hard bake temperatures and times, tapers can be significantly reduced, thereby greatly improving the overlay performance.

Grandpierre, Alexandra G.; Schiwon, Roberto; Bruch, Jens-.; Nacke, Christoph; Schroeder, Uwe P.

2004-05-01

169

Surface air temperature simulations by AMIP general circulation models: Volcanic and ENSO signals and systematic errors  

SciTech Connect

Thirty surface air temperature simulations for 1979-88 by 29 atmospheric general circulation models are analyzed and compared with the observations over land. These models were run as part of the Atmospheric Model Intercomparison Project (AMIP). Several simulations showed serious systematic errors, up to 4-5 °C, in globally averaged land air temperature. The 16 best simulations gave rather realistic reproductions of the mean climate and seasonal cycle of global land air temperature, with an average error of -0.9 °C for the 10-yr period. The general coldness of the model simulations is consistent with previous intercomparison studies. The regional systematic errors showed very large cold biases in areas with topography and permanent ice, which implies a common deficiency in the representation of snow-ice albedo in the diverse models. The specification of climatological SST and sea ice, rather than observations, at high latitudes for the first three years (1979-81) caused a noticeable drift in the neighboring land air temperature simulations, compared to the rest of the years (1982-88). Unsuccessful simulation of the extreme warm (1981) and cold (1984-85) periods implies that some variations are chaotic or unpredictable, produced by internal atmospheric dynamics and not forced by global SST patterns.

Mao, J.; Robock, A. [Univ. of Maryland, College Park, MD (United States). Dept. of Meteorology]

1998-07-01

170

A phase dynamic model of systematic error in simple copying tasks.  

PubMed

A crucial insight into handwriting dynamics is embodied in the idea that stable, robust handwriting movements correspond to attractors of an oscillatory dynamical system. We present a phase dynamic model of visuomotor performance involved in copying simple oriented lines. Our studies on human performance in copying oriented lines revealed a systematic error pattern in the orientation of drawn lines, i.e., lines at certain orientations are drawn more accurately than at other values. Furthermore, human subjects exhibit "flips" in direction at certain characteristic orientations. It is argued that this flipping behavior has its roots in the fact that the copying process is inherently ambiguous: a line of given orientation may be drawn in two different (mutually opposite) directions producing the same end result. The systematic error patterns seen in human copying performance are probably a result of the attempt of our visuomotor system to cope with this ambiguity and still be able to produce accurate copying movements. The proposed nonlinear phase-dynamic model explains the experimentally observed copying error pattern and also the flipping behavior with remarkable accuracy. PMID:19784669

Dubey, Saguna; Sambaraju, Sandeep; Cautha, Sarat Chandra; Arya, Vednath; Chakravarthy, V S

2009-09-26

171

The Design of Experiments and the Estimation of Experimental Errors: A Necessary Preparation for Project Work  

ERIC Educational Resources Information Center

Suggests the level at which errors could be treated in sixth form physics (England) and then describes two design exercises in which rough estimations of errors are used in the making of design decisions. (Author/PR)

Tawney, D. A.

1972-01-01

172

Removing the Noise and Systematics while Preserving the Signal - An Empirical Bayesian Approach to Kepler Light Curve Systematic Error Correction  

NASA Astrophysics Data System (ADS)

We present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set which is, in turn, used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a "Bayesian Prior" and a "Bayesian Posterior" PDF (Probability Distribution Function). When maximized, the posterior PDF finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection which commonly afflicts simple Least Squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian Prior PDFs are generated from fits to the light curve distributions themselves versus an analytical approach, which uses a Gaussian fit to the Priors. Recent improvements to the algorithm are presented including entropy cleaning of basis vectors, better light curve normalization methods, application to short cadence data and a goodness metric which can be used to numerically evaluate the performance of the cotrending. The goodness metric can then be introduced into the merit function as a Lagrange multiplier and the fit iterated to improve performance. Funding for the Kepler Discovery Mission is provided by NASA's Science Mission Directorate.

Smith, Jeffrey C.; Stumpe, M. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.

2012-05-01

173

Inverse halftoning and kernel estimation for error diffusion  

Microsoft Academic Search

Two different approaches in the inverse halftoning of error-diffused images are considered. The first approach uses linear filtering and statistical smoothing that reconstructs a gray-scale image from a given error-diffused image. The second approach can be viewed as a projection operation, where one assumes the error diffusion kernel is known, and finds a gray-scale image that will be halftoned into

Ping Wah Wong

1995-01-01

174

Accurate estimation of phase-shifting error in digital holography  

NASA Astrophysics Data System (ADS)

Phase-shifting error is a major error source in phase-shifting digital holography. It affects the quality of the reconstructed object and causes errors in its phase and amplitude calculation. This paper presents a simple method to accurately retrieve the actual phase-shifting value used in practical hologram recording. It therefore provides the possibility of completely eliminating the phase-shifting error in digital holography. The proposed method is based on solving the object wave reconstruction equation. The effectiveness of the proposed method is verified by both mathematical analysis and computer simulation.

Zhang, Shuqun

2004-08-01

175

Nonlocal treatment of systematic errors in the processing of sparse and incomplete sensor data  

SciTech Connect

A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse and incomplete sensor data. We present a detailed application of this methodology to the construction of navigation maps from wide-angle sonar sensor data acquired by the HERMIES IIB mobile robot. Our uncertainty approach is explicitly nonlocal. We use a binary labelling scheme and a simple logic for the rule of combination. We then correct erroneous interpretations of the data by analyzing pixel patterns of conflict and by imposing consistent labelling conditions. 9 refs., 6 figs.

Beckerman, M.; Oblow, E.M.

1988-03-01

176

Estimating the Error Rate of a Prediction Rule: Improvement on Cross-Validation  

Microsoft Academic Search

We construct a prediction rule on the basis of some data, and then wish to estimate the error rate of this rule in classifying future observations. Cross-validation provides a nearly unbiased estimate, using only the original data. Cross-validation turns out to be related closely to the bootstrap estimate of the error rate. This article has two purposes: to understand better

Bradley Efron

1983-01-01

177

A Comparison of Five Methods for Estimating the Standard Error of Measurement at Specific Score Levels  

Microsoft Academic Search

The Standards for Educational and Psychological Testing (1985) recommended that test publishers provide multiple estimates of the standard error of measurement—one estimate for each of a number of widely spaced score levels. The presumption is that the standard error varies across score levels, and that the interpretation of test scores should take into account the estimate applicable

Leonard S. Feldt; Manfred Steffen; Naim C. Gupta

1985-01-01

178

Estimating spatial and parameter error in parameterized nonlinear reaction-diffusion equations.  

SciTech Connect

A new approach is proposed for the a posteriori error estimation of both global spatial and parameter error in parameterized nonlinear reaction-diffusion problems. The technique is based on linear equations relating the linearized spatial and parameter error to the weak residual. Computable local element error indicators are derived for local contributions to the global spatial and parameter error, along with corresponding global error indicators. The effectiveness of the error indicators is demonstrated using model problems for the case of regular points and simple turning points. In addition, a new turning point predictor and adaptive algorithm for accurately computing turning points are introduced.

Carey, Graham F. (University of Texas at Austin, Austin TX); Carnes, Brian R. (University of Texas at Austin, Austin TX)

2005-05-01

179

A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces  

SciTech Connect

In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R^3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

Ju, Lili [University of South Carolina]; Tian, Li [University of South Carolina]; Wang, Desheng [Nanyang Technological University]

2009-01-01

180

Control Estimation Error of Sampling Method for Passive Measurement  

Microsoft Academic Search

Sampling is increasingly utilized by passive measurement systems to reduce resource consumption. However, the widely adopted static linear sampling selects packets with the same sampling rate (probability) for both large flows and small flows, which leads to intolerably high relative error for small flows. In order to bound the relative error for both small and large flows, we have

Chengchen Hu; Sheng Wang; Jia Tian; Bin Liu

2007-01-01

181

Error estimates and condition numbers for radial basis function interpolation  

Microsoft Academic Search

For interpolation of scattered multivariate data by radial basis functions, an "uncertainty relation" between the attainable error and the condition of the interpolation matrices is proven. It states that the error and the condition number cannot both be kept small. Bounds on the Lebesgue constants are obtained as a byproduct. A variation of the Narcowich-Ward theory of upper bounds on the norm of

Robert Schaback

1995-01-01

182

Speech enhancement using a minimum mean-square error log-spectral amplitude estimator  

Microsoft Academic Search

In this correspondence we derive a short-time spectral amplitude (STSA) estimator for speech signals which minimizes the mean-square error of the log-spectra (i.e., the original STSA and its estimator) and examine it in enhancing noisy speech. This estimator is also compared with the corresponding minimum mean-square error STSA estimator derived previously. It was found that the new estimator is very
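
For reference, the log-spectral amplitude MMSE estimator has a well-known closed-form gain; a minimal per-bin sketch is shown below, assuming the a priori SNR xi and a posteriori SNR gamma have already been estimated (e.g., by decision-directed smoothing, which is omitted here), so everything beyond the gain formula is an illustrative assumption:

    import numpy as np
    from scipy.special import exp1   # exponential integral E1

    def log_mmse_gain(xi, gamma):
        """Log-spectral amplitude MMSE gain per frequency bin.

        G = xi/(1+xi) * exp(0.5 * E1(v)),  v = gamma * xi / (1 + xi),
        with xi the a priori SNR and gamma the a posteriori SNR.
        """
        v = gamma * xi / (1.0 + xi)
        return xi / (1.0 + xi) * np.exp(0.5 * exp1(np.maximum(v, 1e-10)))

    # Apply to one noisy STFT frame (illustrative values):
    noisy_frame = np.fft.rfft(np.random.default_rng(2).normal(size=512))
    xi = np.full(noisy_frame.shape, 2.0)      # assumed a priori SNR
    gamma = np.full(noisy_frame.shape, 3.0)   # assumed a posteriori SNR
    enhanced = log_mmse_gain(xi, gamma) * noisy_frame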

Y. Ephraim; D. Malah

1985-01-01

183

Systematic evaluation of NASA precipitation radar estimates using NOAA/NSSL National Mosaic QPE products  

NASA Astrophysics Data System (ADS)

Proper characterization of the error structure of TRMM Precipitation Radar (PR) quantitative precipitation estimation (QPE) is needed for its use in TRMM combined products, water budget studies and hydrological modeling applications. Due to the variety of sources of error in spaceborne radar QPE (attenuation of the radar signal, influence of land surface, impact of off-nadir viewing angle, etc.) and the impact of correction algorithms, the problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements (GV) using NOAA/NSSL's National Mosaic QPE (NMQ) system. An investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) on the basis of a 3-month-long data sample. A significant effort has been carried out to derive a bias-corrected, robust reference rainfall source from NMQ. The GV processing details will be presented along with preliminary results of PR's error characteristics using contingency table statistics, probability distribution comparisons, scatter plots, semi-variograms, and systematic biases and random errors.
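
The systematic/random decomposition and the contingency-table statistics mentioned above can be sketched for matched PR-GV pairs as follows; the additive error convention, the detection threshold, and the synthetic data are assumptions for illustration:

    import numpy as np

    def error_components(pr, gv):
        """Split PR-minus-reference differences into systematic and random parts."""
        diff = np.asarray(pr) - np.asarray(gv)
        bias = diff.mean()                  # systematic (additive) error
        random_err = diff.std(ddof=1)       # random error about the bias
        return bias, random_err

    def contingency(pr, gv, thresh=0.1):
        """Hit/miss/false-alarm counts for rain detection at a rate threshold."""
        pr_rain, gv_rain = np.asarray(pr) >= thresh, np.asarray(gv) >= thresh
        hits = np.sum(pr_rain & gv_rain)
        misses = np.sum(~pr_rain & gv_rain)
        false_alarms = np.sum(pr_rain & ~gv_rain)
        return hits, misses, false_alarms

    rng = np.random.default_rng(3)
    gv = rng.gamma(2.0, 2.0, 5000)              # synthetic reference rain rates
    pr = 0.9 * gv + rng.normal(0.0, 1.0, 5000)  # synthetic, biased PR estimates
    print(error_components(pr, gv), contingency(pr, gv))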

Kirstetter, P.; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Petersen, W. A.

2011-12-01

184

Apparent anomalies in nuclear Feulgen-DNA contents. Role of systematic microdensitometric errors  

PubMed Central

The Feulgen-DNA contents of human leukocytes, sperm, and oral squames were investigated by scanning and integrating microdensitometry, both with and without correction for residual distribution error and glare. Maximally stained sperm had absorbances which at λmax exceeded the measuring range of the Vickers M86 microdensitometer; this potential source of error could be avoided either by using shorter hydrolysis times or by measuring at an off-peak wavelength. Small but statistically significant apparent differences between leukocyte types were found in uncorrected but not fully corrected measurements, and some apparent differences disappeared when only one of the residual instrumental errors was eliminated. In uncorrected measurements, the apparent Feulgen-DNA content of maximally stained polymorphs measured at λmax was significantly lower than that of squames, while in all experimental series uncorrected measurements showed apparent diploid:haploid ratios significantly greater than two. In fully corrected measurements no significant differences were found between leukocytes and squames, and in four independent estimations the lowest diploid:haploid ratio found was 1.99 +/- 0.05, and the highest 2.03 +/- 0.05. Discrepancies found in uncorrected measurements could be correlated with morphology of the nuclei concerned. Glare particularly affected measurements of relatively compact nuclei such as those of sperm, polymorphs and lymphocytes, while residual distribution error was especially marked with nuclei having a high perimeter:area ratio (e.g. sperm and polymorphs). Uncorrected instrumental errors, especially residual distribution error and glare, probably account for at least some of the previously reported apparent differences between the Feulgen-DNA contents of different cell types. On the basis of our experimental evidence, and a consideration of the published work of others, it appears that within the rather narrow limits of random experimental error there seems little or no reason to postulate either genuine differences in the amounts of DNA present in the cells studied, or nonstoichiometry of a correctly performed Feulgen reaction.

1976-01-01

185

The mathematical origins of the kinetic compensation effect: 2. The effect of systematic errors.  

PubMed

The kinetic compensation effect states that there is a linear relationship between Arrhenius parameters ln A and E for a family of related processes. It is a widely observed phenomenon in many areas of science, notably heterogeneous catalysis. This paper explores mathematical, rather than physicochemical, explanations for the compensation effect in certain situations. Three different topics are covered theoretically and illustrated by examples. Firstly, the effect of systematic errors in experimental kinetic data is explored, and it is shown that these create apparent compensation effects. Secondly, analysis of kinetic data when the Arrhenius parameters depend on another parameter is examined. In the case of temperature programmed desorption (TPD) experiments when the activation energy depends on surface coverage, it is shown that a common analysis method induces a systematic error, causing an apparent compensation effect. Thirdly, the effect of analysing the temperature dependence of an overall rate of reaction, rather than a rate constant, is investigated. It is shown that this can create an apparent compensation effect, but only under some conditions. This result is illustrated by a case study for a unimolecular reaction on a catalyst surface. Overall, the work highlights the fact that, whenever a kinetic compensation effect is observed experimentally, the possibility of it having a mathematical origin should be carefully considered before any physicochemical conclusions are drawn. PMID:22080227
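
The first effect is easy to reproduce numerically: refitting the same process with different systematic temperature offsets scatters the fitted (ln A, E) pairs along a straight line, an apparent compensation effect with no physicochemical content. A small illustrative simulation (all numbers arbitrary, not from the paper):

    import numpy as np

    R = 8.314                                  # J mol^-1 K^-1
    T_true = np.linspace(500.0, 600.0, 8)      # true temperatures (K)
    lnA_true, E_true = 11.0, 100e3             # one single true process
    k = np.exp(lnA_true - E_true / (R * T_true))

    rng = np.random.default_rng(4)
    lnA_fit, E_fit = [], []
    for _ in range(50):
        dT = rng.uniform(-5.0, 5.0)            # systematic thermometer offset
        # Ordinary Arrhenius fit: ln k = ln A - E / (R T_measured)
        slope, intercept = np.polyfit(1.0 / (T_true + dT), np.log(k), 1)
        lnA_fit.append(intercept)
        E_fit.append(-slope * R)

    # The scattered (ln A, E) pairs are almost perfectly collinear:
    print(np.corrcoef(lnA_fit, E_fit)[0, 1])   # close to 1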

Barrie, Patrick J

2011-11-14

186

Integration of the variogram using spline functions for sampling error estimation  

Microsoft Academic Search

The component of the sampling error caused by taking discrete samples from a continuous process is the integration error, IE. This error can be estimated using P.M. Gy's variographic technique. This method involves the integration of the variogram. The variogram can be calculated from a time series of discrete samples. If the variogram is simple, it can be modelled and

Riitta Heikka; Pentti Minkkinen

1998-01-01

187

Inertial sensors in estimating walking speed and inclination: an evaluation of sensor error models.  

PubMed

With the increasing interest in using inertial measurement units (IMU) in human biomechanics studies, methods dealing with inertial sensor measurement errors become more and more important. Pre-test calibration and in-test error compensation are commonly used to minimize the sensor errors and improve the accuracy of walking speed estimation. However, the performance of a given sensor error compensation method depends not only on the accuracy of the calibration or the sensor error evaluation, but also strongly on the selected sensor error model. The best performance can be achieved only when the essential components of the sensor errors are included and compensated. Two new sensor error models, addressing sensor acceleration measurement biases and sensor attachment misalignment, have been developed. The performance of these two error models was evaluated in the shank-mounted IMU-based walking speed/inclination estimation algorithm, with a comparison against an existing error model. The treadmill walking experiment, conducted under both level and incline conditions, demonstrated the importance of sensor error model selection for spatio-temporal gait parameter estimation performance. Accurate walking inclination estimation was made possible with a newly developed sensor error model. PMID:22418894
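
As a schematic of what such a sensor error model contains, the sketch below corrects raw accelerometer samples for a constant bias and a small attachment-misalignment rotation before integrating to speed. The bias and misalignment values, the small-angle rotation, and the rectangular integration are assumptions for illustration, not the authors' algorithm:

    import numpy as np

    def misalignment_matrix(roll, pitch, yaw):
        """Small-angle rotation from sensor frame to segment frame (radians)."""
        return np.array([[1.0, -yaw, pitch],
                         [yaw, 1.0, -roll],
                         [-pitch, roll, 1.0]])

    def corrected_speed(acc_raw, dt, bias, roll, pitch, yaw):
        """Integrate bias- and misalignment-corrected acceleration to speed."""
        Rm = misalignment_matrix(roll, pitch, yaw)
        acc = (acc_raw - bias) @ Rm.T         # apply the sensor error model
        v = np.cumsum(acc * dt, axis=0)       # rectangular (cumulative) integration
        return np.linalg.norm(v, axis=1)

    rng = np.random.default_rng(5)
    acc_raw = rng.normal(0.0, 0.2, size=(200, 3)) + np.array([0.05, 0.0, 0.02])
    speed = corrected_speed(acc_raw, dt=0.01, bias=np.array([0.05, 0.0, 0.02]),
                            roll=0.01, pitch=-0.02, yaw=0.005)
    print(speed[-1])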

Yang, Shuozhi; Laudanski, Annemarie; Li, Qingguo

2012-03-15

188

Comparative analysis of importance sampling techniques to estimate error functions for training neural networks  

Microsoft Academic Search

The application of importance sampling to train neural networks which approximates the Neyman-Pearson detector is considered in this paper. A comparative study with two different error functions is carried out. These two error functions are selected to make the Neyman-Pearson detector approximation possible. The importance sampling technique is used to estimate the error function for training. Some results are presented

Manuel Rosa-Zurera; Pilar Jarabo-Amores; F. Lopez-Ferreras; J. L. Sanz-Gonzalez

2005-01-01

189

A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings  

Microsoft Academic Search

The bookmark standard-setting procedure is an item response theory–based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error sources on standard errors. This study produced several notable results. First, different patterns

Guemin Lee; Daniel M. Lewis

2008-01-01

190

Systematic errors in the correlation method for Johnson noise thermometry: residual correlations due to amplifiers  

NASA Astrophysics Data System (ADS)

Johnson noise thermometers (JNT) measure the equilibrium electrical noise, proportional to thermodynamic temperature, of a sensing resistor. In the correlation method, the same resistor is connected to two amplifiers and a correlation of their outputs is performed, in order to reject amplifiers' noise. Such rejection is not perfect: the residual correlation gives a systematic error in the JNT reading. In order to set an upper limit, or to achieve a correction, for such error, a careful electrical modelling of the amplifiers and connections must be performed. Standard numerical simulation tools are inadequate for such modelling. In the literature, evaluations have been performed by the painstaking solving of analytical modelling. We propose an evaluation procedure for the JNT error due to residual correlations which blends analytical and numerical approaches, with the benefits of both: a rigorous and accurate circuit noise modelling, and a fast and flexible evaluation with a user-friendly commercial tool. The method is applied to a simple but very effective ultralow-noise amplifier employed in a working JNT.

Callegaro, L.; Pisani, M.; Ortolano, M.

2010-06-01

191

Offline parameter estimation using EnKF and maximum-likelihood error covariance estimates  

NASA Astrophysics Data System (ADS)

Parameterizations of physical processes represent an important source of uncertainty in climate models. These processes are governed by physical parameters, most of which are unknown and generally tuned manually. This subjective approach is excessively time-consuming and gives suboptimal results due to the flow dependency of the parameters and potential correlations between them. Moreover, in case of changes in horizontal resolution or parameterization scheme, the physical parameters need to be completely re-evaluated. To overcome these limitations, recent works have proposed to estimate the physical parameters objectively using filtering and inverse techniques. In this presentation, we pursue this approach and propose a novel offline parameter estimation method. More precisely, we build a nonlinear state-space model resolved in an EnKF (Ensemble Kalman Filter) framework where (i) the state of the system corresponds to the unknown physical parameters, (ii) the state evolution is driven as a Gaussian random walk, (iii) the observation operator is the physical process and (iv) observations are perturbed realizations of this physical process with a given set of physical parameters. Then, we use an iterative maximum-likelihood estimation of the error covariance matrices and the first guess or background state of the EnKF. Among the error covariance matrices, we estimate those of the state equation (Q) and the observation equation (R), respectively, to take into account correlations between physical parameters and the flow dependency of the parameters. Properly estimating these covariances, instead of prescribing them arbitrarily and estimating inflation factors, ensures convergence to the optimal physical parameters. The proposed technique is implemented and used to estimate parameters of the subgrid-scale orography scheme implemented in the ECMWF (European Centre for Medium-Range Weather Forecasts) and LMDZ (Laboratoire de Météorologie Dynamique Zoom) models. Using a twin experiment, we demonstrate that our parameter estimation technique is relevant and outperforms the results of the classical EnKF implementation. Moreover, the technique is flexible and could be used in online physical parameter estimation.
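
A stripped-down version of the state-space construction (i)-(iv) might look like the following, with the physical process stood in by a toy quadratic observation operator; the ensemble size, random-walk variance, and operator are illustrative assumptions:

    import numpy as np

    def h(theta, x):
        """Toy stand-in for the physical process (the observation operator)."""
        return theta[0] * x + theta[1] * x**2

    rng = np.random.default_rng(6)
    x = np.linspace(0.0, 1.0, 20)
    theta_true = np.array([1.5, -0.7])
    R_var, Q_std, N = 0.01, 0.02, 50

    ens = rng.normal(0.0, 1.0, size=(N, 2))           # parameter ensemble
    for _ in range(200):                              # assimilation cycles
        y = h(theta_true, x) + rng.normal(0, np.sqrt(R_var), x.size)
        ens += rng.normal(0.0, Q_std, ens.shape)      # (ii) Gaussian random walk
        Hx = np.array([h(m, x) for m in ens])         # (iii) predicted observations
        A, HA = ens - ens.mean(0), Hx - Hx.mean(0)    # ensemble anomalies
        K = (A.T @ HA / (N - 1)) @ np.linalg.inv(
            HA.T @ HA / (N - 1) + R_var * np.eye(x.size))
        y_pert = y + rng.normal(0, np.sqrt(R_var), Hx.shape)  # (iv) perturbed obs
        ens += (y_pert - Hx) @ K.T                    # stochastic EnKF update

    print(ens.mean(0))                                # converges near theta_true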

Tandeo, Pierre; Pulido, Manuel

2013-04-01

192

On error estimator and adaptivity in the meshless Galerkin boundary node method  

NASA Astrophysics Data System (ADS)

In this article, a simple a posteriori error estimator and an effective adaptive refinement process for the meshless Galerkin boundary node method (GBNM) are presented. The error estimator is formulated by the difference between the GBNM solution itself and its L^2-orthogonal projection. With the help of a localization technique, the error is estimated by easily computable local error indicators and hence an adaptive algorithm for h-adaptivity is formulated. The convergence of this adaptive algorithm is verified theoretically in Sobolev spaces. Numerical examples involving potential and elasticity problems are also provided to illustrate the performance and usefulness of this adaptive meshless method.

Li, Xiaolin

2011-12-01

193

On error estimator and adaptivity in the meshless Galerkin boundary node method  

NASA Astrophysics Data System (ADS)

In this article, a simple a posteriori error estimator and an effective adaptive refinement process for the meshless Galerkin boundary node method (GBNM) are presented. The error estimator is formulated by the difference between the GBNM solution itself and its L^2-orthogonal projection. With the help of a localization technique, the error is estimated by easily computable local error indicators and hence an adaptive algorithm for h-adaptivity is formulated. The convergence of this adaptive algorithm is verified theoretically in Sobolev spaces. Numerical examples involving potential and elasticity problems are also provided to illustrate the performance and usefulness of this adaptive meshless method.

Li, Xiaolin

2012-07-01

194

The Effect of Retrospective Sampling on Estimates of Prediction Error for Multifactor Dimensionality Reduction  

PubMed Central

The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Here, we evaluate the bias and variance of the MDR error estimate as both a retrospective and prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates.

Winham, Stacey J.; Motsinger-Reif, Alison A.

2010-01-01

195

Estimation of Linear and Nonlinear Errors-in-Variables Models Using Validation Data  

Microsoft Academic Search

Consistent estimators for linear and nonlinear regression models with measurement errors in variables in the presence of validation data are proposed. The estimation procedures are based on least squares methods with regression functions replaced by wide-sense conditional expectation functions. The methods do not depend on distributional assumptions and are robust against the misspecification of a measurement error model. They are

Lung-Fei Lee; Jungsywan H. Sepanski

1995-01-01

196

Multilevel Error Estimation and Adaptive h Refinement for Cartesian Meshes with Embedded Boundaries  

Microsoft Academic Search

This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using τ-extrapolation.

M. J. Aftosmis; M. J. Berger

2002-01-01

197

A posteriori error estimation for elasto-plastic problems based on duality theory  

Microsoft Academic Search

In this paper we introduce a new approach to a posteriori error estimation for elasto-plastic problems based on the duality theory of the calculus of variations. We show that, in spite of the prevailing view, duality methods provide a viable way for obtaining computable a posteriori error estimates for nonlinear boundary value problems without directly solving the dual problem. Rigorous

Sergey I. Repin; Leonidas S. Xanthis

1996-01-01

198

Cross-Validation, the Jackknife, and the Bootstrap: Excess Error Estimation in Forward Logistic Regression  

Microsoft Academic Search

Given a prediction rule based on a set of patients, what is the probability of incorrectly predicting the outcome of a new patient? Call this probability the true error. An optimistic estimate is the apparent error, or the proportion of incorrect predictions on the original set of patients, and it is the goal of this article to study estimates of

Gail Gong

1986-01-01

199

Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers  

ERIC Educational Resources Information Center

Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

Kim, ChangHwan; Tamborini, Christopher R.

2012-01-01

200

The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements  

SciTech Connect

Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.

Anderson, K.K.

1994-05-01

201

Bounded-error parameter estimation: Noise models and recursive algorithms  

Microsoft Academic Search

This paper deals with some issues involving a parameter estimation approach that yields estimates consistent with the data and the given a priori information. The first part of the paper deals with the relationships between various noise models and the ‘size’ of the resulting membership set, the set of parameter estimates consistent with the data and the a priori information.

Er-Wei Bai; Krishna M. Nagpal; Roberto Tempo

1996-01-01

202

Estimating Standard Errors in Finance Panel Data Sets: Comparing Approaches  

Microsoft Academic Search

In corporate finance and asset pricing empirical work, researchers are often confronted with panel data. In these data sets, the residuals may be correlated across firms or across time, and OLS standard errors can be biased. Historically, researchers in the two literatures have used different solutions to this problem. This paper examines the different methods used in the literature and

Mitchell A. Petersen

2009-01-01

203

Random and systematic field errors in the SNS ring: A study of their effects and compensation  

SciTech Connect

The Accumulator Ring for the proposed Spallation Neutron Source (SNS) is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the studies of these effects and the magnetic corrector schemes for their compensation.

Gardner, C.J.; Lee, Y.Y.; Weng, W.T. [Brookhaven National Lab., Upton, NY (United States)]

1998-08-01

204

Managing Systematic Errors in a Polarimeter for the Storage Ring EDM Experiment  

NASA Astrophysics Data System (ADS)

The EDDA plastic scintillator detector system at the Cooler Synchrotron (COSY) has been used to demonstrate that it is possible using a thick target at the edge of the circulating beam to meet the requirements for a polarimeter to be used in the search for an electric dipole moment on the proton or deuteron. Emphasizing elastic and low Q-value reactions leads to large analyzing powers and, along with thick targets, to efficiencies near 1%. Using only information obtained comparing count rates for oppositely vector-polarized beam states and a calibration of the sensitivity of the polarimeter to rate and geometric changes, the contribution of systematic errors can be suppressed below the level of one part per million.

Stephenson, Edward J.; Storage Ring EDM Collaboration

2011-05-01

205

RANDOM AND SYSTEMATIC FIELD ERRORS IN THE SNS RING: A STUDY OF THEIR EFFECTS AND COMPENSATION  

SciTech Connect

The Accumulator Ring for the proposed Spallation Neutron Source (SNS) [1] is to accept a 1 ms beam pulse from a 1 GeV Proton Linac at a repetition rate of 60 Hz. For each beam pulse, 10^14 protons (some 1,000 turns) are to be accumulated via charge-exchange injection and then promptly extracted to an external target for the production of neutrons by spallation. At this very high intensity, stringent limits (less than two parts in 10,000 per pulse) on beam loss during accumulation must be imposed in order to keep activation of ring components at an acceptable level. To stay within the desired limit, the effects of random and systematic field errors in the ring require careful attention. This paper describes the authors' studies of these effects and the magnetic corrector schemes for their compensation.

Gardner, C.J.; Lee, Y.Y.; Weng, W.T.

1998-06-22

206

Uncertainty modeling of random and systematic errors by means of Monte Carlo and fuzzy techniques  

NASA Astrophysics Data System (ADS)

The standard reference in uncertainty modeling is the “Guide to the Expression of Uncertainty in Measurement (GUM)”. GUM groups the occurring uncertain quantities into “Type A” and “Type B”. Uncertainties of “Type A” are determined with classical statistical methods, while those of “Type B” are obtained from experience and knowledge about an instrument or a measurement process. Both types of uncertainty can have random and systematic error components. Our study focuses on a detailed comparison of probability and fuzzy-random approaches for handling and propagating the different uncertainties, especially those of “Type B”. Whereas a probabilistic approach treats all uncertainties as having a random nature, the fuzzy technique distinguishes between random and deterministic errors. In the fuzzy-random approach the random components are modeled in a stochastic framework, and the deterministic uncertainties are treated by means of a range-of-values search problem. The applied procedure is outlined, showing both the theory and a numerical example for the evaluation of uncertainties in a terrestrial laserscanning (TLS) application.
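
A minimal numerical illustration of the distinction: the random ("Type A") component is propagated by Monte Carlo sampling, while the deterministic systematic ("Type B") component is treated as a range-of-values search over an interval, here by a brute-force grid. The TLS-style measurement function and all numbers are assumed for illustration:

    import numpy as np

    def measurement(distance, angle):
        """Toy TLS-style reduction: horizontal distance from slope distance."""
        return distance * np.cos(angle)

    rng = np.random.default_rng(7)
    n = 100_000
    dist_samples = rng.normal(50.0, 0.002, n)    # Type A: random noise (MC)
    angle_nominal = np.deg2rad(30.0)

    # Type B: unknown-but-bounded systematic angle offset, +/- 0.01 deg,
    # swept over a grid (range-of-values search).
    offsets = np.deg2rad(np.linspace(-0.01, 0.01, 21))

    results = np.array([measurement(dist_samples, angle_nominal + d)
                        for d in offsets])       # shape (offsets, samples)
    random_std = results.std(axis=1).mean()      # random (Type A) spread
    sys_shift = results.mean(axis=1)             # systematic range of values
    print(f"random std = {random_std:.5f} m, systematic range = "
          f"[{sys_shift.min():.5f}, {sys_shift.max():.5f}] m")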

Alkhatib, Hamza; Neumann, Ingo; Kutterer, Hansjörg

2009-06-01

207

X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors  

SciTech Connect

Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low-emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance comes from the inadequate present level of optical and at-wavelength metrology and insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 µrad have become routine. We present recent results from the ALS of temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

2010-07-09

208

LiDAR error estimation with WAsP engineering  

NASA Astrophysics Data System (ADS)

LiDAR measurements of the vertical wind profile at any height between 10 and 150 m are based on the assumption that the measured wind is homogeneous. In reality many factors affect the wind at each measurement point, among which the terrain plays the main role. To model LiDAR measurements and predict the possible error in different wind directions for a given terrain, we have analyzed two experimental data sets from Greece. At both sites LiDAR and met-mast data have been collected, and the same conditions are simulated with the Risø/DTU software WAsP Engineering 2.0. Finally, the measurement data are compared with the model results. The model results are acceptable and very close for one site, while the more complex site returns higher errors at greater heights and in some wind directions.

Bingöl, F.; Mann, J.; Foussekis, D.

2008-05-01

209

The White Dwarf Luminosity Function: Measurement Errors and Estimators  

NASA Astrophysics Data System (ADS)

The white dwarf luminosity function is an important tool for the study of the solar neighborhood, since it allows the determination of a wide range of galactic parameters, the age of the Galactic disk being the most important one. However, the white dwarf luminosity function is not free of biases induced by measurement errors, sampling biases, the Lutz-Kelker bias, or even the contamination of two or more kinematic populations, such as the thick-disk or halo populations. We have used a Monte Carlo simulator to generate a controlled synthetic population of disk white dwarfs, and we analyze the behavior of the 1/Vmax method and of the Choloniewski method for some reasonable assumptions about the measurement errors and the contamination of the sample by the halo white dwarf population.
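
For reference, the 1/Vmax estimator analyzed here weights each star by the inverse of the largest volume within which it would still enter the magnitude-limited sample. A schematic version follows; the survey limit, synthetic sample, bin choices, and the simple spherical geometry are illustrative assumptions:

    import numpy as np

    def vmax_lf(abs_mag, dist_pc, m_lim, bins):
        """1/Vmax luminosity function for a magnitude-limited sample."""
        app_mag = abs_mag + 5.0 * np.log10(dist_pc / 10.0)
        detected = app_mag <= m_lim                 # magnitude-limited cut
        # Maximum distance at which each star stays brighter than m_lim:
        d_max = 10.0 ** (0.2 * (m_lim - abs_mag[detected]) + 1.0)   # pc
        v_max = (4.0 / 3.0) * np.pi * d_max**3      # spherical survey volume
        lf, edges = np.histogram(abs_mag[detected], bins=bins,
                                 weights=1.0 / v_max)
        return lf, edges

    rng = np.random.default_rng(8)
    abs_mag = rng.normal(13.0, 1.5, 500)            # synthetic white dwarfs
    dist = rng.uniform(5.0, 100.0, 500)             # distances (pc)
    lf, edges = vmax_lf(abs_mag, dist, m_lim=18.0, bins=np.arange(8, 18))
    print(lf)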

Torres, S.; García-Berro, E.; Geijo, E. M.; Isern, J.

2007-09-01

210

A Fortran IV Program for Estimating Parameters through Multiple Matrix Sampling with Standard Errors of Estimate Approximated by the Jackknife.  

ERIC Educational Resources Information Center

Described and listed herein with concomitant sample input and output is the Fortran IV program which estimates parameters and standard errors of estimate per parameter for parameters estimated through multiple matrix sampling. The specific program is an improved and expanded version of an earlier version. (Author/BJG)

Shoemaker, David M.

211

A Fortran IV Program for Estimating Parameters through Multiple Matrix Sampling with Standard Errors of Estimate Approximated by the Jackknife.  

ERIC Educational Resources Information Center

Described and listed herein with concomitant sample input and output is the Fortran IV program which estimates parameters and standard errors of estimate per parameter for parameters estimated through multiple matrix sampling. The specific program is an improved and expanded version of an earlier version. (Author/BJG)

Shoemaker, David M.

212

Error Estimates Derived from the Data for Least-Squares Spline Fitting  

SciTech Connect

The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
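
The central idea, estimating the signal-dependent F-error from the difference between two spline fits of different mesh size, can be sketched with scipy's least-squares spline; the knot counts, the test signal, and the Richardson-style scale factor below are illustrative assumptions rather than the paper's exact construction:

    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline

    rng = np.random.default_rng(9)
    t = np.linspace(0.0, 1.0, 400)
    signal = np.sin(2 * np.pi * 3 * t) * np.exp(-t)
    data = signal + rng.normal(0.0, 0.05, t.size)

    def lsq_spline(n_knots):
        """Cubic least-squares spline fit with n_knots uniform interior knots."""
        knots = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]
        return LSQUnivariateSpline(t, data, knots, k=3)(t)

    fit_h = lsq_spline(20)       # finer mesh
    fit_2h = lsq_spline(10)      # coarser mesh (double the knot spacing)

    # F-error estimate from the difference of the two fits; assuming an
    # h^4 scaling for cubic splines, the finer fit's F-error is roughly
    # (difference)/15 (an assumed Richardson-type argument).
    f_error_est = np.abs(fit_2h - fit_h) / 15.0
    print(f_error_est.max(), np.abs(fit_h - signal).max())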

Jerome Blair

2007-06-25

213

On the Impact of Channel Estimation Errors on MAC Protocols for MIMO Ad Hoc Networks  

Microsoft Academic Search

In this paper, we evaluate the performance of a Medium Access Control (MAC) protocol for MIMO ad hoc networks under imperfect channel estimation. To this end, we also present an analysis of channel estimation errors using correlator-based and Minimum Mean-Square Error (MMSE) channel estimators. Unlike similar works, we specifically focus on a scenario where the presence of several simultaneous, symbol-asynchronous

Davide Chiarotto; Paolo Casari; Michele Zorzi

2010-01-01

214

Motion and Structure From Two Perspective Views: Algorithms, Error Analysis, and Error Estimation  

Microsoft Academic Search

Deals with estimating motion parameters and the structure of the scene from point (or feature) correspondences between two perspective views. An algorithm is presented that gives a closed-form solution for motion parameters and the structure of the scene. The algorithm utilizes redundancy in the data to obtain more reliable estimates in the presence of noise. An approach is introduced to

Juyang Weng; Thomas S. Huang; Narendra Ahuja

1989-01-01

215

Numerical experiments on the efficiency of local grid refinement based on truncation error estimates  

NASA Astrophysics Data System (ADS)

Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reynolds numbers. It is shown that criteria where the truncation error is weighted by the volume of the grid cells perform better than using just the truncation error as the criterion. Also it is observed that the efficiency of local refinement increases with the Reynolds number. The truncation error is estimated by restricting the solution to a coarser grid and applying the coarse grid discrete operator. The complication that high truncation error develops at grid level interfaces is also investigated and several treatments are tested.
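
The truncation error estimate described above (restrict the solution to a coarser grid, then apply the coarse-grid discrete operator) reduces to a few lines on a model problem. The sketch below uses a 1-D Poisson stand-in for the paper's finite-volume setting and also forms the cell-size-weighted refinement indicator that the abstract reports performing better:

```python
import numpy as np

def truncation_error_indicator(u_fine, f_coarse, h):
    """Truncation error estimate on a coarse grid, obtained by restricting a
    fine-grid solution and applying the coarse-grid operator (1-D Poisson
    stand-in, -u'' = f).

    u_fine   : solution on the grid with spacing h/2 (nodes nested in the
               coarse grid), including boundary values
    f_coarse : source term at the interior coarse nodes
    """
    u_c = u_fine[::2]                                   # restriction by injection
    lap = (u_c[:-2] - 2.0 * u_c[1:-1] + u_c[2:]) / h ** 2
    tau = -lap - f_coarse                               # truncation error estimate
    # Weighting by the cell size mimics the volume-weighted criterion that
    # was found to outperform the bare truncation error.
    return tau, np.abs(tau) * h
```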

Syrakos, Alexandros; Efthimiou, Georgios; Bartzis, John G.; Goulas, Apostolos

2012-08-01

216

A posteriori error estimates for the Johnson-Nédélec FEM-BEM coupling  

PubMed Central

Only very recently, Sayas [The validity of Johnson–Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM J Numer Anal 2009;47:3451–63] proved that the Johnson–Nédélec one-equation approach from [On the coupling of boundary integral and finite element methods. Math Comput 1980;35:1063–79] provides a stable coupling of finite element method (FEM) and boundary element method (BEM). In our work, we now adapt the analytical results for different a posteriori error estimates developed for the symmetric FEM–BEM coupling to the Johnson–Nédélec coupling. More precisely, we analyze the weighted-residual error estimator, the two-level error estimator, and different versions of (h-h/2)-based error estimators. In numerical experiments, we use these estimators to steer h-adaptive algorithms, and compare the effectivity of the different approaches.

Aurada, M.; Feischl, M.; Karkulik, M.; Praetorius, D.

2012-01-01

217

Nonlinear bounded-error target state estimation using redundant states  

Microsoft Academic Search

When the primary measurement sensor is passive in nature, by which we mean that it does not directly measure range or range rate, there are well-documented challenges for target state estimation. Most estimation schemes rely on variations of the Extended Kalman Filter (EKF), which, in certain situations, suffer from divergence and/or covariance collapse. For this and other reasons, we believe that the

James Anthony Covello

2006-01-01

218

Analysis of some systematic errors affecting altimeter-derived sea surface gradient with application to geoid determination over Taiwan  

Microsoft Academic Search

This paper analyzes several systematic errors affecting sea surface gradients derived from Seasat, Geosat/ERM, Geosat/GM, ERS-1/35d, ERS-1/GM and TOPEX/POSEIDON altimetry. Considering the data noises, the conclusion is: (1) only Seasat needs to correct for the non-geocentricity induced error, (2) only Seasat and Geosat/GM need to correct for the one cycle per revolution error, (3) only Seasat, ERS-1/GM and Geosat/GM

C. Hwang

1997-01-01

219

Error analysis of DOA estimation for short code CDMA systems with a beamforming approach  

Microsoft Academic Search

A beamforming filter bank approach has recently been proposed to estimate the direction of arrival (DOA) in short code code-division multiple-access (CDMA) systems, exploiting both the signal formats at time domain and spatial signatures. The error in DOA estimation is caused by the inverse of a correlation matrix that is obtained with finite data support. To mitigate the DOA estimation

Chiao-Yao Chuang; Xiaoli Yu; C.-C. Jay Kuo

2005-01-01

220

Estimation of protein secondary structure and error analysis from circular dichroism spectra  

Microsoft Academic Search

The estimation of protein secondary structure from circular dichroism spectra is described by a multivariate linear model with noise (Gauss-Markoff model). With this formalism the adequacy of the linear model is investigated, paying special attention to the estimation of the error in the secondary structure estimates. It is shown that the linear model is only adequate for the α-helix

I. H. M. van Stokkum; Hans J. W. Spoelder; Michael Bloemendal; R. van Grondelle; Frans C. A. Groen

1990-01-01

221

A Posteriori Error Estimates for a Discontinuous Galerkin Approximation of Second-Order Elliptic Problems  

Microsoft Academic Search

Several a posteriori error estimators are introduced and analyzed for a discontinuous Galerkin formulation of a model second-order elliptic problem. In addition to residual-type estimators, we introduce some estimators that are couched in the ideas and techniques of domain decomposition. Results of numerical experiments are presented.

Ohannes A. Karakashian; Frédéric Pascal

2003-01-01

222

Human Errors in Medical Practice: Systematic Classification and Reduction with Automated Information Systems  

Microsoft Academic Search

We review the general nature of human error(s) in complex systems and then focus on issues raised by the Institute of Medicine report of 1999. From this background we classify and categorize error(s) in medical practice, including medication, procedural, diagnostic, and clerical error(s). We also review the potential role of software and technology applications in reducing the rate and nature of

D. Kopec; M. H. Kabir; D. Reinharth; O. Rothschild; J. A. Castiglione

2003-01-01

223

Towards an ocean salinity error budget estimation within the SMOS mission  

Microsoft Academic Search

The SMOS (Soil Moisture and Ocean Salinity) mission will provide from 2008 onwards global sea surface salinity estimations over the oceans. This work summarizes several insights gathered in the framework of salinity retrieval studies, aimed to address an overall salinity error budget. The paper covers issues ranging from the impact of auxiliary data on SSS error to the potential exploitation

Roberto Sabia; Adriano Camps; Mercè Vall-llossera; Marco Talone

2007-01-01

224

RELATING ERROR BOUNDS FOR MAXIMUM CONCENTRATION ESTIMATES TO DIFFUSION METEOROLOGY UNCERTAINTY (JOURNAL VERSION)  

EPA Science Inventory

The paper relates the magnitude of the error bounds of data, used as inputs to a Gaussian dispersion model, to the magnitude of the error bounds of the model output. The research addresses the uncertainty in estimating the maximum concentrations from elevated buoyant sources duri...

225

A Mediterranean sea level reconstruction (1950–2008) with error budget estimates  

Microsoft Academic Search

Reconstructed sea level fields are commonly obtained by using techniques that combine long-term records from coastal and island tide gauges with spatial covariance structures determined from recent altimetric observations. In this paper we estimate the error budget of the Mediterranean sea level reconstructions based on a reduced space optimal interpolation. In particular, we characterize the baseline error of the methodology,

F. M. Calafat; G. Jordà

2011-01-01

226

Errors and parameter estimation in precipitation-runoff modeling 2. Case study.  

USGS Publications Warehouse

A case study is presented which illustrates some of the error analysis, sensitivity analysis, and parameter estimation procedures reviewed in the first part of this paper. It is shown that those procedures, most of which come from statistical nonlinear regression theory, are invaluable in interpreting errors in precipitation-runoff modeling and in identifying appropriate calibration strategies. -Author

Troutman, B. M.

1985-01-01

227

IMPROVING PRECISION AND REDUCING BIAS IN BIOLOGICAL SURVEYS: ESTIMATING FALSE-NEGATIVE ERROR RATES  

Microsoft Academic Search

The use of presence\\/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. We show that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models using simulated data. Then we

Andrew J. Tyre; Brigitte Tenhumberg; Scott A. Field; Darren Niejalke; Kirsten Parris; Hugh P. Possingham

2003-01-01

228

Accurate and fast methods to estimate the population mutation rate from error prone sequences  

Microsoft Academic Search

BACKGROUND: The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error

Bjarne Knudsen; Michael M. Miyamoto

2009-01-01

229

A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings  

ERIC Educational Resources Information Center

The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error

Lee, Guemin; Lewis, Daniel M.

2008-01-01

230

Error estimate of short-range force calculation in inhomogeneous molecular systems.  

PubMed

In the present paper, we develop an accurate error estimate of the nonbonded short-range interactions for inhomogeneous molecular systems. The root-mean-square force error is proved to decompose into three additive parts: the homogeneity error, the inhomogeneity error, and the correlation error. The magnitude of the inhomogeneity error, which is dominant in the interfacial regions, can be more than one order of magnitude larger than the homogeneity error. This is the reason why a standard simulation with a fixed cutoff radius is either less accurate if the cutoff is too small, or wastes considerable computational effort if the cutoff is too large. Therefore, based on the error estimate, adaptive cutoff and long-range force correction methods are proposed to boost the efficiency and accuracy of the simulation, respectively. A way of correcting the long-range contribution to the pressure is also developed for inhomogeneous systems. The effectiveness of the proposed methods is demonstrated by molecular dynamics simulations of liquid-vapor equilibrium and nanoscale particle collision. The different roles of the homogeneity error and the inhomogeneity error are also discussed. PMID:23005879
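
For a homogeneous region, the cutoff error of a truncated pair force is commonly estimated from the density-weighted tail integral of the squared force. The sketch below uses that standard homogeneity estimate, with placeholder Lennard-Jones parameters in reduced units, to pick the smallest cutoff meeting a force-error tolerance, in the spirit of the adaptive-cutoff idea described above; the formula and search grid are illustrative assumptions, not the paper's exact expressions:

```python
import numpy as np
from scipy.integrate import quad

def lj_force(r, eps=1.0, sigma=1.0):
    """Magnitude of the Lennard-Jones pair force, F = -dU/dr."""
    return 24.0 * eps * (2.0 * (sigma / r) ** 12 - (sigma / r) ** 6) / r

def homogeneity_error(rc, rho):
    """RMS force error from truncating the pair force at rc in a
    homogeneous fluid of number density rho (illustrative estimate)."""
    integral, _ = quad(lambda r: lj_force(r) ** 2 * r * r, rc, np.inf)
    return np.sqrt(4.0 * np.pi * rho * integral)

def adaptive_cutoff(rho, tol, rc_grid=np.linspace(2.0, 6.0, 401)):
    """Smallest cutoff on the grid whose estimated error is below tol."""
    for rc in rc_grid:
        if homogeneity_error(rc, rho) < tol:
            return rc
    return rc_grid[-1]
```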

Wang, Han; Schütte, Christof; Zhang, Pingwen

2012-08-08

231

Systematic residual ionospheric errors in radio occultation data and a potential way to minimize them  

NASA Astrophysics Data System (ADS)

Radio occultation (RO) sensing is used to probe the earth's atmosphere in order to obtain information about its physical properties. With the main interest in the parameters of the neutral atmosphere, there is a need to correct for the ionospheric contribution to the bending angle. Since this correction is an approximation to first order, there exists an ionospheric residual, which can be expected to be larger when the ionization is high (day versus night, high versus low solar activity). The ionospheric residual systematically affects the accuracy of the atmospheric parameters at low altitudes; at high altitudes (above 25-30 km) it is even an important error source. In climate applications this could lead to a time-dependent bias which induces wrong trends in atmospheric parameters at high altitudes. The first goal of our work was to study and characterize this systematic residual error. In a second step we developed a simple correction method, based purely on observational data, to reduce this residual for large ensembles of RO profiles. In order to tackle this problem, we analyzed the bending angle bias of CHAMP and COSMIC RO data from 2001-2011. We observed that the nighttime bending angle bias stays constant over the whole period of 11 yr, while the daytime bias increases from low to high solar activity. As a result, the difference between the nighttime and daytime bias increases from about -0.05 μrad to -0.4 μrad. This behavior paves the way to correcting the solar cycle dependent bias of daytime RO profiles. In order to test the newly developed correction method we performed a simulation study, which allowed us to separate the influence of the ionosphere and the neutral atmosphere. In the simulated data we observed a similar increase in the bias from low to high solar activity. In this simulation we performed the climatological ionospheric correction of the bending angle data by using the bending angle bias characteristics of a solar cycle as a correction factor. After the climatological ionospheric correction, the bias of the simulated data improved significantly, not only in the bending angle but also in the retrieved temperature profiles.
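
Operationally, the proposed correction amounts to subtracting a tabulated, solar-activity-dependent bias from each daytime profile. A minimal sketch is below; the bias table, the use of F10.7 as the solar-activity proxy, and the linear interpolation are illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np

def correct_daytime_bending_angle(alpha, solar_flux, bias_table):
    """Climatological ionospheric correction of a daytime bending-angle
    profile (illustrative sketch of the approach described above).

    alpha      : bending-angle profile (rad)
    solar_flux : solar-activity proxy (e.g. F10.7) at observation time
    bias_table : dict mapping tabulated solar-flux levels to the observed
                 daytime-minus-nighttime bending-angle bias (rad)
    """
    levels = np.array(sorted(bias_table))
    biases = np.array([bias_table[l] for l in levels])
    # Interpolate the bias climatology to the current solar activity.
    bias = np.interp(solar_flux, levels, biases)
    return alpha - bias
```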

Danzer, J.; Scherllin-Pirscher, B.; Foelsche, U.

2013-08-01

232

Application of an Empirical Bayesian Technique to Systematic Error Correction and Data Conditioning of Kepler Photometry  

NASA Astrophysics Data System (ADS)

We present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data, in which a subset of highly correlated stars is used to establish the range of "reasonable" robust fit parameters, and hence mitigate the loss of astrophysical signal and the noise injection on transit time scales (<3 d) which afflict Least Squares (LS) fitting. A numerical and empirical approach is taken, in which the Bayesian Prior PDFs are generated from fits to the light curve distributions themselves, versus an analytical approach which uses a Gaussian fit to the Priors. Along with the systematic effects there are also Sudden Pixel Sensitivity Dropouts (SPSDs), resulting in abrupt steps in the light curves that should be removed. A joint fitting technique is therefore presented that simultaneously applies MAP and SPSD removal. The concept is illustrated in detail by applying MAP to publicly available Kepler data, and an overview is given of its application to all Kepler data collected through the present. We show that the light curve correlation matrix after treatment is diagonal, and present diagnostics such as correlation coefficient histograms, singular value spectra, and principal component plots. The benefits of MAP are shown as applied to variable stars with RR Lyrae, harmonic, chaotic, and eclipsing binary waveforms, and we examine the impact of MAP on transit waveforms and the detectability of transiting planets. We conclude with a discussion of current work on selecting input vectors for the design matrix, generating the Prior PDFs, and suppressing high-frequency noise injection with bandpass filtering. Funding for this work is provided by the NASA Science Mission Directorate.

Smith, Jeffrey C.; Jenkins, J. M.; Van Cleve, J. E.; Kolodziejczak, J.; Twicken, J. D.; Stumpe, M. C.; Fanelli, M. N.

2011-05-01

233

EIA Corrects Errors in Its Drilling Activity Estimates Series  

EIA Publications

The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

Information Center

1998-03-01

234

Implicit polynomial representation through a fast fitting error estimation.  

PubMed

This paper presents a simple distance estimation for implicit polynomial fitting. It is computed as the height of a simplex built between the point and the surface (i.e., a triangle in 2-D or a tetrahedron in 3-D), which is used as a coarse but reliable estimation of the orthogonal distance. The proposed distance can be described as a function of the coefficients of the implicit polynomial. Moreover, it is differentiable and has a smooth behavior. Hence, it can be used in any gradient-based optimization. In this paper, its use in a Levenberg-Marquardt framework is shown, which is particularly suited to nonlinear least squares problems. The proposed estimation is a generalization of the gradient-based distance estimation, which is widely used in the literature. Experimental results, both on 2-D and 3-D data sets, are provided. Comparisons with state-of-the-art techniques are presented, showing the advantages of the proposed approach. PMID:21965211
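
For reference, the gradient-based baseline that the paper generalizes is the first-order estimate d(x) ≈ |f(x)| / ||∇f(x)||. A small sketch for a 2-D implicit polynomial follows (the coefficient encoding is an illustrative choice); the unit-circle example at the end shows how coarse the first-order estimate can be away from the curve, which is exactly the weakness such generalizations target:

```python
import numpy as np

def gradient_based_distance(coeffs, x, y):
    """First-order (gradient-weighted) distance from points to the zero set
    of an implicit 2-D polynomial: |f| / ||grad f||.

    coeffs : dict mapping exponent pairs (i, j) to the coefficient of x^i y^j
    """
    f = np.zeros_like(x, dtype=float)
    fx = np.zeros_like(f)
    fy = np.zeros_like(f)
    for (i, j), c in coeffs.items():
        f += c * x ** i * y ** j
        if i > 0:
            fx += c * i * x ** (i - 1) * y ** j
        if j > 0:
            fy += c * j * x ** i * y ** (j - 1)
    return np.abs(f) / np.hypot(fx, fy)

# Unit circle x^2 + y^2 - 1 = 0: the point (2, 0) lies at true distance 1,
# but the first-order estimate returns |3| / 4 = 0.75.
circle = {(2, 0): 1.0, (0, 2): 1.0, (0, 0): -1.0}
print(gradient_based_distance(circle, np.array([2.0]), np.array([0.0])))
```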

Rouhani, Mohammad; Sappa, Angel Domingo

2011-09-29

235

Rainfall Estimation in the Sahel. Part I: Error Function  

Microsoft Academic Search

Rainfall estimation in semiarid regions remains a challenging issue because it displays great spatial and temporal variability and networks available for monitoring are often of low density. This is especially the case in the Sahel, a region of 3 million km2 where the life of populations is still heavily dependent on rain for agriculture. Whatever the data and sensors available

Abdou Ali; Thierry Lebel; Abou Amani

2005-01-01

236

Gap filling strategies and error in estimating annual soil respiration  

Technology Transfer Automated Retrieval System (TEKTRAN)

Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...

237

Error in body weight estimation leads to inadequate parenteral anticoagulation.  

PubMed

Parenteral anticoagulation is a cornerstone in the management of venous and arterial thrombosis. Unfractionated heparin has a wide dose/response relationship, requiring frequent and troublesome laboratorial follow-up. Because of all these factors, low-molecular-weight heparin use has been increasing. Inadequate dosage has been pointed out as a potential problem because the use of subjectively estimated weight instead of real measured weight is common practice in the emergency department (ED). To evaluate the impact of inadequate weight estimation on enoxaparin dosage, we investigated the adequacy of anticoagulation of patients in a tertiary ED where subjective weight estimation is common practice. We obtained the estimated, informed, and measured weight of 28 patients in need of parenteral anticoagulation. Basal and steady-state (after the second subcutaneous shot of enoxaparin) anti-Xa activity was obtained as a measure of adequate anticoagulation. The patients were divided into 2 groups according the anticoagulation adequacy. From the 28 patients enrolled, 75% (group 1, n = 21) received at least 0.9 mg/kg per dose BID and 25% (group 2, n = 7) received less than 0.9 mg/kg per dose BID of enoxaparin. Only 4 (14.3%) of all patients had anti-Xa activity less than the inferior limit of the therapeutic range (<0.5 UI/mL), all of them from group 2. In conclusion, when weight estimation was used to determine the enoxaparin dosage, 25% of the patients were inadequately anticoagulated (anti-Xa activity <0.5 UI/mL) during the initial crucial phase of treatment. PMID:20825842
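
The failure mode studied here is purely arithmetic: a dose computed from a guessed weight may fall below the study's 0.9 mg/kg-per-dose adequacy threshold when re-expressed per kilogram of the patient's real weight. A trivial check (hypothetical helper name; the 0.9 mg/kg threshold is taken from the abstract):

```python
def enoxaparin_underdosed(dose_mg, measured_kg, threshold=0.9):
    """Flag the under-dosing mechanism described above: the administered dose,
    re-expressed per kilogram of the patient's measured weight, is compared
    with the 0.9 mg/kg-per-dose adequacy threshold used in the study."""
    return dose_mg / measured_kg < threshold

# A dose of 1 mg/kg computed from a 70 kg weight estimate, given to a
# patient who actually weighs 85 kg: 70/85 = 0.82 mg/kg, i.e. under-dosed.
print(enoxaparin_underdosed(dose_mg=70.0, measured_kg=85.0))  # True
```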

dos Reis Macedo, Leon Gustavo; de Oliveira, Luciana; Pintão, Maria Carolina; Garcia, Andrea Aparecida; Pazin-Filho, Antônio

2010-04-02

238

Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.  

ERIC Educational Resources Information Center

Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

Olejnik, Stephen F.; Algina, James

1987-01-01

239

Discretization Error Estimation and Exact Solution Generation Using the Method Of Nearby Problems.  

National Technical Information Service (NTIS)

The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fit...

A. Raju; A. J. Sinclair; C. J. Roy; M. J. Kurzen; T. S. Phillips

2011-01-01

240

Poisson Models and Mean-Squared Error for Correlator Estimators of Time Delay,  

National Technical Information Service (NTIS)

A method for modeling large errors in correlation-based time delay estimation is developed in terms of level crossing probabilities. The level crossing interpretation for peak ambiguity leads directly to an exact expression for the probability of large er...

A. O. Hero; S. C. Schwartz

1988-01-01

241

Error analysis for estimation of trace vapor concentration pathlength in stack plumes.  

PubMed

Near-infrared hyperspectral imaging is finding utility in remote sensing applications such as detection and quantification of chemical vapor effluents in stack plumes. Optimizing the sensing system or quantification algorithms is difficult because reference images are rarely well characterized. The present work uses a radiance model for a down-looking scene and a detailed noise model for dispersive and Fourier transform spectrometers to generate well-characterized synthetic data. These data were used with a classical least-squares-based estimator in an error analysis to obtain estimates of different sources of concentration-pathlength quantification error in the remote sensing problem. Contributions to the overall quantification error were the sum of individual error terms related to estimating the background, atmospheric corrections, plume temperature, and instrument signal-to-noise ratio. It was found that the quantification error depended most strongly on errors in the background estimate and next most strongly on instrument signal-to-noise ratio. Decreases in net analyte signal (e.g., due to low analyte absorbance or increasing the number of analytes in the plume) led to increases in the quantification error, as expected. These observations have implications for instrument design and strategies for quantification. The outlined approach could be used to estimate detection limits or perform variable selection for given sensing problems. PMID:14658692
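
The classical least-squares (CLS) estimator at the core of such an analysis is compact: given a matrix of pure-component spectra, the concentration-pathlength vector is the least-squares solution for the measured, background-corrected spectrum. A minimal sketch with synthetic spectra (all names and dimensions illustrative):

```python
import numpy as np

def cls_concentration_pathlength(S, x):
    """Classical least-squares (CLS) estimate of concentration-pathlengths
    from a background-corrected spectrum (sketch).

    S : (n_channels, n_analytes) matrix of pure-component spectra
    x : (n_channels,) measured spectrum
    """
    c_hat, *_ = np.linalg.lstsq(S, x, rcond=None)
    return c_hat

rng = np.random.default_rng(1)
S = rng.random((100, 3))                            # three synthetic analyte spectra
c_true = np.array([0.5, 1.0, 0.2])
x = S @ c_true + 0.01 * rng.standard_normal(100)    # noisy measurement
print(cls_concentration_pathlength(S, x))           # approximately c_true
```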

Gallagher, Neal B; Wise, Barry M; Sheen, David M

2003-06-01

242

Spatio-temporal Error on the Discharge Estimates for the SWOT Mission  

NASA Astrophysics Data System (ADS)

The Surface Water and Ocean Topography (SWOT) mission measures two key quantities over rivers: water surface elevation and slope. Water surface elevation from SWOT will have a vertical accuracy, when averaged over approximately one square kilometer, on the order of centimeters. Over reaches from 1-10 km long, SWOT slope measurements will be accurate to microradians. Elevation (depth) and slope offer the potential to produce discharge as a derived quantity. Estimates of instantaneous and temporally integrated discharge from SWOT data will also contain a certain degree of error. Two primary sources of measurement error exist. The first is the temporal sub-sampling of water elevations. For example, SWOT will sample some locations twice in the 21-day repeat cycle. If these two overpasses occurred during flood stage, an estimate of monthly discharge based on these observations would be much higher than the true value. Likewise, if estimating maximum or minimum monthly discharge, in some cases, SWOT may miss those events completely. The second source of measurement error results from the instrument's capability to accurately measure the magnitude of the water surface elevation. How this error affects discharge estimates depends on errors in the model used to derive discharge from water surface elevation. We present a global distribution of estimated relative errors in mean annual discharge based on a power law relationship between stage and discharge. Additionally, relative errors in integrated and average instantaneous monthly discharge associated with temporal sub-sampling over the proposed orbital tracks are presented for several river basins.
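
The first error source, temporal sub-sampling, is easy to reproduce synthetically: sample a daily discharge series on the days a 21-day repeat orbit would revisit the reach, then compare the resampled mean with the true mean. A toy sketch (the hydrograph shape, the two overpass days per cycle, and the annual window are illustrative assumptions):

```python
import numpy as np

def subsampling_error(q_daily, sample_days):
    """Relative error in mean discharge incurred by temporal sub-sampling
    (illustrative sketch of the first error source described above)."""
    q_true = q_daily.mean()
    q_sampled = q_daily[sample_days].mean()
    return (q_sampled - q_true) / q_true

rng = np.random.default_rng(2)
days = np.arange(365)
# Synthetic hydrograph: seasonal cycle plus random day-to-day variability.
q = 100 + 60 * np.sin(2 * np.pi * days / 365) + 40 * rng.random(365)
# Two overpasses per 21-day repeat cycle, e.g. on cycle days 3 and 12.
samples = np.sort(np.concatenate([days[3::21], days[12::21]]))
print(subsampling_error(q, samples))
```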

Biancamaria, S.; Alsdorf, D. E.; Andreadis, K. M.; Clark, E.; Durand, M.; Lettenmaier, D. P.; Mognard, N. M.; Oudin, Y.; Rodriguez, E.

2008-12-01

243

Treatment of systematic errors in the processing of wide-angle sonar sensor data for robotic navigation  

Microsoft Academic Search

A methodology has been developed for the treatment of systematic errors that arise in the processing of sparse sensor data. A detailed application of this methodology to the construction, from wide-angle sonar sensor data, of navigation maps for use in autonomous robotic navigation is presented. In the methodology, a four-valued labeling scheme and a simple logic for label combination are

M. Beckerman; E. M. Oblow

1990-01-01

244

UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS  

EPA Science Inventory

Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

245

Effect of channel estimation error on M-QAM BER performance in Rayleigh fading  

Microsoft Academic Search

We determine the bit-error rate (BER) of multilevel quadrature amplitude modulation (M-QAM) in flat Rayleigh fading with imperfect channel estimates. Despite its high spectral efficiency, M-QAM is not commonly used over fading channels because of the channel amplitude and phase variation. Since the decision regions of the demodulator depend on the channel fading, estimation error of the channel variation can

Xiaoyi Tang; Mohamed-Slim Alouini; Andrea J. Goldsmith

1999-01-01

246

Effective stress-based finite element error estimation for composite bodies  

Microsoft Academic Search

This paper presents a discretization error estimator for displacement-based finite element analysis applicable to multi-material bodies such as composites. The proposed method applies a specific stress continuity requirement across the intermaterial boundary consistent with physical principles. This approach estimates the discretization error by comparing the discontinuous finite element effective stress function with a smoothed (C0 continuous) effective stress function for

S. K. Choudhary; I. R. Grosse

1995-01-01

247

Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.  

SciTech Connect

This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

2006-10-01

248

Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates  

SciTech Connect

We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of the error is proportional to N_h^(-1/2), which are the optimal asymptotics. The methodology is verified with numerical experiments.

Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [Univ de Lyon]; Vassilevski, Yuri [Los Alamos National Laboratory]

2009-01-01

249

An error-weighted regularization algorithm for image motion-field estimation.  

PubMed

Local motion measurement errors are used to guide the global smoothing process in order to preserve motion-field discontinuities. A field-smoothing algorithm based on matching-error weighting is proposed. The added computation is minimal, since it uses byproducts of the local measurement process. The error-weighting functional provides significantly improved motion field estimates, as measured by motion-compensated interpolation performance. However, the mean-square reconstruction error is somewhat higher than that obtained by performing the much more computationally expensive stochastic optimization. PMID:18296212

Zheng, H; Blostein, S D

1993-01-01

250

Measurement of Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter  

NASA Astrophysics Data System (ADS)

The Storage Ring EDM Collaboration was using the Cooler Synchrotron (COSY) and the EDDA detector at the Forschungszentrum Jülich to explore systematic errors in very sensitive storage-ring polarization measurements. Polarized deuterons of 235 MeV were used. The analyzer target was a block of 17 mm thick carbon placed close to the beam so that white noise applied to upstream electrostatic plates increases the vertical phase space of the beam, allowing deuterons to strike the front face of the block. For a detector acceptance that covers laboratory angles larger than 9°, the efficiency for particles to scatter into the polarimeter detectors was about 0.1% (all directions) and the vector analyzing power was about 0.2. Measurements were made of the sensitivity of the polarization measurement to beam position and angle. Both vector and tensor asymmetries were measured using beams with both vector and tensor polarization. Effects were seen that depend upon both the beam geometry and the data rate in the detectors.

Imig, Astrid; Stephenson, Edward

2009-10-01

251

Error Covariance Matrix Estimation of Noisy and Dynamically Coupled Time Series  

NASA Astrophysics Data System (ADS)

We estimate the covariance matrix of the errors in several dynamically coupled time series corrupted by measurement errors. We say that several scalar time series are dynamically coupled if they record the values of measurements of the state variables of the same smooth dynamical system. The estimation of the covariance matrix of the errors is made using a noise reduction algorithm that efficiently exploits the information contained jointly in the dynamically coupled noisy time series. The method is particularly powerful for short length time series with high uncertainties.

Mera, Maria Eugenia; Morán, Manuel

2013-01-01

252

Surgeon error in performing intraoperative estimation of stem anteversion in cementless total hip arthroplasty.  

PubMed

To examine the accuracy of intraoperative estimation of stem anteversion in total hip arthroplasty (THA), we compared the intraoperatively estimated stem anteversion (estimated prosthetic anteversion) to the stem anteversion measured by postoperative computed tomography (true anteversion) in 73 hips in 73 patients. Estimated prosthetic anteversion was significantly greater than true anteversion by 5.8°, and the mean absolute value of surgeon error was 7.3°, ranging from 11° underestimation to 25° overestimation. Surgeons tended to overestimate when the true anteversion was smaller. A multivariate analysis showed that advanced knee osteoarthritis significantly increased surgeon error. These results indicate that estimated prosthetic anteversion was generally larger than true anteversion and that the grade of knee osteoarthritis affected the degree of surgeon error. PMID:23602234

Hirata, Masanobu; Nakashima, Yasuharu; Ohishi, Masanobu; Hamai, Satoshi; Hara, Daisuke; Iwamoto, Yukihide

2013-04-17

253

Explicit a posteriori error estimates for eigenvalue analysis of heterogeneous elastic structures.  

SciTech Connect

An a posteriori error estimator is developed for the eigenvalue analysis of three-dimensional heterogeneous elastic structures. It constitutes an extension of a well-known explicit estimator to heterogeneous structures. We prove that our estimates are independent of the variations in material properties and independent of the polynomial degree of finite elements. Finally, we study numerically the effectivity of this estimator on several model problems.

Walsh, Timothy Francis; Reese, Garth M.; Hetmaniuk, Ulrich L.

2005-07-01

254

Mitigation of Error Propagation in Decision Directed OFDM Channel Tracking Using Generalized M Estimators  

Microsoft Academic Search

We treat decision-directed channel tracking (DDCT) in mobile orthogonal frequency-division multiplexing (OFDM) systems as an outlier-contaminated Gaussian regression problem, where the source of outliers is incorrect symbol decisions. Existing decision-directed estimators such as the expectation-maximization (EM)-based estimators and the 2-D-MMSE estimator do not appropriately downweight incorrect/poor decisions while defining the channel estimator, and hence suffer from error propagation
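
The generalized M-estimation idea, down-weighting observations with large residuals rather than trusting all symbol decisions equally, can be sketched generically with Huber weights and iteratively reweighted least squares. This is a real-valued textbook sketch of the robust-regression machinery, not the paper's channel-tracking estimator:

```python
import numpy as np

def huber_irls(A, y, delta=1.345, n_iter=20):
    """Robust linear regression via iteratively reweighted least squares
    (IRLS) with Huber weights: observations with large residuals, such as
    wrongly decided symbols, are progressively down-weighted."""
    w = np.ones(len(y))
    beta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * A, sw * y, rcond=None)
        r = y - A @ beta
        scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust scale (MAD)
        u = np.abs(r) / scale
        w = np.where(u <= delta, 1.0, delta / u)        # Huber weight function
    return beta

rng = np.random.default_rng(5)
A = rng.standard_normal((200, 4))
beta_true = np.array([1.0, -2.0, 0.5, 3.0])
y = A @ beta_true + 0.1 * rng.standard_normal(200)
y[:20] += 5.0                         # gross outliers, i.e. "bad decisions"
print(huber_irls(A, y))               # close to beta_true
```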

Sheetal Kalyani; Krishnamurthy Giridhar

2007-01-01

255

B-spline goal-oriented error estimators for geometrically nonlinear rods  

NASA Astrophysics Data System (ADS)

We consider goal-oriented a posteriori error estimators for the evaluation of the errors on quantities of interest associated with the solution of geometrically nonlinear curved elastic rods. For the numerical solution of these nonlinear one-dimensional problems, we adopt a B-spline based Galerkin method, a particular case of the more general isogeometric analysis. We propose error estimators using higher order "enhanced" solutions, which are based on the concept of enrichment of the original B-spline basis by means of the "pure" k-refinement procedure typical of isogeometric analysis. We provide several numerical examples for linear and nonlinear output functionals, corresponding to the rotation, displacements and strain energy of the rod, and we compare the effectiveness of the proposed error estimators.

Dedè, L.; Santos, H. A. F. A.

2012-01-01

256

Improved Estimation in Multiple Linear Regression Models with Measurement Error and General Constraint  

PubMed Central

In this paper, we define two restricted estimators for the regression parameters in a multiple linear regression model with measurement errors when prior information about the parameters is available. We then construct two sets of improved estimators, which include the preliminary test estimator, the Stein-type estimator, and the positive-rule Stein-type estimator for both slope and intercept, and examine their statistical properties, such as the asymptotic distributional quadratic biases and the asymptotic distributional quadratic risks. We remove the distributional assumption on the error term, which was generally imposed in the literature, and provide a more general comparison of the quadratic risks of these estimators. Simulation studies illustrate the finite-sample performance of the proposed estimators, which are then used to analyze a dataset from the Nurses' Health Study.

Liang, Hua; Song, Weixing

2009-01-01

257

Estimation of grid-induced errors in computational fluid dynamics solutions using a discrete error transport equation  

NASA Astrophysics Data System (ADS)

Computational fluid dynamics (CFD) has become a widely used tool in research and engineering for the study of a wide variety of problems. However, confidence in CFD solutions is still dependent on comparisons with experimental data. In order for CFD to become a trusted resource, a quantitative measure of error must be provided for each generated solution. Although there are several sources of error, the effects of the resolution and quality of the computational grid are difficult to predict a priori. This grid-induced error is most often mitigated by performing a grid refinement study or using solution-adaptive grid refinement. While these methods are effective, they can also be computationally expensive and even impractical for large, complex problems. This work presents a method for estimating the grid-induced error in CFD solutions of the Navier-Stokes and Euler equations using a single grid and solution or a series of increasingly finer grids and solutions. The method is based on the discrete error transport equation (DETE), which is derived directly from the discretized PDE and provides a value of the error at every cell in the computational grid. The DETE is developed for the two-dimensional, laminar Navier-Stokes and Euler equations within a generalized unstructured finite volume scheme, such that an extension to three dimensions and turbulent flow would follow the same approach. The usefulness of the DETE depends on the accuracy with which the source term, the grid-induced residual, can be modeled. Three different models for the grid-induced residual were developed: the AME model, the PDE model, and the extrapolation model. The AME model consists of the leading terms of the remainder of a simplified modified equation. The PDE model creates a polynomial fit of the CFD solution and then uses the original PDE in differential form to calculate the residual. Both the AME and PDE models are used with a single grid and solution. The extrapolation model uses a fine grid solution to calculate the grid-induced residual on the coarse grid and then extrapolates that residual back to the fine grid. The DETE and residual models were then evaluated for four flow problems: (1) steady flow past a circular cylinder; (2) steady, transonic flow past an airfoil; (3) unsteady flow of an isentropic vortex; and (4) unsteady flow past a circular cylinder with vortex shedding. Results demonstrate the fidelity of the DETE with each residual model, as well as the usefulness of the DETE as a tool for predicting the grid-induced error in CFD solutions.
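
The extrapolation model's core step, evaluating the coarse grid's grid-induced residual from a finer solution and then driving the error equation with it, can be sketched on a 1-D Poisson model problem. This is an illustrative reduction (the thesis works with 2-D finite-volume Navier-Stokes and Euler operators), with hypothetical helper names:

```python
import numpy as np

def dete_error_estimate(u_fine, f_coarse, h):
    """Sketch of the extrapolation-model DETE idea on -u'' = f with
    homogeneous Dirichlet ends: use a fine-grid solution to evaluate the
    coarse grid's grid-induced residual, then solve the discrete error
    transport equation with the same coarse operator."""
    n = f_coarse.size                      # number of interior coarse nodes
    u_c = u_fine[::2]                      # restrict fine solution (injection)
    lap = (u_c[:-2] - 2.0 * u_c[1:-1] + u_c[2:]) / h ** 2
    residual = -lap - f_coarse             # grid-induced residual on the coarse grid
    # DETE: the discretization error satisfies the same discrete operator,
    # driven by the residual: L_H e = residual.
    L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h ** 2
    return np.linalg.solve(L, residual)

N = 31                                     # interior coarse nodes
xc = np.linspace(0.0, 1.0, N + 2)
xf = np.linspace(0.0, 1.0, 2 * N + 3)      # nested fine grid, spacing h/2
h = xc[1] - xc[0]
u_fine = np.sin(np.pi * xf)                # stand-in for a converged fine solution
f_coarse = np.pi ** 2 * np.sin(np.pi * xc[1:-1])
print(dete_error_estimate(u_fine, f_coarse, h)[:3])
```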

Williams, Brandon Riley

258

Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28  

ERIC Educational Resources Information Center

Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

Guo, Hongwen; Sinharay, Sandip

2011-01-01

259

Variance of Weighted Regression Estimators when Sampling Errors are Independent and Heteroscedastic  

Microsoft Academic Search

General results are obtained for an approximation to the variance of a weighted regression estimator in which the weights are sample estimators of unknown unpatterned variances. Independent normally distributed errors are specified for the linear response model used in the development. Applications to four examples of weighted sample means and linear regression in a single factor are studied. The most

T. R. Bement; J. S. Williams

1969-01-01

260

Maximum-likelihood priority-first search decodable codes for combined channel estimation and error correction  

Microsoft Academic Search

The coding technique that combines channel estimation and error correction has received attention recently, and has been regarded as a promising approach to counter the effects of multipath fading. It has been shown by simulation that a proper code design that jointly considers channel estimation can improve the system performance subject to a fixed code rate as compared to a

Chia-Lung Wu; Po-Ning Chen; Yunghsiang S. Han; Ming-Hsin Kuo

2009-01-01

261

Effect of channel estimation error onto the BER performance of PSAM-OFDM in Rayleigh fading  

Microsoft Academic Search

In this paper, the analysis focuses on the degradation of BER performance in Rayleigh fading propagation environments that results from the channel estimation error of pilot symbol assisted modulation (PSAM) in orthogonal frequency division multiplexing (OFDM) systems. This paper first characterizes the distribution of the amplitude and phase estimates using PSAM, and the formula for BER as a function

Jiming Chen; Youxi Tang; Shaoqian Li; Yingtao Li

2003-01-01

262

Influence of errors in radar rainfall estimates on hydrological modeling prediction uncertainty  

NASA Astrophysics Data System (ADS)

This study aims to assess the impact of a class of radar rainfall errors on the prediction uncertainty of a conceptual water balance model. Uncertainty assessment is carried out by means of the Generalized Likelihood Uncertainty Estimation procedure (GLUE). The effects of model input and structural error are separated, and the potential for compensating errors between them is investigated. It is shown that the radar rainfall bias term operates in a multiplicative sense on the model structural uncertainty, either magnifying or reducing it according to the sign of the bias. The results also show that adjustment of radar rainfall, aimed at removing local biases and reducing random errors, ensures that a larger percentage of the observed flows is enclosed by the uncertainty bounds, with respect to nonadjusted radar input. However, this is obtained at the price of widening the uncertainty bounds. This effect is emphasized with increasing radar beam elevation. As a second step, the impact of radar rainfall estimation error on model parameter distribution and on parameter transferability across sites under the radar umbrella is examined. Radar data at different radar beam elevations are used to simulate radar estimation errors at different distances from the radar site and to analyze the impact of these errors on prediction uncertainty. The results show that distortion of the parameter distribution due to radar error may be considerable, and that adjustment of radar rainfall estimates improves the regionalization potential of radar-based precipitation estimates (at least for ranges less than 70 km).
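
GLUE itself is procedurally simple: draw parameter sets from a prior, keep those whose likelihood measure (often the Nash-Sutcliffe efficiency) exceeds a behavioral threshold, and form likelihood-weighted quantiles of the retained simulations as uncertainty bounds. A minimal sketch, with the simulator, threshold, and NSE likelihood as generic placeholders:

```python
import numpy as np

def glue_bounds(simulate, q_obs, prior_samples, threshold=0.3,
                quantiles=(0.05, 0.95)):
    """Minimal GLUE sketch: behavioral selection plus likelihood-weighted
    prediction bounds. `simulate` maps a parameter vector to a flow series."""
    sims, weights = [], []
    for theta in prior_samples:
        q_sim = simulate(theta)
        nse = 1.0 - np.sum((q_sim - q_obs) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
        if nse > threshold:                       # keep behavioral sets only
            sims.append(q_sim)
            weights.append(nse)
    sims, w = np.asarray(sims), np.asarray(weights, dtype=float)
    w /= w.sum()
    lower, upper = [], []
    for t in range(sims.shape[1]):                # weighted quantiles per step
        order = np.argsort(sims[:, t])
        cum = np.cumsum(w[order])
        lo = min(np.searchsorted(cum, quantiles[0]), len(cum) - 1)
        hi = min(np.searchsorted(cum, quantiles[1]), len(cum) - 1)
        lower.append(sims[order[lo], t])
        upper.append(sims[order[hi], t])
    return np.array(lower), np.array(upper)
```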

Borga, Marco; Degli Esposti, Silvia; Norbiato, Daniele

2006-08-01

263

Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric propagation delays  

Microsoft Academic Search

When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography

Shan Shan; Michael Bevis; Eric Kendrick; Gerald L. Mader; David Raleigh; Kenneth Hudnut; Michael Sartori; David Phillips

2007-01-01

264

A posteriori error estimation by postprocessor independent of method of flowfield calculation  

Microsoft Academic Search

We consider a postprocessor that is able to analyze the flow-field generated by an external (unknown) code so as to determine the error of useful functionals. The residuals engendered by the action of a high-order finite-difference stencil on a numerically computed flow-field are used for adjoint based a posteriori error estimation. The method requires information on the physical model (PDE

A. K. Alekseev; I. M. Navon

2006-01-01

265

Consequences of genotyping errors for estimation of clonality: a case study on Populus euphratica Oliv. (Salicaceae)  

Microsoft Academic Search

A study including eight microsatellite loci for 1,014 trees from seven mapped stands of the partially clonal Populus euphratica was used to demonstrate how genotyping errors influence estimates of clonality. With a threshold of 0 (identical multilocus genotypes constitute one clone) we identified 602 genotypes. A threshold of 1 (compensating for an error in one allele) lowered this number to

M. Schnittler; P. Eusemann

2010-01-01

267

Hyperbolic conservation laws on manifolds. Error estimate for finite volume schemes  

Microsoft Academic Search

Following Ben-Artzi and LeFloch, we consider nonlinear hyperbolic conservation laws posed on a Riemannian manifold, and we establish an L1-error estimate for a class of finite volume schemes allowing for the approximation of entropy solutions to the initial value problem. The error in the L1 norm is of order h^(1/4) at most, where h represents the maximal diameter of elements

Philippe G. LeFloch; Wladimir Neves; Baver Okutmustur

2008-01-01

268

On the asymptotic performance analysis of subspace DOA estimation in the presence of modeling errors: case of MUSIC  

Microsoft Academic Search

This paper provides new analytic expressions for the bias and the root mean square (RMS) error of the estimated direction of arrival (DOA) in the presence of modeling errors. In earlier work, first-order approximations of the RMS error were derived, which are accurate for small enough perturbations. However, the previously available expressions are not able to capture the behavior of

Anne Ferréol; Pascal Larzabal; Mats Viberg

2006-01-01

269

Reasons for software effort estimation error: impact of respondent role, information collection approach, and data analysis method  

Microsoft Academic Search

This study aims to improve analyses of why errors occur in software effort estimation. Within one software development company, we collected information about estimation errors through: 1) interviews with employees in different roles who are responsible for estimation, 2) estimation experience reports from 68 completed projects, and 3) statistical analysis of relations between characteristics of the 68 completed projects and

M. Jorgensen; K. Molokken-Ostvold

2004-01-01

270

Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling  

NASA Astrophysics Data System (ADS)

A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg yr-1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr-1 in North America to 7 Tg yr-1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations strongly question the consistency of transport model errors in current inverse systems. Future inversions should include more accurately prescribed observation covariance matrices in order to limit the impact of transport model errors on estimated methane fluxes.

Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.

2013-10-01

271

Error estimation procedure for large dimensionality data with small sample sizes  

NASA Astrophysics Data System (ADS)

Using multivariate data analysis to estimate the classification error rates and the separability between sets of data samples is a useful tool for understanding the characteristics of data sets. By understanding the classifiability and separability of the data, one can better direct the appropriate resources and effort to achieve the desired performance. The following report describes our procedure for estimating the separability of given data sets. The multivariate tools described in this paper include intrinsic dimensionality estimates, Bayes error estimates, and the Friedman-Rafsky test. These analysis techniques are based on previous work used to evaluate data for synthetic aperture radar (SAR) automatic target recognition (ATR), but the current work is unique in the methods used to analyze large-dimensionality sets with a small number of samples. The results of this report show that our procedure can quantitatively measure the performance between two data sets in the measure space and the feature space with the Bayes error estimation procedure and the Friedman-Rafsky test, respectively. Our procedure, which includes the error estimation and the Friedman-Rafsky test, was used to evaluate SAR data but can serve as an effective way to measure the classifiability of many other multidimensional data sets.

Williams, Arnold; Wagner, Gregory

2009-05-01

272

The role of estimation error in probability density function of soil hydraulic parameters: Pedotop scale  

NASA Astrophysics Data System (ADS)

For the modeling of transport processes and for the prognosis of their results, knowledge of hydrodynamic parameters is required. Soil hydrodynamic parameters are determined in the field by methods based upon certain approximations, and a procedure of inverse solution is applied. The estimate of a parameter P therefore includes an error e. Like other soil properties, hydrodynamic parameters are variable to differing degrees, even over the region of one pedotaxon, and knowledge of their probability density function (PDF) is frequently required. We have used five approximate infiltration equations for the estimation of the sorptivity S and the saturated hydraulic conductivity K. The distribution of both parameters was determined with regard to the type of infiltration equation applied. The PDFs of the parameters were not identical when we compared the parameter estimates derived from the various infiltration equations. As follows from this comparative study, the estimation error e deforms the PDF of the parameter estimates.
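
As an illustration of the kind of inverse estimation involved, one widely used approximate infiltration relation is Philip's two-term equation, I(t) = S*sqrt(t) + A*t, which is linear in the sorptivity S and in the coefficient A (related to the saturated hydraulic conductivity), so both follow from ordinary least squares. This is a generic sketch, not necessarily one of the five equations used in the study:

```python
import numpy as np

def fit_philip(t, I):
    """Estimate sorptivity S and coefficient A from cumulative infiltration
    data using Philip's two-term equation I(t) = S*sqrt(t) + A*t."""
    X = np.column_stack([np.sqrt(t), t])
    (S, A), *_ = np.linalg.lstsq(X, I, rcond=None)
    return S, A

t = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])   # elapsed time, h
I = (2.0 * np.sqrt(t) + 0.5 * t
     + 0.02 * np.random.default_rng(3).standard_normal(t.size))
print(fit_philip(t, I))                          # approximately (2.0, 0.5)
```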

Kutílek, Miroslav; Krejca, Miroslav; Kupcová-Vlašimská, Jana

273

Improvement of absolute accuracy for a multiple bounce reflectometer through a detailed effort to reduce systematic errors.  

PubMed

We have constructed a multiple-bounce reflectometer similar to the one designed by Kelsall. Systematic errors present in the reflectivities measured on our multiple-bounce reflectometer have been significantly reduced for high-reflectivity mirrors. This reduction came about through careful study of all components in our system. The diffraction effects of the apertures in the reflectometer were studied with the aid of a computer program that calculated the radial intensity distribution due to the Fresnel diffraction from each aperture. The systematic errors arising from the attempt to measure a concave spherical mirror were studied and minimized. A second computer program calculated the systematic error introduced by pathlength changes as the number of bounces in the reflectometer increased. The replacement of the sample thermopile detector with a thin-film bolometer, along with other equipment changes, has improved the absolute accuracy to 0.001 for the measured reflectivities, with a demonstrated precision of 0.0003. Previous measurements had been about 0.008 low compared to measurements on the V-W pass reflectometer at China Lake, California. PMID:20125563

Wetzel, M G; Saito, T T; Patterson, S R

1973-07-01

274

Error Analysis for Estimation of Trace Vapor Concentration Pathlength in Stack Plumes  

SciTech Connect

Near-infrared hyperspectral imaging is finding utility in remote sensing applications such as detection and quantification of chemical vapor effluents in stack plumes. Optimizing the sensing system or quantification algorithms is difficult since reference images are rarely well characterized. The present work uses a radiance model for a down-looking scene and a detailed noise model for a dispersive and a Fourier transform spectrometer to generate well-characterized synthetic data. These data were used in conjunction with a classical least-squares-based estimation procedure in an error analysis to provide estimates of different sources of concentration-pathlength quantification error in the remote sensing problem.

Gallagher, Neal B.; Wise, Barry M.; Sheen, David M.

2003-06-01

275

Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries  

SciTech Connect

We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical-relativity-based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criterion based on minimizing these errors; we find that the dominant errors are those in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational-wave searches.

Santamaria, L.; Ohme, F.; Dorband, N.; Moesta, P.; Robinson, E. L.; Krishnan, B. [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Am Muehlenberg 1, D-14476 Golm (Germany); Ajith, P. [LIGO Laboratory, California Institute of Technology, Pasadena, California 91125 (United States); Theoretical Astrophysics, California Institute of Technology, Pasadena, California 91125 (United States); Bruegmann, B. [Theoretisch-Physikalisches Institut, Friedrich Schiller Universitaet Jena, Max-Wien-Platz 1, 07743 Jena (Germany); Hannam, M. [Faculty of Physics, University of Vienna, Boltzmanngasse 5, A-1090 Vienna (Austria); Husa, S.; Pollney, D. [Departament de Fisica, Universitat de les Illes Balears, Carretera Valldemossa km 7.5, E-07122 Palma (Spain); Reisswig, C. [Theoretical Astrophysics, California Institute of Technology, Pasadena, California 91125 (United States); Seiler, J. [NASA Goddard Space Flight Center, Greenbelt, Maryland 20771 (United States)

2010-09-15

276

Correcting the systematic error of the density functional theory calculation: the alternate combination approach of genetic algorithm and neural network  

NASA Astrophysics Data System (ADS)

The alternate combination approach of genetic algorithm and neural network (AGANN) has been presented to correct the systematic error of the density functional theory (DFT) calculation. It treats the DFT as a black box and models the error through external statistical information. As a demonstration, the AGANN method has been applied to the correction of the lattice energies from the DFT calculation for 72 metal halides and hydrides. Through the AGANN correction, the mean absolute value of the relative errors of the calculated lattice energies with respect to the experimental values decreases from 4.93% to 1.20% in the testing set. For comparison, the neural network approach alone reduces the mean value to 2.56%, and the common combination approach of genetic algorithm and neural network brings it to 2.15%. The multiple linear regression method has almost no correction effect here.
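
A hedged sketch of the black-box idea follows, with an ordinary linear model standing in for the genetic-algorithm/neural-network machinery and entirely synthetic descriptors and energies; it shows only the pattern "model the DFT error externally, then subtract the prediction."

    # Hypothetical sketch of black-box error correction: learn the DFT error
    # from external descriptors and subtract the prediction. A linear model
    # stands in for AGANN's GA/NN machinery; all numbers are synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((72, 4))                      # descriptors for 72 compounds
    E_exp = 600 + 200 * rng.random(72)           # "experimental" lattice energies
    E_dft = E_exp + 30 * (X @ np.array([1.0, -1.0, 0.5, 0.2])) \
            + rng.normal(0, 2.0, 72)             # "DFT" values with systematic bias

    w, *_ = np.linalg.lstsq(X, E_dft - E_exp, rcond=None)  # model the error
    E_corr = E_dft - X @ w                                 # corrected energies

    mare = lambda e, ref: np.mean(np.abs((e - ref) / ref)) * 100
    print(f"{mare(E_dft, E_exp):.2f}% -> {mare(E_corr, E_exp):.2f}%")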

Wang, Ting-Ting; Li, Wen-Long; Chen, Zhang-Hui; Miao, Ling

2010-07-01

277

On the Systematic Errors in the Astrometric Catalogues ACR and CMC13 Based on CCD Drift Scanning Observations  

NASA Astrophysics Data System (ADS)

Error analyses are made of ACR (Astrometric Calibration Regions along the celestial equator) and CMC13 (Carlsberg Meridian Catalogue 13), two astrometric catalogues compiled on the basis of CCD drift scanning observations and published respectively before and after 2000. Through a comparison with UCAC2 (the second U.S. Naval Observatory CCD Astrograph Catalogue), the form and size of the errors are analyzed numerically. The main and possible sources of the errors are analyzed from the standpoint of observing mode and data reduction. It is found that there is an evident magnitude difference between ACR and CMC13 in the equatorial direction, that a periodic variation on the scale of the CCD field of view exists along right ascension, and that a systematic variation on the scale of the reduction zone exists along declination.

Jiang, Li-Ping

2008-04-01

278

A note on bias and mean squared error in steady-state quantile estimation  

NASA Astrophysics Data System (ADS)

When using a batch means methodology for estimation of a nonlinear function of a steady-state mean from the output of simulation experiments, it has been shown that a jackknife estimator may reduce the bias and mean squared error (mse) compared to the classical estimator, whereas the average of the classical estimators from the batches (the batch means estimator) has a worse performance from the point of view of bias and mse. In this paper we show that, under reasonable assumptions, the performance of the jackknife, classical, and batch means estimators for the estimation of quantiles of the steady-state distribution exhibits properties similar to those in the case of the estimation of a nonlinear function of a steady-state mean. We present some experimental results from the simulation of the waiting time in queue for an M/M/1 system under heavy traffic.
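
A minimal sketch of the three estimators for a steady-state quantile is given below; the data are a synthetic stand-in for M/M/1 waiting times, and the batch count is arbitrary.

    # Hypothetical sketch: classical, batch-means, and jackknife estimators
    # of a steady-state quantile from simulation output split into batches.
    # Synthetic data; the study itself simulates M/M/1 waiting times.
    import numpy as np

    rng = np.random.default_rng(2)
    output = rng.exponential(scale=2.0, size=10_000)   # stand-in for waiting times
    q, k = 0.9, 10                                     # quantile level, batch count
    batches = output.reshape(k, -1)

    classical = np.quantile(output, q)                 # all data at once
    batch_means = np.quantile(batches, q, axis=1).mean()

    # Jackknife: recompute the classical estimator leaving one batch out
    leave_one_out = np.array([
        np.quantile(np.delete(batches, i, axis=0).ravel(), q) for i in range(k)
    ])
    jackknife = k * classical - (k - 1) * leave_one_out.mean()
    print(classical, batch_means, jackknife)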

Muñoz, David F.; Ramírez-López, Adán

2013-10-01

279

Apparent anomalies in nuclear feulgen-DNA contents. Role of systematic microdensitometric errors  

Microsoft Academic Search

The Feulgen-DNA contents of human leukocytes, sperm, and oral squames were investigated by scanning and integrating microdensitometry, both with and without correction for residual distribution error and glare. Maximally stained sperm had absorbances which at λmax exceeded the measuring range of the Vickers M86 microdensitometer; this potential source of error could be avoided either by using shorter hydrolysis

K. S. Bedi; D. J. GOLDSTEIN

1976-01-01

280

Systematic error in behavioural measurement: Comparing results from interview and time budget studies  

Microsoft Academic Search

Data collected by survey methodology are sensitive to measurement errors. Factors of memory, understanding, and willingness to respond truthfully distort the quality of results. In this paper, time diaries were used as a quality check on results obtained by direct interviews and questionnaires. Data are based on surveys carried out by Statistics Finland. Comparison showed that measurement error varied considerably

Iiris Niemi

1993-01-01

281

Estimation and testing of higher-order spatial autoregressive panel data error component models  

NASA Astrophysics Data System (ADS)

This paper develops an estimator for higher-order spatial autoregressive panel data error component models with spatial autoregressive disturbances, SARAR( R, S). We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define a generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators, derive their joint asymptotic distribution, and provide Monte Carlo evidence on their small sample performance.

Badinger, Harald; Egger, Peter

2013-10-01

282

A posteriori error estimation of the representer method for single-phase Darcy flow  

Microsoft Academic Search

A framework for goal-oriented a posteriori error estimation is developed for the representer method, a data assimilation scheme due to Bennett [A.F. Bennett, Inverse Methods in Physical Oceanography, Cambridge University Press, New York, NY, 1992]. This framework is used to derive an estimate in the case of a mixed finite element approximation of a linear parabolic equation modeling single-phase Darcy

John Baird; Clint Dawson

2007-01-01

283

Error estimates of triangular finite elements under a weak angle condition  

NASA Astrophysics Data System (ADS)

In this note, by analyzing the interpolation operator of Girault and Raviart given in [V. Girault, P.A. Raviart, Finite element methods for Navier-Stokes equations, Theory and algorithms, in: Springer Series in Computational Mathematics, Springer-Verlag, Berlin,1986] over triangular meshes, we prove optimal interpolation error estimates for Lagrange triangular finite elements of arbitrary order under the maximal angle condition in a unified and simple way. The key estimate is only an application of the Bramble-Hilbert lemma.
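
For orientation, the kind of optimal interpolation estimate at stake has the standard form below (a generic statement under the usual smoothness assumptions, not the paper's exact theorem):

    \[
      \lvert u - I_h u \rvert_{H^1(T)} \;\le\; C\, h_T^{\,k}\, \lvert u \rvert_{H^{k+1}(T)},
    \]

where $I_h$ is the Lagrange interpolant of order $k$ on a triangle $T$ of diameter $h_T$, and the constant $C$ depends on the maximal angle of $T$ but requires no minimal angle bound.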

Mao, Shipeng; Shi, Zhongci

2009-08-01

284

Branch length estimation and divergence dating: estimates of error in Bayesian and maximum likelihood frameworks  

PubMed Central

Background Estimates of divergence dates between species improve our understanding of processes ranging from nucleotide substitution to speciation. Such estimates are frequently based on molecular genetic differences between species; therefore, they rely on accurate estimates of the number of such differences (i.e. substitutions per site, measured as branch length on phylogenies). We used simulations to determine the effects of dataset size, branch length heterogeneity, branch depth, and analytical framework on branch length estimation across a range of branch lengths. We then reanalyzed an empirical dataset for plethodontid salamanders to determine how inaccurate branch length estimation can affect estimates of divergence dates. Results The accuracy of branch length estimation varied with branch length, dataset size (both number of taxa and sites), branch length heterogeneity, branch depth, dataset complexity, and analytical framework. For simple phylogenies analyzed in a Bayesian framework, branches were increasingly underestimated as branch length increased; in a maximum likelihood framework, longer branch lengths were somewhat overestimated. Longer datasets improved estimates in both frameworks; however, when the number of taxa was increased, estimation accuracy for deeper branches was less than for tip branches. Increasing the complexity of the dataset produced more misestimated branches in a Bayesian framework; however, in an ML framework, more branches were estimated more accurately. Using ML branch length estimates to re-estimate plethodontid salamander divergence dates generally resulted in an increase in the estimated age of older nodes and a decrease in the estimated age of younger nodes. Conclusions Branch lengths are misestimated in both statistical frameworks for simulations of simple datasets. However, for complex datasets, length estimates are quite accurate in ML (even for short datasets), whereas few branches are estimated accurately in a Bayesian framework. Our reanalysis of empirical data demonstrates the magnitude of effects of Bayesian branch length misestimation on divergence date estimates. Because the length of branches for empirical datasets can be estimated most reliably in an ML framework when branches are <1 substitution/site and datasets are ≥1 kb, we suggest that divergence date estimates using datasets, branch lengths, and/or analytical techniques that fall outside of these parameters should be interpreted with caution.

2010-01-01

285

Analysis of radar altitude estimation error using radio refractive index in Korea  

Microsoft Academic Search

This paper presents statistical analysis of radar altitude estimation error using the radio refractive index (RRI), N, in Korea. RRIs are derived from two stations, Sokcho and Baengnyeongdo, which are placed in the east and west of Korea on the same latitude, respectively. The seasonal mean values of the RRI are large in summer and small in winter, and regional

SungHoon Moon; Hyun Wook Moon; Jae Rim Oh; Young Joong Yoon; Jong Hyun Lee

2011-01-01

286

A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series  

ERIC Educational Resources Information Center

Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
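
The "sandwich" form alluded to in the title is, generically (standard notation, not necessarily the authors'):

    \[
      \widehat{\operatorname{Var}}(\hat\theta) = A^{-1} B A^{-1},
      \qquad
      A = -\frac{1}{n}\sum_{i} \frac{\partial^{2} \ell_i}{\partial\theta\,\partial\theta^{\top}},
      \qquad
      B = \frac{1}{n}\sum_{i} \frac{\partial \ell_i}{\partial\theta}
          \frac{\partial \ell_i}{\partial\theta}^{\top},
    \]

where the $\ell_i$ are casewise log-likelihoods; for dependent time series data the "meat" $B$ must be replaced by an autocorrelation-consistent estimate, which is the kind of modification the article is concerned with.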

Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

2011-01-01

287

A Cascade Algorithm for Estimating and Compensating Motion Error for Synthetic Aperture Sonar Imaging  

Microsoft Academic Search

This paper presents a three-level cascade algorithm for estimating and compensating platform motion errors during synthetic aperture sonar imaging. By using a multiple element receiving array, physical aperture images can be produced from each transmit burst. With proper spatial sampling, there is sufficient redundancy in successive physical aperture images to extract motion parameters that can be applied to the image

John M. Silkaitis; Brett L. Douglas; Hua Lee

1994-01-01

288

Error Estimates for the Numerical Approximation of a Semilinear Elliptic Control Problem  

Microsoft Academic Search

We study the numerical approximation of distributed nonlinear optimal control problems governed by semilinear elliptic partial differential equations with pointwise constraints on the control. Our main results are error estimates for optimal controls in the maximum norm. Characterization results are stated for optimal and discretized optimal controls. Moreover, the uniform convergence of discretized controls to optimal controls is proven under

EDUARDO CASAS

2002-01-01

289

Error Estimates for the Numerical Approximation of Boundary Semilinear Elliptic Control Problems  

Microsoft Academic Search

We study the numerical approximation of boundary optimal control problems governed by semilinear elliptic partial differential equations with pointwise constraints on the control. The analysis of the approximate control problems is carried out. The uniform convergence of discretized controls to optimal controls is proven under natural assumptions by taking piecewise constant controls. Finally, error estimates are established and some numerical

Eduardo Casas; Mariano Mateos; Fredi TrÖltzsch

2005-01-01

290

Error Estimates for Interpolation by Compactly Supported Radial Basis Functions of Minimal Degree  

Microsoft Academic Search

We consider error estimates for interpolation by a special class of compactly supported radial basis functions. These functions consist of a univariate polynomial within their support and are of minimal degree depending on space dimension and smoothness. Their associated “native” Hilbert spaces are shown to be norm-equivalent to Sobolev spaces. Thus we can derive approximation orders for functions from Sobolev

Holger Wendland

1998-01-01

291

Error Estimates for Interpolation By Compactly Supported Radial Basis Functions of Minimal Degree  

Microsoft Academic Search

We consider error estimates for the interpolation by a special class of compactly supported radial basis functions. These functions consist of a univariate polynomial within their support and are of minimal degree depending on space dimension and smoothness. Their associated "native" Hilbert spaces are shown to be norm-equivalent to Sobolev spaces. Thus we can derive approximation orders for functions from Sobolev spaces which

Holger Wendland

1997-01-01

292

User-Defined Expected Error Rate in OCR Postprocessing by Means of Automatic Threshold Estimation  

Microsoft Academic Search

In this work, a method for the automatic estimation of a threshold that allows the user of an OCR system to define an expected error rate is presented. When the OCR output is post-processed using a language model, a probability, a reliability index (or a “transformation cost”) is usually obtained, reflecting the likelihood (or its inverse) that the string of

J. Ramon Navarro-Cerdan; Joaquim Arlandis; Juan Carlos Pérez-Cortes; Rafael Llobet

2010-01-01

293

Opening the Black Box: Current (and Future) Error-Estimation Techniques in CIAO  

NASA Astrophysics Data System (ADS)

Many astronomers use programs in standard packages to fit parametrized models to their data and to estimate the errors for each model parameter value. But how does one determine if a chosen error-estimation method will yield statistically valid results? In this talk, we review error-estimation techniques, and discuss the conditions that must be met for each to be used properly. We begin with standard frequentist methods currently available to users of Sherpa, the fitting and modeling program of the CIAO software package: uncertainty (varying one parameter's value with all other parameter values fixed); projection (varying one parameter's value with all other parameter values allowed to float to new best-fit values); and covariance (estimating errors using the covariance matrix). We then examine other methods that we plan to incorporate into future versions of CIAO. These include likelihood-based, non-parametric techniques that deal with censored data (survival analysis), as well as Bayesian-based methods such as marginalization (the integration of the likelihood surface in parameter space) and Markov-Chain Monte Carlo (MCMC), the latter of which is especially suitable for complex problems that do not lend themselves to analytic solutions.
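
Of the frequentist techniques listed, the covariance method is the simplest to sketch: approximate the fit statistic as quadratic near the best fit and read 1-sigma errors from the covariance matrix. Everything below (model, data, helper names) is an invented illustration, not Sherpa's API.

    # Hypothetical sketch of covariance-based error estimation: near the
    # best fit, the chi-square surface is roughly quadratic; inverting its
    # Hessian gives the parameter covariance, whose diagonal square roots
    # are 1-sigma errors. Synthetic straight-line fit for illustration.
    import numpy as np

    def chi2(params, x, y, sigma):
        a, b = params
        return np.sum(((y - (a * x + b)) / sigma) ** 2)

    def hessian(f, p, eps=1e-4):
        # Central finite differences for all second partial derivatives
        p = np.asarray(p, float)
        n = len(p)
        H = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                pp = [p.copy() for _ in range(4)]
                pp[0][i] += eps; pp[0][j] += eps
                pp[1][i] += eps; pp[1][j] -= eps
                pp[2][i] -= eps; pp[2][j] += eps
                pp[3][i] -= eps; pp[3][j] -= eps
                H[i, j] = (f(pp[0]) - f(pp[1]) - f(pp[2]) + f(pp[3])) / (4 * eps**2)
        return H

    x = np.linspace(0, 10, 50)
    y = 2.0 * x + 1.0 + np.random.default_rng(3).normal(0, 0.5, 50)
    best = np.array([2.0, 1.0])                    # pretend best-fit values
    H = hessian(lambda p: chi2(p, x, y, 0.5), best)
    cov = 2.0 * np.linalg.inv(H)                   # covariance = 2 H^{-1} for chi^2
    print("1-sigma errors:", np.sqrt(np.diag(cov)))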

Freeman, P. E.; Kashyap, V.; Siemiginowska, A.

2000-12-01

294

Utilization of the radar polarimetric covariance matrix for polarization error and precipitation canting angle estimation  

Microsoft Academic Search

The 3×3 radar polarimetric covariance matrix provides a complete set of measurements from distributed particles. Via optimum polarization theory, radar system polarization errors and particle orientation distribution parameters can be estimated. While the theory of covariance matrices and associated optimum polarizations for random media have been known for some time, the application to retrieval of microphysical information of precipitation is

J. C. Hubbert; V. N. Bringi

2003-01-01

295

Error estimate for the upwind finite volume method for nonlinear scalar conservation law  

Microsoft Academic Search

In this paper we estimate the error of upwind first order finite volume schemes applied to scalar conservation laws. As a first step we consider standard upwind and flux finite volume scheme discretization of a linear equation with space variable coefficients in conservation form. We prove that, in spite of their lack of consistency, both schemes lead to a

Daniel Bouche; Jean-Michel Ghidaglia; Frederic P. Pascal

296

A reduced successive quadratic programming strategy for errors-in-variables estimation  

Microsoft Academic Search

Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and

I.-B. Tjoa; L. T. Biegler

1992-01-01

297

Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis  

NASA Astrophysics Data System (ADS)

We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics ``Enrico Fermi'', Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
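
The core Green-Kubo computation (without the paper's replica averaging and error bounds) can be sketched as follows; the AR(1) "flux" series is synthetic, chosen because its autocorrelation integral is known in closed form.

    # Hypothetical sketch of a Green-Kubo estimate: the transport coefficient
    # is proportional to the time integral of the flux autocorrelation. An
    # AR(1) series stands in for the physical flux; all values are invented.
    import numpy as np

    rng = np.random.default_rng(4)
    n, dt, phi = 100_000, 0.001, 0.99
    J = np.empty(n)
    J[0] = 0.0
    for i in range(1, n):                       # synthetic flux with known memory
        J[i] = phi * J[i - 1] + rng.standard_normal()
    J -= J.mean()

    lags = 2_000
    acf = np.array([np.dot(J[:n - k], J[k:]) / (n - k) for k in range(lags)])

    coeff = dt * acf.sum()                      # rectangle-rule time integral
    print(coeff, acf[0] * dt / (1 - phi))       # numeric vs analytic AR(1) value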

Jones, Reese E.; Mandadapu, Kranthi K.

2012-04-01

298

Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis.  

PubMed

We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently. PMID:22519310

Jones, Reese E; Mandadapu, Kranthi K

2012-04-21

299

Error and bias in under-5 mortality estimates derived from birth histories with small sample sizes  

PubMed Central

Background Estimates of under-5 mortality at the national level for countries without high-quality vital registration systems are routinely derived from birth history data in censuses and surveys. Subnational or stratified analyses of under-5 mortality could also be valuable, but the usefulness of under-5 mortality estimates derived from birth histories from relatively small samples of women is not known. We aim to assess the magnitude and direction of error that can be expected for estimates derived from birth histories with small samples of women using various analysis methods. Methods We perform a data-based simulation study using Demographic and Health Surveys. Surveys are treated as populations with known under-5 mortality, and samples of women are drawn from each population to mimic surveys with small sample sizes. A variety of methods for analyzing complete birth histories and one method for analyzing summary birth histories are used on these samples, and the results are compared to corresponding true under-5 mortality. We quantify the expected magnitude and direction of error by calculating the mean error, mean relative error, mean absolute error, and mean absolute relative error. Results All methods are prone to high levels of error at the smallest sample size with no method performing better than 73% error on average when the sample contains 10 women. There is a high degree of variation in performance between the methods at each sample size, with methods that contain considerable pooling of information generally performing better overall. Additional stratified analyses suggest that performance varies for most methods according to the true level of mortality and the time prior to survey. This is particularly true of the summary birth history method as well as complete birth history methods that contain considerable pooling of information across time. Conclusions Performance of all birth history analysis methods is extremely poor when used on very small samples of women, both in terms of magnitude of expected error and bias in the estimates. Even with larger samples there is no clear best method to choose for analyzing birth history data. The methods that perform best overall are the same methods where performance is noticeably different at different levels of mortality and lengths of time prior to survey. At the same time, methods that perform more uniformly across levels of mortality and lengths of time prior to survey also tend to be among the worst performing overall.
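
The four summary metrics named above are elementary; a sketch with invented numbers:

    # Hypothetical sketch of the four error summaries used above, comparing
    # under-5 mortality estimates against the known "true" population value.
    import numpy as np

    est = np.array([0.061, 0.055, 0.080, 0.047])   # illustrative 5q0 estimates
    true = np.array([0.058, 0.058, 0.058, 0.058])  # known value in the simulation

    err = est - true
    print("mean error:                  ", err.mean())
    print("mean relative error:         ", (err / true).mean())
    print("mean absolute error:         ", np.abs(err).mean())
    print("mean absolute relative error:", np.abs(err / true).mean())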

2013-01-01

300

Systematic evaluation of errors occurring during the preparation of intravenous medication  

Microsoft Academic Search

Introduction: Errors in the concentration of intravenous medications are not uncommon. We evaluated steps in the infusion-preparation process to identify factors associated with preventable medication errors. Methods: We included 118 health care professionals who would be involved in the preparation of intravenous medication infusions as part of their regular clinical activities. Participants performed 5 infusion-preparation tasks (drug-volume calculation,

Christopher S. Parshuram; Winnie Seto; Angela Trope; Gideon Koren MBBS; Andreas Laupacis

2008-01-01

301

Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error.  

PubMed

In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case. PMID:21687809

Carroll, Raymond J; Delaigle, Aurore; Hall, Peter

2011-03-01

302

Patients' willingness and ability to participate actively in the reduction of clinical errors: a systematic literature review.  

PubMed

This systematic review identifies the factors that both support and deter patients from being willing and able to participate actively in reducing clinical errors. Specifically, we add to our understanding of the safety culture in healthcare by engaging with the call for more focus on the relational and subjective factors which enable patients' participation (Iedema, Jorm, & Lum, 2009; Ovretveit, 2009). A systematic search of six databases, ten journals and seven healthcare organisations' web sites resulted in the identification of 2714 studies of which 68 were included in the review. These studies investigated initiatives involving patients in safety or studies of patients' perspectives of being actively involved in the safety of their care. The factors explored varied considerably depending on the scope, setting and context of the study. Using thematic analysis we synthesized the data to build an explanation of why, when and how patients are likely to engage actively in helping to reduce clinical errors. The findings show that the main factors for engaging patients in their own safety can be summarised in four categories: illness; individual cognitive characteristics; the clinician-patient relationship; and organisational factors. We conclude that illness and patients' perceptions of their role and status as subordinate to that of clinicians are the most important barriers to their involvement in error reduction. In sum, patients' fear of being labelled "difficult" and a consequent desire for clinicians' approbation may cause them to assume a passive role as a means of actively protecting their personal safety. PMID:22541799

Doherty, Carole; Stavropoulou, Charitini

2012-04-13

303

Elimination of computational systematic errors and improvements of weather and climate system models in relation to baroclinic primitive equations  

NASA Astrophysics Data System (ADS)

The design of a total energy conserving semi-implicit scheme for the multiple-level baroclinic primitive equations has remained an unsolved problem for a long time. In this work, however, we follow a perfectly energy-conserving semi-implicit scheme for an ECMWF (European Centre for Medium-Range Weather Forecasts) type sigma-coordinate primitive equation model which has recently been successfully formulated. Some real-data contrast tests between the model with the new conserving scheme and that with the ECMWF-type global spectral semi-implicit scheme show that the RMS error of the averaged forecast height at 850 hPa is clearly improved after the first week of integration, the reduction reaching 50 percent by the 30th day. Further contrast tests demonstrate that the RMS error of the monthly mean height in the middle and lower troposphere is also largely reduced, and that some well-known systematic defects can be greatly improved. More detailed analysis reveals that part of the positive contribution comes from improvements in the extra-long wave components. This indicates that a remarkable improvement in the level of model climate drift can be achieved by the actual realization of a conserving time-difference scheme, which thereby eliminates a corresponding computational systematic error source/sink found in the traditional type of weather and climate system models currently in use in relation to the baroclinic primitive equations.

Zhong, Q.; Chen, J. T.; Sun, Z. L.

2002-11-01

304

An improved approach for estimating observation and model error parameters in soil moisture data assimilation  

NASA Astrophysics Data System (ADS)

The accurate specification of observing and/or modeling error statistics presents a remaining challenge to the successful implementation of many land data assimilation systems. Recent work has developed adaptive filtering approaches that address this issue. However, such approaches possess a number of known weaknesses, including a required assumption of serially uncorrelated error in assimilated observations. Recent validation results for remotely sensed surface soil moisture retrievals call this assumption into question. Here we propose and test an alternative system for tuning a soil moisture data assimilation system, which is robust to the presence of autocorrelated observing error. The approach is based on the application of a triple collocation approach to estimate the error variance of remotely sensed surface soil moisture retrievals. Using this estimate, the variance of assumed modeling perturbations is tuned until normalized filtering innovations have a temporal variance of one. Real data results over three highly instrumented watershed sites in the United States demonstrate that this approach is superior to a classical tuning strategy based on removing the serial autocorrelation in Kalman filtering innovations and nearly as accurate as a calibrated Colored Kalman filter in which autocorrelated observing errors are treated optimally.
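
The triple collocation step can be sketched compactly; the assumption of mutually independent errors in the three estimates, and all numbers below, belong to the example, not to the study's data.

    # Hypothetical sketch of triple collocation: given three collocated
    # estimates (x, y, z) of the same truth with mutually independent
    # errors, the error variance of x is <(x - y)(x - z)> after mean
    # removal. Synthetic soil-moisture-like data for illustration.
    import numpy as np

    rng = np.random.default_rng(5)
    truth = 0.25 + 0.05 * rng.standard_normal(5_000)
    x = truth + 0.02 * rng.standard_normal(5_000)   # e.g., satellite retrieval
    y = truth + 0.03 * rng.standard_normal(5_000)   # e.g., model estimate
    z = truth + 0.01 * rng.standard_normal(5_000)   # e.g., in situ sensor

    x0, y0, z0 = x - x.mean(), y - y.mean(), z - z.mean()
    var_ex = np.mean((x0 - y0) * (x0 - z0))
    print(np.sqrt(var_ex))   # should approach the true 0.02 error level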

Crow, W. T.; van den Berg, M. J.

2010-12-01

305

Estimation of local error by a neural model in an inverse scattering problem  

NASA Astrophysics Data System (ADS)

Characterization of optical gratings by resolution of the inverse scattering problem has become a widely used tool, known as a non-destructive, rapid and non-invasive method, in contrast to microscopic characterizations. A neural model is generally used for this purpose and has shown better results in comparison with other regression methods. The neural network learns the relationship between the optical signature and the corresponding profile shape. The performance of such a non-linear regression method is usually estimated by the root mean square error calculated on a data set not involved in the training process. However, this error estimation is not very significant and tends to flatten the error in the different areas of variable space. We introduce, in this paper, the calculation of local error for each geometrical parameter representing the profile shape. For this purpose a second neural network is implemented to learn the variance of results obtained by the first one. A comparison with the root mean square error confirms a gain of local precision. Finally, the method is applied in the optical characterization of a semi-conductor grating with a 1 μm period.

Robert, S.; Mure-Rauvaud, A.; Thiria, S.; Badran, F.

2005-07-01

306

An error estimation method for precipitation and temperature projections for future climates  

NASA Astrophysics Data System (ADS)

Projections of precipitation and temperature from Global Climate Models (GCMs) are generally the basis for assessment of the impact of climate change on water resources. The reliability of such assessments, however, is questionable, since GCM projections are subject to uncertainties arising from inaccuracies in the models, greenhouse gas emission scenarios, and initial conditions (or ensemble runs) used. The purpose of the present study is to quantify these sources of uncertainty in future precipitation and temperature projections from GCMs. To this end, we propose a method to estimate a measure of the associated uncertainty (or error), the square root of error variance (SREV), that varies with space and time as a function of the GCM being assessed. The method is applied to estimate uncertainty in monthly precipitation and temperature outputs from six GCMs for the period 2001-2099. The results indicate that, for both precipitation and temperature, uncertainty due to model structure is the largest source of uncertainty. Scenario uncertainty increases, especially for temperature, in the future due to divergence of the three emission scenarios analyzed. It is also found that ensemble run uncertainty is more important in precipitation simulation than in temperature simulation. Estimation of uncertainty in both space and time sheds light on the spatial and temporal patterns of uncertainties in GCM outputs. The generality of this error estimation method also allows its use for uncertainty estimation in any other output from GCMs, providing an effective platform for risk-based assessments of any alternate plans or decisions that may be formulated using GCM simulations.

Woldemeskel, F. M.; Sharma, A.; Sivakumar, B.; Mehrotra, R.

2012-11-01

307

A Mediterranean sea level reconstruction (1950-2008) with error budget estimates  

NASA Astrophysics Data System (ADS)

Reconstructed sea level fields are commonly obtained by using techniques that combine long-term records from coastal and island tide gauges with spatial covariance structures determined from recent altimetric observations. In this paper we estimate the error budget of the Mediterranean sea level reconstructions based on a reduced space optimal interpolation. In particular, we characterize the baseline error of the methodology, which is linked to the capacity of tide gauges to capture open sea processes and to the representativity of the selected EOFs. Also, we analyze the impact of the non-stationarity of the EOFs and the uneven tide gauge spatial distribution. Results suggest that the baseline error is the dominant contribution in most areas of the Mediterranean (average value of 2.7 cm). In particular, the error due to the truncation of the EOFs is the largest contribution to the baseline error. The other error sources have a more localized impact, which can be important in certain areas with atypical mesoscale activity. The skills of the reconstruction are more dependent on the length of the period than on the particular years used to compute the EOFs. Redundant tide gauges improve the reconstruction only slightly while a single tide gauge at a critical location improves it significantly. In addition we estimate the total error linked to all sources of uncertainty. Finally, we also present an updated sea level reconstruction which includes several improvements with respect to previous reconstructions. The comparison with independent data shows that this new reconstruction provides better results with respect to previous products.

Calafat, F. M.; Jordà, G.

2011-10-01

308

Accurate and fast methods to estimate the population mutation rate from error prone sequences  

PubMed Central

Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error prone data such as expressed sequence tags, low coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite sites model that focuses on only the internal branches of the sample genealogy where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error prone sequences. In particular, we advocate the new maximum likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
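
A hedged sketch of the singleton-free Watterson idea follows (a toy alignment; the folded/unfolded spectrum distinction is glossed over, and the denominator correction is the standard one implied by dropping the i = 1 frequency class):

    # Hypothetical sketch: Watterson-type estimator of theta that ignores
    # singletons to avoid random sequencing errors. Under the infinite
    # sites model E[# sites at derived count i] = theta / i, so dropping
    # the i = 1 class changes the denominator from a_n to a_n - 1.
    # Toy alignment, invented for illustration.
    import numpy as np

    aln = np.array([list(s) for s in [
        "ACGTACGTAA",
        "ACGTACGTAA",
        "ACGAACGTAA",   # shared polymorphism at position 3 ...
        "ACGAACGTAT",   # ... plus a singleton at the last position
    ]])
    n = aln.shape[0]

    def is_shared_polymorphism(col):
        vals, cnt = np.unique(col, return_counts=True)
        return len(vals) > 1 and cnt.min() >= 2   # every base seen at least twice

    S_shared = sum(is_shared_polymorphism(col) for col in aln.T)
    a_n = sum(1.0 / i for i in range(1, n))
    print(S_shared / (a_n - 1.0))                 # theta per locus, singletons excluded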

2009-01-01

309

Doubly quasi-consistent parallel explicit peer methods with built-in global error estimation  

NASA Astrophysics Data System (ADS)

Recently, Kulikov presented the idea of double quasi-consistency, which considerably facilitates global error estimation and control. More precisely, a local error control implemented in such methods plays the part of a global error control at the same time. However, Kulikov studied only Nordsieck formulas and proved that there exists no doubly quasi-consistent scheme among those methods. Here, we prove that the class of doubly quasi-consistent formulas is not empty and present the first example of this sort. This scheme belongs to the family of superconvergent explicit two-step peer methods constructed by Weiner, Schmitt, Podhaisky and Jebens. We present a sample of s-stage doubly quasi-consistent parallel explicit peer methods of order s-1 when s=3. The notion of embedded formulas is utilized to evaluate efficiently the local error of the constructed doubly quasi-consistent peer method and, hence, its global error at the same time. Numerical examples in this paper confirm clearly that the usual local error control implemented in doubly quasi-consistent numerical integration techniques is capable of producing numerical solutions that meet user-supplied accuracy conditions in automatic mode.

Kulikov, G. Yu; Weiner, R.

2010-03-01

310

Error and stability estimates for surface-divergence free RBF interpolants on the sphere  

NASA Astrophysics Data System (ADS)

Recently, a new class of surface-divergence free radial basis function interpolants has been developed for surfaces in ℝ³. In this paper, several approximation results for this class of interpolants will be derived in the case of the sphere, 𝕊². In particular, Sobolev-type error estimates are obtained, as well as optimal stability estimates for the associated interpolation matrices. In addition, a Bernstein estimate and an inverse theorem are also derived. Numerical validation of the theoretical results is also given.

Fuselier, Edward J.; Narcowich, Francis J.; Ward, Joseph D.; Wright, Grady B.

2009-12-01

311

Wrinkles in the rare biosphere: Pyrosequencing errors can lead to artificial inflation of diversity estimates  

SciTech Connect

Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.

Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip

2009-08-01

312

Application of functional error estimates with mixed approximations to plane problems of linear elasticity  

NASA Astrophysics Data System (ADS)

S.I. Repin and his colleagues' studies addressing functional a posteriori error estimates for solutions of linear elasticity problems are further developed. Although the numerical results obtained for planar problems by A.V. Muzalevsky and Repin point to advantages of the adaptive approach used, the degree of overestimation of the absolute error increases noticeably with mesh refinement. This shortcoming is eliminated by using approximations typical of mixed finite element methods. A comparative analysis is conducted for the classical finite element approximations, mixed Raviart-Thomas approximations, and relatively recently proposed Arnold-Boffi-Falk mixed approximations. It is shown that the last approximations are the most efficient.

Frolov, M. E.

2013-07-01

313

Extended scene Shack-Hartmann wavefront sensor algorithm: minimization of scene content dependent shift estimation errors.  

PubMed

An adaptive periodic-correlation (APC) algorithm was developed for use in extended-scene Shack-Hartmann wavefront sensors. It provides high accuracy even when the subimages in a frame captured by a Shack-Hartmann camera are not only shifted but also distorted relative to each other. Recently we found that the shift estimate error of the APC algorithm has a component that depends on the content of the extended scene. In this paper, we assess the amount of that error and propose a method to minimize it. PMID:24085124

Sidick, Erkin

2013-09-10

314

Hyperbolic conservation laws on manifolds. An error estimate for finite volume schemes  

Microsoft Academic Search

Following Ben-Artzi and LeFloch, we consider nonlinear hyperbolic conservation laws posed on a Riemannian manifold, and we establish an L¹-error estimate for a class of finite volume schemes allowing for the approximation of entropy solutions to the initial value problem. The error in the L¹ norm is of order h^{1/4} at most, where h represents the maximal

Philippe G. LeFloch; Baver Okutmustur; Wladimir Neves

2009-01-01

315

Estimating low resolution gravity fields at short time intervals to reduce temporal aliasing errors  

NASA Astrophysics Data System (ADS)

The Gravity Recovery and Climate Experiment (GRACE) satellite mission has been estimating temporal changes in the Earth's gravitational field since its launch in 2002. While it is not yet fully resolved what the limiting source of error is for GRACE, studies on future missions have shown that temporal aliasing errors due to undersampling signals of interest (such as hydrological variations) and errors in atmospheric, ocean, and tide models will be a limiting source of error for missions taking advantage of improved technologies (flying drag-free with a laser interferometer). This paper explores the option of reducing the effects of temporal aliasing errors by directly estimating low degree and order gravity fields at short time intervals, ultimately resulting in data products with improved spatial resolution. Three potential architectures are considered: a single pair of polar orbiting satellites, two pairs of polar orbiting satellites, and a polar orbiting pair of satellites coupled with a lower inclined pair of satellites. Results show that improvements in spatial resolution are obtained when one estimates a low resolution gravity field every two days for the case of a single pair of satellites, and every day for the case of two polar pairs of satellites. However, the spatial resolution for these cases is still lower than that provided by simply destriping and smoothing the solutions via standard GRACE post-processing techniques. Alternately, estimating daily gravity fields for the case of a polar pair of satellites coupled with a lower inclined pair results in solutions with superior spatial resolution than that offered by simply destriping and smoothing the solutions.

Wiese, David N.; Visser, Pieter; Nerem, Robert S.

2011-09-01

316

Systematic Review and Harmonization of Life Cycle GHG Emission Estimates for Electricity Generation Technologies (Presentation)  

SciTech Connect

This PowerPoint presentation, to be given at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses the systematic review and harmonization of life cycle GHG emission estimates for electricity generation technologies.

Heath, G.

2012-06-01

317

Macroscale water fluxes 1. Quantifying errors in the estimation of basin mean precipitation  

NASA Astrophysics Data System (ADS)

Developments in analysis and modeling of continental water and energy balances are hindered by the limited availability and quality of observational data. The lack of information on error characteristics of basin water supply is an especially serious limitation. Here we describe the development and testing of methods for quantifying several errors in basin mean precipitation, both in the long-term mean and in the monthly and annual anomalies. To quantify errors in the long-term mean, two error indices are developed and tested with positive results. The first provides an estimate of the variance of the spatial sampling error of long-term basin mean precipitation obtained from a gauge network, in the absence of orographic effects; this estimate is obtained by use only of the gauge records. The second gives a simple estimate of the basin mean orographic bias as a function of the topographic structure of the basin and the locations of gauges therein. Neither index requires restrictive statistical assumptions (such as spatial homogeneity) about the precipitation process. Adjustments of precipitation for gauge bias and estimates of the adjustment errors are made by applying results of a previous study. Additionally, standard correlation-based methods are applied for the quantification of spatial sampling errors in the estimation of monthly and annual values of basin mean precipitation. These methods also perform well, as indicated by network subsampling tests in densely gauged basins. The methods are developed and applied with data for 175 large (median area of 51,000 km2) river basins of the world for which contemporaneous, continuous (missing fewer than 2% of data values), long-term (median record length of 54 years) river discharge records are also available. Spatial coverage of the resulting river basin data set is greatest in the middle latitudes, though many basins are located in the tropics and the high latitudes, and the data set spans the major climatic and vegetation zones of the world. This new data set can be applied in diagnostic and theoretical studies of water balance of large basins and in the evaluation of performance of global models of land water balance.

Milly, P. C. D.; Dunne, K. A.

2002-10-01

318

Analysis of systematic errors in lateral shearing interferometry for EUV optical testing  

SciTech Connect

Lateral shearing interferometry (LSI) provides a simple means for characterizing the aberrations in optical systems at EUV wavelengths. In LSI, the test wavefront is incident on a low-frequency grating which causes the resulting diffracted orders to interfere on the CCD. Due to its simple experimental setup and high photon efficiency, LSI is an attractive alternative to point diffraction interferometry and other methods that require spatially filtering the wavefront through small pinholes which notoriously suffer from low contrast fringes and improper alignment. In order to demonstrate that LSI can be accurate and robust enough to meet industry standards, analytic models are presented to study the effects of unwanted grating and detector tilt on the system aberrations, and a method for identifying and correcting for these errors in alignment is proposed. The models are subsequently verified by numerical simulation. Finally, an analysis is performed of how errors in the identification and correction of grating and detector misalignment propagate to errors in fringe analysis.

Miyakawa, Ryan; Naulleau, Patrick; Goldberg, Kenneth A.

2009-02-24

319

Systematic parameter errors in binary neutron star inspirals: effects of spin, tides, and high post-Newtonian order terms  

NASA Astrophysics Data System (ADS)

The coalescence of two neutron stars is one of the most important sources for LIGO, Virgo, and other advanced ground-based detectors. Based on a post-Newtonian description of the inspiralling binary, it is generally believed that we will be able to precisely measure the masses of the two neutron stars, and potentially measure (with much less precision) the Love numbers characterizing their tidal distortion (and encoding information about the neutron star radius and equation of state). However, this belief ignores the effects of uncertainties in our knowledge of the waveform. These uncertainties (e.g., the finite order to which we know the post-Newtonian series) can cause a significant systematic offset in the values of the parameters that we extract. I will discuss calculations of these systematic parameter errors for a variety of scenarios.

Favata, Marc

2013-04-01

320

A heteroskedastic error covariance matrix estimator using a first-order conditional autoregressive Markov simulation for deriving asymptotically efficient estimates from ecological sampled Anopheles arabiensis aquatic habitat covariates

Microsoft Academic Search

BACKGROUND: Autoregressive regression coefficients for Anopheles arabiensis aquatic habitat models are usually assessed using global error techniques and are reported as error covariance matrices. A global statistic, however, will summarize error estimates from multiple habitat locations. This makes it difficult to identify where there are clusters of An. arabiensis aquatic habitats of acceptable prediction. It is therefore useful to conduct

Benjamin G Jacob; Daniel A Griffith; Ephantus J Muturi; Erick X Caamano; John I Githure; Robert J Novak

2009-01-01

321

DTI quality control assessment via error estimation from Monte Carlo simulations  

NASA Astrophysics Data System (ADS)

Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.
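
The Rician noise model at the core of such simulations is easy to sketch: the measured DWI magnitude is the modulus of the true signal plus independent Gaussian noise in both quadrature channels. Parameters below are invented.

    # Hypothetical sketch of the Rician noise model used in DWI Monte Carlo
    # simulations: measured magnitude = |S + n_re + i*n_im| with independent
    # Gaussian noise in the real and imaginary channels.
    import numpy as np

    rng = np.random.default_rng(6)
    S, sigma, n_trials = 100.0, 10.0, 50_000        # true signal, noise level

    noisy = np.abs(S + rng.normal(0, sigma, n_trials)
                   + 1j * rng.normal(0, sigma, n_trials))

    # At high SNR the Rician mean exceeds S by roughly sigma^2 / (2 S)
    print(noisy.mean() - S, sigma**2 / (2 * S))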

Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.

2013-03-01

322

Mass load estimation errors utilizing grab sampling strategies in a karst watershed  

USGS Publications Warehouse

Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
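
The load computation underlying such comparisons pairs each grab concentration with the continuous flow record and integrates; a sketch with invented units and numbers (not the study's data):

    # Hypothetical sketch of mass load estimation from sparse grab samples
    # plus continuous flow: interpolate concentration onto the flow record,
    # then integrate C(t) * Q(t). Units and values are invented.
    import numpy as np

    days = np.arange(365)
    Q = 2.0 + 1.5 * np.sin(2 * np.pi * days / 365)      # discharge, m^3/s (daily)
    sample_days = np.arange(15, 365, 30)                # monthly grab samples
    C = 5.0 + np.random.default_rng(7).normal(0, 0.5, sample_days.size)  # mg/L

    C_daily = np.interp(days, sample_days, C)           # fill between grabs
    seconds_per_day = 86_400
    # mg/L * m^3/s = g/s; integrate over the year, convert grams to kg
    load_kg = np.sum(C_daily * Q) * seconds_per_day / 1e3
    print(f"annual load ~ {load_kg:,.0f} kg")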

Fogle, A. W.; Taraba, J. L.; Dinger, J. S.

2003-01-01

323

Error estimation for moment analysis in heavy-ion collision experiment  

NASA Astrophysics Data System (ADS)

Higher moments of conserved quantities are predicted to be sensitive to the correlation length and connected to the thermodynamic susceptibility. Thus, higher moments of net-baryon, net-charge and net-strangeness have been extensively studied theoretically and experimentally to explore the phase structure and bulk properties of QCD matter created in heavy-ion collision experiments. As higher moment analysis is statistics hungry, error estimation is crucial for extracting physics information from the limited experimental data. In this paper, we derive the limit distributions and error formulas, based on the delta theorem in statistics, for various order moments used in experimental data analysis. Monte Carlo simulation is also applied to test the error formulas.
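
As a hedged illustration of the delta-theorem route in its simplest case (the sample variance; not the paper's general formulas):

    \[
      \sqrt{n}\,\bigl(\hat\mu_2 - \mu_2\bigr) \xrightarrow{\;d\;}
      N\!\bigl(0,\ \mu_4 - \mu_2^{2}\bigr)
      \quad\Longrightarrow\quad
      \operatorname{error}(\hat\mu_2) \approx \sqrt{\frac{\mu_4 - \mu_2^{2}}{n}},
    \]

where $\mu_k$ is the $k$-th central moment and $n$ the number of events; error formulas for skewness- and kurtosis-type moment ratios follow by applying the delta theorem to the corresponding functions of moments.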

Luo, Xiaofeng

2012-02-01

324

A posteriori error estimations of a coupled mixed and standard Galerkin method for second order operators  

NASA Astrophysics Data System (ADS)

In this paper, we consider a discretization method proposed by Wieners and Wohlmuth [The coupling of mixed and conforming finite element discretizations, in: Domain Decomposition Methods, vol. 10, Contemporary Mathematics, vol. 218, American Mathematical Society, Providence RI, 1998, pp. 547-554] (see also [R.D. Lazarov, J. Pasciak, P.S. Vassilevski, Iterative solution of a coupled mixed and standard Galerkin discretization method for elliptic problems, Numer. Linear Algebra Appl. 8 (2001) 13-31]) for second order operators, which is a coupling between a mixed method in a subdomain and a standard Galerkin method in the remaining part of the domain. We perform a residual-type a posteriori error analysis of this method by combining arguments from the a posteriori error analysis of Galerkin methods and of mixed methods. The reliability and efficiency of the estimator are proved. Some numerical tests are presented and confirm the theoretical error bounds.

Creuse, Emmanuel; Nicaise, Serge

2008-03-01

325

Evaluation of Temporal and Spatial Distribution of Error in Modeled Evapotranspiration Estimates  

NASA Astrophysics Data System (ADS)

Evapotranspiration (ET) constitutes a significant portion of Florida's water budget, and is second only to rainfall. Consequently, accurate ET estimates are very important for hydrologic modeling work. However, in comparison to rainfall, relatively few ground stations exist for the measurement of this important model input. Consequently, ET estimates produced by models are often subject to error. Satellite-based ET estimates provide an unprecedented opportunity to measure actual ET in sparsely monitored watersheds. They also provide a basis for comparing errors in modeled actual ET estimates that are induced due to the following reasons: 1) spatial interpolation and data-filling methods; 2) inaccurate and sparse meteorological data; and, 3) simplified parameterization schemes. In this study, satellite-based daily actual ET estimates from the Water Conservation Area 3 (WCA-3) watershed in South Florida, USA, are compared with those obtained from a calibrated finite-volume regional hydrologic model for the 1998 and 1999 calendar years. The satellite-based ET estimates used in this study compared well with measured ground-based actual ET data. The WCA-3 watershed is an integral part of Florida's remnant Everglades, and covers an area of approximately 2,400 square kilometers. It is compartmentalized by several levees and road embankments, and drained by several major canals. It also serves as a major habitat for many wildlife species, a source for urban water supply and an emergency storage area for flood water. The WCA-3 is located east of the Big Cypress National Preserve, and north of the Everglades National Park. Despite its significance, WCA-3 has relatively few ET monitoring stations and meteorological stations. Consequently, it is ideally suited for evaluating and quantifying errors in simulated actual ET estimates. The Regional Simulation Model (RSM) developed by the South Florida Water Management District is used for the modeling of these ET estimates. The RSM is an implicit, finite-volume, continuous, distributed, integrated surface/ground-water model, capable of simulating one-dimensional canal/stream flow and two-dimensional overland flow in arbitrarily shaped areas using a variable triangular mesh. The RSM has several options for modeling actual ET. An empirical parameterization scheme that is dependent on land-cover, water-depth and potential ET is used in this study for estimating actual ET. The parameter-sensitivities of this scheme are investigated and analyzed for several predominant land-cover classes, and dry- and wet-soil conditions. The RSM is calibrated and verified using historical time-series data from 1988 to 1995, and 1996 to 2000, respectively. All sensitivity and error analyses are conducted using estimates from the verification period.

Senarath, S. U.

2004-12-01

326

MINING ERROR VARIANCE AND HITTING PAY-DIRT: Discovering Systematic Variation in Social Sentiments

Microsoft Academic Search

According to traditional error theory, sentiment measurements vary unsystematically from individual to individual. However, we find some patterned deviation in sentiments that characterize subsets of respondents within a seemingly homogeneous population. After demonstrating the existence of such patterns, we report an exploratory study aimed at identifying social characteristics of people with different patterns of sentiments. People embedded in multiple social

Lisa Thomas; David R. Heise

1995-01-01

327

Study of the effect of beam spreading on systematic Doppler flow measurement errors  

Microsoft Academic Search

Doppler ultrasonic flowmeters are based on a single ray approximation. We have studied the effect of beam spreading on the systematic Doppler flow measurement by developing a theoretical ray tracing model and validating the same with experiments. This paper will discuss experimental work and the use of ray tracing and finite element models to investigate this effect. This paper indicates

D. V Mahadeva; S. M. Huang; G. Oddie; R. C. Baker

2010-01-01

328

Propagation of Blood Function Errors to the Estimates of Kinetic Parameters with Dynamic PET  

PubMed Central

Dynamic PET, in contrast to static PET, can identify temporal variations in the radiotracer concentration. Mathematical modeling of the tissue of interest in dynamic PET can be simplified using compartment models as a linear system where the time activity curve of a specific tissue is the convolution of the tracer concentration in the plasma and the impulse response of the tissue containing the kinetic parameters. Since arterial sampling of blood to acquire the value of the tracer concentration is invasive, blind methods to estimate both the blood input function and the kinetic parameters have recently drawn attention. Several methods have been developed, but the effect of the accuracy of the estimated blood function on the estimation of the kinetic parameters has not been studied. In this paper, we present a method to compute the error in the kinetic parameter estimates caused by the error in the blood input function. Computer simulations show that the analytical expressions we derive are sufficiently close to results obtained from numerical methods. Our findings are important not only for observing the effect of the blood function on kinetic parameter estimation, but also for evaluating various blind methods and observing the dependence of kinetic parameter estimates on particular parts of the blood function.
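
The sketch below illustrates the underlying idea in Python for a one-tissue compartment model: the tissue curve is the convolution of the plasma input with an exponential impulse response, and a perturbed input function propagates into biased kinetic parameter estimates. The model form, parameter values and perturbation are assumptions, not the paper's analytical derivation:

    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0, 60, 121)                  # minutes
    dt = t[1] - t[0]
    Cp_true = 10 * t * np.exp(-t / 2.0)          # assumed plasma input function

    def tissue(t, K1, k2, Cp):
        # one-tissue model: C_T = K1 * conv(Cp, exp(-k2 t))
        return K1 * np.convolve(Cp, np.exp(-k2 * t))[:t.size] * dt

    K1_true, k2_true = 0.1, 0.05
    C_T = tissue(t, K1_true, k2_true, Cp_true)   # noise-free tissue curve

    # refit with a slightly wrong input function, e.g. from a blind estimate
    Cp_biased = 10 * t * np.exp(-t / 2.2)
    popt, _ = curve_fit(lambda tt, K1, k2: tissue(tt, K1, k2, Cp_biased),
                        t, C_T, p0=[0.05, 0.1])
    print("K1 error %+.1f%%   k2 error %+.1f%%"
          % (100*(popt[0]/K1_true - 1), 100*(popt[1]/k2_true - 1)))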

Cheng, Yafang; Yetik, Imam Samil

2011-01-01

329

Robust detection and verification of linear relationships to generate metabolic networks using estimates of technical errors  

PubMed Central

Background: The size and magnitude of the metabolome, the ratios between individual metabolites, and the response of metabolic networks are controlled by multiple cellular factors. A tight control over metabolite ratios will be reflected by a linear relationship between pairs of metabolites due to the flexibility of metabolic pathways. Hence, unbiased detection and validation of linear metabolic variance can be interpreted in terms of biological control. For robust analyses, criteria for rejecting or accepting linearities need to be developed despite technical measurement errors. The entirety of all pairwise linear metabolic relationships then yields insights into the network of cellular regulation. Results: Bayes' law was applied for detecting linearities that are validated by explaining the residues by the degree of technical measurement errors. Test statistics were developed and the algorithm was tested on simulated data using 3–150 samples and 0–100% technical error. Under the null hypothesis of the existence of a linear relationship, type I errors remained below 5% for data sets consisting of more than four samples, whereas the type II error rate rose quickly with increasing technical errors. Conversely, a filter was developed to balance the error rates in the opposite direction. A minimum of 20 biological replicates is recommended if technical errors remain below 20% relative standard deviation and if thresholds for false error rates are acceptable at less than 5%. The algorithm was proven to be robust against outliers, unlike Pearson's correlations. Conclusion: The algorithm facilitates finding linear relationships in complex datasets, which is radically different from estimating linearity parameters from given linear relationships. Without the filter, it provides high sensitivity and fair specificity. If the filter is activated, high specificity but only fair sensitivity is yielded. Total error rates are more favorable with deactivated filters, and hence, metabolomic networks should be generated without the filter. In addition, Bayesian likelihoods facilitate the detection of multiple linear dependencies between two variables. This property of the algorithm enables its use as a discovery tool and to generate novel hypotheses of the existence of otherwise hidden biological factors.

Kose, Frank; Budczies, Jan; Holschneider, Matthias; Fiehn, Oliver

2007-01-01

330

Measurement Error in Prenatal Care Utilization: Evidence of Attenuation Bias in the Estimation of Impact on Birth Weight  

Microsoft Academic Search

Objective: Errors in the measurement of the timing and number of prenatal care visits may produce downward bias in estimates of the impact of prenatal care use on birth outcomes. This paper examines the extent of attenuation bias from measurement error in the estimation of the effect of prenatal care use on birth weight. Methods: Data were analyzed from the

John R. Penrod; Paula M. Lantz

2000-01-01

331

Numerical solution of a class of Integro-Differential equations by the Tau Method with an error estimation  

Microsoft Academic Search

The Tau Method, by construction, produces approximate polynomial solutions of differential equations. The purpose of this paper is to extend the Tau Method to the Integro-Differential equations. An efficient error estimation for the Tau method is also introduced. Details of this method are presented and some numerical results along with estimated errors are given to clarify the method and its

S. M. Hosseini; S. Shahmorad

2003-01-01

332

A suggested method of estimation for spatial interdependent models with autocorrelated errors, and an application to a county expenditure model  

Microsoft Academic Search

The purpose of this paper is two-fold. First, we describe an estimation procedure that should be useful for spatial models which contain interactions between the dependent variables and autocorrelated error terms. Second, we apply that procedure to a spatial model relating to county police expenditures. Our estimation procedure does not require the specification of the error distribution, and its computational

Harry H. Kelejian; Dennis P. Robinson

1993-01-01

333

Random and systematic errors of PVT/MS-TAM (Tritium Analysis Meter) measurements of tritium gas: Evaluation and use in material balance calculations  

SciTech Connect

Propagation-of-error calculations were performed for material balance areas for processes involving the handling of bulk quantities of tritium gas. Random and systematic error components were obtained from pressure, volume, temperature, and isotopic measurements performed under Mound measurement control and calibration programs. The resulting error components were used to determine the uncertainties in determinations of physical inventories and material flow through material balance areas.
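
A minimal sketch of this kind of propagation-of-error calculation in Python, assuming invented relative uncertainties rather than Mound calibration data; random components average down over repeated measurements while systematic components do not:

    import numpy as np

    # relative standard uncertainties: (random, systematic); values invented
    u_P = (0.002, 0.001)    # pressure
    u_V = (0.001, 0.003)    # volume
    u_T = (0.001, 0.0005)   # temperature
    u_I = (0.004, 0.002)    # isotopic (tritium fraction)

    # For n ~ P*V/(R*T) * f_T, relative errors combine in quadrature
    rand = np.sqrt(sum(u[0]**2 for u in (u_P, u_V, u_T, u_I)))
    syst = np.sqrt(sum(u[1]**2 for u in (u_P, u_V, u_T, u_I)))

    # Over N transfers in a material balance area, random parts average down,
    # systematic parts do not
    N = 25
    total = np.sqrt((rand/np.sqrt(N))**2 + syst**2)
    print("random %.4f  systematic %.4f  inventory total %.4f" % (rand, syst, total))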

Rudy, C.R.

1986-12-22

334

Estimation of the Error in Carbon Dioxide Column Abundances Retrieved from GOSAT Data  

Microsoft Academic Search

In this chapter, the estimation of error in the method used to retrieve carbon dioxide (CO2) column abundances obtained from the Greenhouse Gases Observing Satellite (GOSAT) is presented. GOSAT will be the first satellite dedicated to primarily observe CO2 and methane (CH4), which are considered to be major greenhouse gases, from space. The column abundances of CO2 and CH4 can

Mitsuhiro Tomosada; Koji Kanefuji; Yukio Matsumoto; Hiroe Tsubaki; Tatsuya Yokota

335

A priori error estimates of variational-difference methods for Hencky plasticity problems  

Microsoft Academic Search

In this article, convergence of equilibrium finite-element approximations for variational problems of the Hencky plasticity is analyzed. To obtain a priori error estimates, two regularized problems are considered and additional differentiability properties of their solutions are investigated. This allows us to prove that there is a relation between the parameters of regularization and sampling such that equilibrium approximations of the

S. I. Repin

1997-01-01

336

Bias of heritability estimates from twin data in presence of errors of zygosity determination  

Microsoft Academic Search

Simple heritability estimators of continuous as well as discrete traits from twin data are known to overestimate the degree of genetic determination of the measured traits for several reasons. Errors of zygosity determination will, however, underestimate the true heritability. The bias due to wrong assignment of dizygous twin pairs into monozygous type is evaluated here, and the results indicate that

R. Chakraborty

1990-01-01

337

Assessing the uncertainties on seismic source parameters: Towards realistic error estimates for centroid-moment-tensor determinations  

NASA Astrophysics Data System (ADS)

The centroid-moment-tensor (CMT) algorithm provides a straightforward, rapid method for the determination of seismic source parameters from waveform data. As such, it has found widespread application, and catalogues of CMT solutions - particularly the catalogue maintained by the Global CMT Project - are routinely used by geoscientists. However, there have been few attempts to quantify the uncertainties associated with any given CMT determination: whilst catalogues typically quote a 'standard error' for each source parameter, these are generally accepted to significantly underestimate the true scale of uncertainty, as all systematic effects are ignored. This prevents users of source parameters from properly assessing possible impacts of this uncertainty upon their own analysis. The CMT algorithm determines the best-fitting source parameters within a particular modelling framework, but any deficiencies in this framework may lead to systematic errors. As a result, the minimum-misfit source may not be equivalent to the 'true' source. We suggest a pragmatic solution to uncertainty assessment, based on accepting that any 'low-misfit' source may be a plausible model for a given event. The definition of 'low-misfit' should be based upon an assessment of the scale of potential systematic effects. We set out how this can be used to estimate the range of values that each parameter might take, by considering the curvature of the misfit function as minimised by the CMT algorithm. This approach is computationally efficient, with cost similar to that of performing an additional iteration during CMT inversion for each source parameter to be considered. The source inversion process is sensitive to the various choices that must be made regarding dataset, earth model and inversion strategy, and for best results, uncertainty assessment should be performed using the same choices. Unfortunately, this information is rarely available when sources are obtained from catalogues. As already indicated by Valentine and Woodhouse (2010), researchers conducting comparisons between data and synthetic waveforms must ensure that their approach to forward-modelling is consistent with the source parameters used; in practice, this suggests that they should consider performing their own source inversions. However, it is possible to obtain rough estimates of uncertainty using only forward-modelling.
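
A toy Python sketch of the curvature-based estimate suggested above: approximate the second derivative of the misfit at the minimum by finite differences, then convert a misfit tolerance into a parameter range. The one-parameter quadratic misfit stands in for the real CMT waveform misfit:

    import numpy as np

    def misfit(m):
        # toy one-parameter misfit; minimum value 2.0 at m = 1.3
        return 2.0 + 50.0*(m - 1.3)**2

    m0, h = 1.3, 1e-3                     # best-fit parameter, finite-difference step
    # one extra evaluation per parameter, comparable to one inversion iteration
    curv = (misfit(m0 + h) - 2*misfit(m0) + misfit(m0 - h)) / h**2

    dF = 0.5                              # misfit tolerance from systematic effects
    dm = np.sqrt(2*dF / curv)             # misfit(m0 +/- dm) ~ misfit(m0) + dF
    print("plausible range: %.3f +/- %.3f" % (m0, dm))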

Valentine, Andrew P.; Trampert, Jeannot

2012-11-01

338

Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter  

NASA Astrophysics Data System (ADS)

The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
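
For reference, a minimal sketch of the cross-ratio scheme of Ohlsen and Keaton [1] in Python, with invented left/right yields; acceptance and luminosity factors cancel to first order in the ratio r, and the failure modes discussed above enter at second order:

    import numpy as np

    # left/right detector yields for the two polarization states (invented)
    L_up, R_up = 10500.0, 9500.0
    L_dn, R_dn = 9480.0, 10480.0

    r = np.sqrt((L_up * R_dn) / (L_dn * R_up))   # cross ratio
    eps = (r - 1.0) / (r + 1.0)                  # = p*A for ideal, equal-magnitude states
    A = 0.5                                      # assumed analyzing power
    print("asymmetry %.4f -> polarization %.3f" % (eps, eps / A))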

Stephenson, Edward; Imig, Astrid

2009-10-01

339

Systematic reduction of sign errors in many-body calculations of atoms and molecules  

SciTech Connect

The self-healing diffusion Monte Carlo algorithm (SHDMC) [Phys. Rev. B 79, 195117 (2009); ibid. 80, 125110 (2009)] is applied to the calculation of ground states of atoms and molecules. By direct comparison with accurate configuration interaction results we show that applying the SHDMC method to the oxygen atom leads to systematic convergence towards the exact ground state wave function. We present results for the small but challenging N2 molecule, where results obtained via the energy minimization method and SHDMC agree within the experimental accuracy of 0.08 eV. Moreover, we demonstrate that the algorithm is robust enough to be used for calculations of systems at least as large as C20 starting from a set of random coefficients. SHDMC thus constitutes a practical method for systematically reducing the fermion sign problem in electronic structure calculations.

Bajdich, Michal [ORNL]; Tiago, Murilo L. [ORNL]; Hood, Randolph Q. [Lawrence Livermore National Laboratory (LLNL)]; Kent, Paul R. [ORNL]; Reboredo, Fernando A. [ORNL]

2010-01-01

340

On Reducing Error Rate of Data Protected Using Systematic Unordered Codes in Asymmetric Channels  

Microsoft Academic Search

Berger-invert codes are coding schemes used to protect communication channels against all asymmetric errors and to decrease power consumption. This paper proposes a method of constructing modified Berger-invert codes that relies on the choice of check parts with the smallest possible total weight and assignment of low-weight check parts to the most numerous subsets of data with the largest Hamming

Stanislaw J. Piestrak

2010-01-01

341

[Error analysis of estimating terrestrial soil organic carbon storage in China].  

PubMed

The paper summarizes different methods for estimating soil organic carbon (SOC) storage, including methods based on soil taxonomy, vegetation types, Holdridge zones, correlative relationships and modeling. The error analysis of the SOC calculation was introduced. Based on 2,473 soil profiles from the second national soil survey, adopting the soil taxonomy method and two kinds of SOC density, the terrestrial SOC storage in China was estimated. The SOC storage in China ranges between 615.19 × 10^14 g and 1211.37 × 10^14 g, and the average SOC density is 10.49-10.53 kg·m^-2 (for an average soil depth of 100 cm) or 11.52-12.04 kg·m^-3 (for an average soil depth of 88 cm). Through estimation and error analysis, the average SOC storage is about (913.28 ± 298.09) × 10^14 g and the uncertainty is between 20% and 50%. Results showed that differences in SOC estimation and sampling methods are important factors in SOC estimation uncertainty. PMID:12924144

Wang, Shaoqiang; Liu, Jiyuan; Yu, Guirui

2003-05-01

342

Application of least squares variance component estimation to errors-in-variables models  

NASA Astrophysics Data System (ADS)

In an earlier work, a simple and flexible formulation for the weighted total least squares (WTLS) problem was presented. The formulation allows one to directly apply the existing body of knowledge of least squares theory to the errors-in-variables (EIV) models, in which a complete description of the covariance matrices of the observation vector and of the design matrix can be employed. This contribution applies one of the well-known theories, least squares variance component estimation (LS-VCE), to the total least squares problem. LS-VCE is adopted to cope with the estimation of different variance components in an EIV model having a general covariance matrix obtained from the (fully populated) covariance matrices of the functionally independent variables and a proper application of the error propagation law. Two empirical examples using real and simulated data are presented to illustrate the theory. The first example is a linear regression model and the second is a 2-D affine transformation. For each application, two variance components, one for the observation vector and one for the coefficient matrix, are simultaneously estimated. Because the formulation is based on the standard least squares theory, the covariance matrix of the estimates in general, and the precision of the estimates in particular, can also be presented.

Amiri-Simkooei, A. R.

2013-10-01

343

Sieve Estimation of Constant and Time-Varying Coefficients in Nonlinear Ordinary Differential Equation Models by Considering Both Numerical Error and Measurement Error  

PubMed Central

This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge–Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^{-1/(p∧4)}, the numerical error is negligible compared to the measurement error. This result provides a theoretical guidance in selection of the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics.
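
The following Python sketch illustrates the interplay of numerical and measurement error using a deliberately crude explicit Euler solver (p = 1) inside a nonlinear least-squares fit; the test ODE, noise level and step sizes are illustrative, not the article's setup:

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(2)
    t_obs = np.linspace(0, 5, 26)
    theta_true = 0.8

    def solve_euler(theta, h):
        # integrate y' = -theta * y, y(0) = 1 with explicit Euler, step h
        n = int(round(5 / h))
        tt = np.linspace(0, 5, n + 1)
        y = np.empty(n + 1); y[0] = 1.0
        for k in range(n):
            y[k+1] = y[k] + h * (-theta * y[k])
        return np.interp(t_obs, tt, y)

    y_obs = np.exp(-theta_true * t_obs) + 0.02 * rng.normal(size=t_obs.size)

    for h in (0.5, 0.005):          # coarse step biases theta_hat, fine step does not
        fit = least_squares(lambda p: solve_euler(p[0], h) - y_obs, x0=[0.5])
        print("h = %-6g -> theta_hat = %.4f (true %.1f)" % (h, fit.x[0], theta_true))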

Xue, Hongqi; Miao, Hongyu; Wu, Hulin

2010-01-01

344

A Comprehensive Aerological Reference Data Set (CARDS): Rough and Systematic Errors.  

NASA Astrophysics Data System (ADS)

The possibility of anthropogenic climate change and the possible problems associated with it are of great interest. However, one cannot study climate change without climate data. The Comprehensive Aerological Reference Data Set (CARDS) project will produce high-quality, daily upper-air data for the research community and for policy makers. CARDS intends to produce a dataset consisting of radiosonde and pibal data that is easy to use, as complete as possible, and as free of errors as possible. An attempt will be made to identify and correct biases in upper-air data whenever possible. This paper presents the progress made to date in achieving this goal. An advanced quality control procedure has been tested and implemented. It is capable of detecting and often correcting errors in geopotential height, temperature, humidity, and wind. This unique quality control method uses simultaneous vertical and horizontal checks of several meteorological variables. It can detect errors that other methods cannot. Research is being supported in the statistical detection of sudden changes in time series data. The resulting statistical technique has detected a known humidity bias in the U.S. data. The methods should detect unknown changes in instrumentation, station location, and data-reduction techniques. Software has been developed that corrects radiosonde temperatures, using a physical model of the temperature sensor and its changing environment. An algorithm for determining cloud cover for this physical model has been developed. A numerical check for station elevation based on the hydrostatic equations has been developed, which has identified documented and undocumented station moves. Considerable progress has been made toward the development of algorithms to eliminate a known bias in the U.S. humidity data.

Eskridge, Robert E.; Alduchov, Oleg A.; Chernykh, Irina V.; Panmao, Zhai; Polansky, Arthur C.; Doty, Stephen R.

1995-10-01

345

Spaceborne estimate of atmospheric CO2 column by use of the differential absorption method: error analysis.  

PubMed

For better knowledge of the carbon cycle, there is a need for spaceborne measurements of atmospheric CO2 concentration. Because the gradients are relatively small, the accuracy requirements are better than 1%. We analyze the feasibility of a CO2-weighted-column estimate, using the differential absorption technique, from high-resolution spectroscopic measurements in the 1.6- and 2-µm CO2 absorption bands. Several sources of uncertainty that can be neglected for other gases with less stringent accuracy requirements need to be assessed. We attempt a quantification of errors due to radiometric noise; uncertainties in temperature, humidity, and surface pressure; spectroscopic coefficients; and atmospheric scattering. Atmospheric scattering is the major source of error [5 parts per 10^6 (ppm) for a subvisual cirrus cloud with an assumed optical thickness of 0.03], and additional research is needed to properly assess the accuracy of correction methods. Spectroscopic data are currently a major source of uncertainty but can be improved with specific ground-based sunphotometry measurements. The other sources of error amount to several ppm, which is less than, but close to, the accuracy requirements. Fortunately, these errors are mostly random and will therefore be reduced by proper averaging. PMID:12833966

Dufour, Emmanuel; Bréon, François-Marie

2003-06-20

346

Inversion for mantle viscosity profiles constrained by dynamic topography and the geoid, and their estimated errors  

NASA Astrophysics Data System (ADS)

We perform a joint inversion of Earth's geoid and dynamic topography for radial mantle viscosity structure using a number of models of interior density heterogeneities, including an assessment of the error budget. We identify three classes of errors: those related to the density perturbations used as input, those due to insufficiently constrained observables, and those due to the limitations of our analytical model. We estimate the amplitudes of these errors in the spectral domain. Our minimization function weights the squared deviations of the compared quantities with the corresponding errors, so that the components with more reliability contribute to the solution more strongly than less certain ones. We develop a quasi-analytical solution for mantle flow in a compressible, spherical shell with Newtonian rheology, allowing for continuous radial variations of viscosity, together with a possible reduction of viscosity within the phase change regions due to the effects of transformational superplasticity. The inversion reveals three distinct families of viscosity profiles, all of which have an order of magnitude stiffening within the lower mantle, with a soft D'' layer below. The main distinction among the families is the location of the lowest-viscosity region: directly beneath the lithosphere, just above 400 km depth, or just above 670 km depth. All profiles have a reduction of viscosity within one or more of the major phase transformations, leading to reduced dynamic topography, so that whole-mantle convection is consistent with small surface topography.

Panasyuk, Svetlana V.; Hager, Bradford H.

2000-12-01

347

Improving occupancy estimation when two types of observational error occur: Non-detection and species misidentification  

USGS Publications Warehouse

Efforts to draw inferences about species occurrence frequently account for false negatives, the common situation when individuals of a species are not detected even when a site is occupied. However, recent studies suggest the need to also deal with false positives, which occur when species are misidentified so that a species is recorded as detected when a site is unoccupied. Bias in estimators of occupancy, colonization, and extinction can be severe when false positives occur. Accordingly, we propose models that simultaneously account for both types of error. Our approach can be used to improve estimates of occupancy for study designs where a subset of detections is of a type or method for which false positives can be assumed to not occur. We illustrate properties of the estimators with simulations and data for three species of frogs. We show that models that account for possible misidentification have greater support (lower AIC for two species) and can yield substantially different occupancy estimates than those that do not. When the potential for misidentification exists, researchers should consider analytical techniques that can account for this source of error, such as those presented here. © 2011 by the Ecological Society of America.
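
A small simulation in Python makes the bias concrete: even a low per-visit false-positive probability inflates a naive occupancy estimate. All probabilities below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(3)
    n_sites, n_visits = 500, 5
    psi, p11, p10 = 0.4, 0.5, 0.03   # occupancy, detection, false-positive probs

    z = rng.random(n_sites) < psi                          # latent occupancy state
    p_det = np.where(z, p11, p10)                          # per-visit detection prob
    y = rng.random((n_sites, n_visits)) < p_det[:, None]   # detection histories

    naive = np.mean(y.any(axis=1))   # fraction of sites with >= 1 detection
    print("true psi = %.2f, naive estimate = %.3f" % (psi, naive))
    # Models that accommodate misidentification correct this inflation.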

Miller, D. A.; Nichols, J. D.; McClintock, B. T.; Grant, E. H. C.; Bailey, L. L.; Weir, L. A.

2011-01-01

348

Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review  

PubMed Central

Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients.

2013-01-01

349

Measurement error in estimates of sprint velocity from a laser displacement measurement device.  

PubMed

This study aimed to determine the measurement error associated with estimates of velocity from a laser-based device during different phases of a maximal athletic sprint. Laser-based displacement data were obtained from 10 sprinters completing a total of 89 sprints and were fitted with a fifth-order polynomial function which was differentiated to obtain instantaneous velocity data. These velocity estimates were compared against criterion high-speed video velocities at either 1, 5, 10, 30 or 50 m using a Bland-Altman analysis to assess bias and random error. Bias was highest at 1 m (+0.41 m/s) and tended to decrease as the measurement distance increased, with values less than +0.10 m/s at 30 and 50 m. Random error was more consistent between distances, and reached a minimum value (±0.11 m/s) at 10 m. Laser devices offer a potentially useful, time-efficient tool for assessing between-subject or between-session performance from the mid-acceleration and maximum velocity phases (i.e., at 10 m and beyond), although only differences exceeding 0.22-0.30 m/s should be considered genuine. However, laser data should not be used during the first 5 m of a sprint, and are likely of limited use for assessing within-subject variation in performance during a single session. PMID:22450882
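
A sketch of the processing chain described above, using simulated data in Python: fit a fifth-order polynomial to noisy displacement, differentiate it for velocity, and inspect the bias against a criterion velocity at the distances studied. The sprint model and noise level are assumptions:

    import numpy as np

    rng = np.random.default_rng(4)
    t = np.linspace(0, 7, 350)                        # s, ~50 Hz sampling
    v_max, tau = 9.5, 1.3                             # assumed sprint parameters
    x_true = v_max*(t + tau*np.exp(-t/tau) - tau)     # position for v = v_max*(1 - e^(-t/tau))
    x_laser = x_true + 0.05*rng.normal(size=t.size)   # laser displacement noise

    P = np.polynomial.polynomial
    coef = P.polyfit(t, x_laser, 5)                   # fifth-order polynomial fit
    v_est = P.polyval(t, P.polyder(coef))             # differentiate for velocity
    v_crit = v_max*(1 - np.exp(-t/tau))               # criterion velocity

    for d in (1, 5, 10, 30, 50):                      # distances studied above
        i = np.argmin(np.abs(x_true - d))
        print("%2d m: bias %+.2f m/s" % (d, v_est[i] - v_crit[i]))
    # A full Bland-Altman analysis would pool bias and random error over many
    # sprints; this shows a single trial.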

Bezodis, N E; Salo, A I T; Trewartha, G

2012-03-26

350

Variance reduction for error estimation when classifying colon polyps from CT colonography  

NASA Astrophysics Data System (ADS)

For cancer polyp detection based on CT colonography we investigate the sample variance of two methods for estimating the sensitivity and specificity. The goal is the reduction of sample variance for both error estimates, as a first step towards comparison with other detection schemes. Our detection scheme is based on a committee of support vector machines. The two estimates of sensitivity and specificity studied here are a smoothed bootstrap (the 632+ estimator), and ten-fold cross-validation. It is shown that the 632+ estimator generally has lower sample variance than the usual cross-validation estimator. When the number of nonpolyps in the training set is relatively small we obtain approximately 80% sensitivity and 50% specificity (for either method). On the other hand, when the number of nonpolyps in the training set is relatively large, estimated sensitivity (for either method) drops considerably. Finally, we consider the intertwined roles of relative sample sizes (polyp/nonpolyp), misclassification costs, and bias-variance reduction.
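
A compact Python sketch of the .632+ bootstrap error estimate (Efron and Tibshirani's correction) on synthetic data, with a toy nearest-centroid classifier standing in for the committee of support vector machines:

    import numpy as np

    rng = np.random.default_rng(5)
    n = 200
    X = np.vstack([rng.normal(0, 1, (n//2, 2)), rng.normal(1.2, 1, (n//2, 2))])
    y = np.r_[np.zeros(n//2), np.ones(n//2)].astype(int)

    def fit_predict(Xtr, ytr, Xte):
        # toy classifier: assign to the nearer class centroid
        c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
        return (np.linalg.norm(Xte - c1, axis=1)
                < np.linalg.norm(Xte - c0, axis=1)).astype(int)

    err_app = np.mean(fit_predict(X, y, X) != y)    # apparent (training) error

    # leave-one-out bootstrap error: average error on out-of-bag samples
    errs = []
    for _ in range(200):
        idx = rng.integers(0, n, n)
        oob = np.setdiff1d(np.arange(n), idx)
        if oob.size:
            errs.append(np.mean(fit_predict(X[idx], y[idx], X[oob]) != y[oob]))
    err_boot = np.mean(errs)

    # no-information rate and relative overfitting (the "+" correction)
    p1, q1 = np.mean(y == 1), np.mean(fit_predict(X, y, X) == 1)
    gamma = p1*(1 - q1) + (1 - p1)*q1
    R = np.clip((err_boot - err_app) / max(gamma - err_app, 1e-12), 0, 1)
    w = 0.632 / (1 - 0.368*R)
    err_632p = (1 - w)*err_app + w*err_boot
    print("apparent %.3f  oob %.3f  .632+ %.3f" % (err_app, err_boot, err_632p))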

Malley, James D.; Jerebko, Anna K.; Miller, Meghan T.; Summers, Ronald M.

2003-05-01

351

Stacked Weak Lensing Mass Calibration: Estimators, Systematics, and Impact on Cosmological Parameter Constraints  

NASA Astrophysics Data System (ADS)

When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.

Rozo, Eduardo; Wu, Hao-Yi; Schmidt, Fabian

2011-07-01

352

Improving prediction uncertainty estimation in urban hydrology with an error model accounting for bias  

NASA Astrophysics Data System (ADS)

Predictions of the urban hydrologic response are of paramount importance for foreseeing floods and sewer overflows and hence for supporting sensible decision making. Due to several error sources, model results are uncertain. By modeling these uncertainties statistically, we can estimate how reliable the predictions are. Most hydrological studies in urban areas (e.g. Freni and Mannina, 2010) assume that the residuals E are independent and identically distributed. These hypotheses are usually strongly violated due to neglected deficits in model structure and errors in input data, which lead to strong autocorrelation. We propose a new methodology to i) estimate the total uncertainty and ii) quantify the different types of errors affecting model results, namely parametric, structural, input data, and calibration data uncertainty. We can thereby make more realistic assumptions about the residuals. We consider the residual process to be the sum of an autocorrelated error term B and a memory-less uncertainty term E. As proposed by Reichert and Schuwirth (2012), B, called model inadequacy or bias, is described by a normally distributed autoregressive process and accounts for structural deficiencies and errors in input measurements. The observation error E is instead normally and independently distributed. Since urban watersheds are extremely responsive to precipitation events, we modified this framework, making the bias input-dependent and transforming model results and data to stabilize the residual variance. To show the improvement in uncertainty quantification, we analyzed the response of a monitored stormwater system. We modeled the outlet discharge for several rain events using a conceptual model. For comparison, we computed the uncertainties with the traditional independent error model (e.g. Freni and Mannina, 2010). The quality of the prediction uncertainty bands was analyzed through residual diagnostics for the calibration phase and prediction coverage in the validation phase. The results of this study clearly show that the input-dependent autocorrelated error model outperforms the independent residual representation. This is evident when comparing the fulfillment of the distribution assumptions on E. The bias error model produces realizations of E that are much smaller (and thus more realistic), less autocorrelated, and less heteroscedastic than those from the independent error model. Furthermore, the proportion of validation data falling into the 95% credibility intervals is circa 15% higher when accounting for bias than under the independence assumption. Our framework describing model bias appears very promising for improving the fulfillment of the statistical assumptions and for decomposing predictive uncertainty. We believe that the proposed error model will be suitable for many applications because the computational expense is only negligibly increased compared to the traditional approach. In the future we will show how to use this approach with complex hydrodynamic models to further separate the effects of structural deficits and input uncertainty. References: P. Reichert and N. Schuwirth. 2012. Linking statistical bias description to multiobjective model calibration. Water Resources Research, 48, W09543, doi:10.1029/2011WR011391. G. Freni and G. Mannina. 2010. Bayesian approach for uncertainty quantification in water quality modelling: the influence of prior distribution. Journal of Hydrology, 392, 31-39, doi:10.1016/j.jhydrol.2010.07.043
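
A minimal sketch of such an error model in Python: residuals are generated as an input-dependent AR(1) bias term B plus independent noise E, and the lag-1 autocorrelation shows why an i.i.d. assumption would be violated. The AR(1) form and parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(6)
    T = 500
    rain = np.clip(rng.normal(0.0, 1.0, T), 0.0, None)   # stand-in rain input

    phi, sigma_B0, kappa, sigma_E = 0.95, 0.05, 0.4, 0.02
    B = np.zeros(T)
    for t in range(1, T):
        sigma_t = sigma_B0 * (1.0 + kappa * rain[t])     # bias grows with input
        B[t] = phi * B[t-1] + sigma_t * rng.normal()
    E = sigma_E * rng.normal(size=T)

    resid = B + E                                        # what calibration data show
    lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    print("lag-1 autocorrelation of residuals: %.2f" % lag1)
    # An i.i.d. error model ignores this correlation and yields overconfident
    # (too narrow) prediction intervals.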

Del Giudice, Dario; Reichert, Peter; Honti, Mark; Scheidegger, Andreas; Albert, Carlo; Rieckermann, Jörg

2013-04-01

353

Simulations using patient data to evaluate systematic errors that may occur in 4D treatment planning: A proof of concept study.  

PubMed

Purpose: The purpose of this work is to present a framework to evaluate the accuracy of four-dimensional treatment planning in external beam radiation therapy using measured patient data and digital phantoms. Methods: To accomplish this, 4D digital phantoms of two model patients were created using measured patient lung tumor positions. These phantoms were used to simulate a four-dimensional computed tomography image set, which in turn was used to create a 4D Monte Carlo (4DMC) treatment plan. The 4DMC plan was evaluated by simulating the delivery of the treatment plan over approximately 5 min of tumor motion measured from the same patient on a different day. Unique phantoms accounting for the patient position (tumor position and thorax position) at 2 s intervals were used to represent the model patients on the day of treatment delivery and the delivered dose to the tumor was determined using Monte Carlo simulations. Results: For Patient 1, the tumor was adequately covered with 95.2% of the tumor receiving the prescribed dose. For Patient 2, the tumor was not adequately covered and only 74.3% of the tumor received the prescribed dose. Conclusions: This study presents a framework to evaluate 4D treatment planning methods and demonstrates a potential limitation of 4D treatment planning methods. When systematic errors are present, including when the imaging study used for treatment planning does not represent all potential tumor locations during therapy, the treatment planning methods may not adequately predict the dose to the tumor. This is the first example of a simulation study based on patient tumor trajectories where systematic errors that occur due to an inaccurate estimate of tumor motion are evaluated. PMID:24007139

James, Sara St; Seco, Joao; Mishra, Pankaj; Lewis, John H

2013-09-01

354

Product-error-driven generator of probable rainfall conditioned on WSR-88D precipitation estimates  

NASA Astrophysics Data System (ADS)

The existence of large errors in precipitation products delivered by the network of Weather Surveillance Radar, 1988 Doppler (WSR-88D) radars is broadly recognized. However, their quantitative characteristics remain poorly understood. Recently, the authors developed a functional-statistical model that quantifies the relation between radar rainfall and the corresponding true rainfall in a way that is applicable to the probabilistic quantitative precipitation estimation planned for future use by the U.S. National Weather Service. The model consists of a deterministic distortion function and a random uncertainty factor, both conditioned on given radar rainfall values. It also accounts for the spatiotemporal correlations in the random uncertainty factor. The model components were estimated on the basis of a 6-year-long data sample that considers the effects of seasons, range from radar, and time scales. In this study, the authors present two different applications of the aforementioned uncertainty model: (1) the estimation of rainfall probability maps and (2) the generation of radar rainfall ensembles. In the former, maps of the rainfall exceedance probability for any threshold are produced, given a radar rainfall map. We also present the analytical derivation of the exceedance probability maps at coarser spatial scales. In the latter, the users can generate ensembles of probable true rainfall fields that are consistent with the observed radar rainfall and its error structure. Simulation of the random component is based on the Cholesky decomposition method. Finally, the authors discuss possible uses of these applications in hydrology and hydroclimatology.
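
A sketch of the Cholesky-based ensemble generation step in Python: build a spatial correlation matrix, factor it, and multiply white noise by the factor to obtain correlated perturbation fields. The exponential correlation model, domain size and error parameters are invented for illustration:

    import numpy as np

    n = 20                                            # 20 x 20 grid, 1 km spacing
    xy = np.array([(i, j) for i in range(n) for j in range(n)], dtype=float)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = np.exp(-d / 8.0)                              # correlation, 8 km scale
    L = np.linalg.cholesky(C + 1e-10*np.eye(n*n))     # jitter for stability

    rng = np.random.default_rng(7)
    radar = np.full(n*n, 5.0)                         # radar rainfall map (mm/h)
    bias, cv = 0.9, 0.3                               # distortion, uncertainty (assumed)
    ensemble = [bias*radar * np.exp(cv*(L @ rng.normal(size=n*n))
                                    - 0.5*cv**2)      # lognormal factor, mean ~1
                for _ in range(10)]
    print("member 0 mean: %.2f mm/h" % ensemble[0].mean())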

Villarini, Gabriele; Krajewski, Witold F.; Ciach, Grzegorz J.; Zimmerman, Dale L.

2009-01-01

355

Programming errors contribute to death from patient-controlled analgesia: case report and estimate of probability  

Microsoft Academic Search

Purpose  To identify the factors that threaten patient safety when using patient-controlled analgesia (PCA) and to obtain an evidence-based\\u000a estimate of the probability of death from user programming errors associated with PCA,\\u000a \\u000a \\u000a \\u000a Clinical features  A 19-yr-old woman underwent Cesarean section and delivered a healthy infant, Postoperatively morphine sulfate (2 mg bolus,\\u000a lockout interval of six minutes, four-hour limit of 30 mg) was

Kim J. Vicente; Karima Kada-Bekhaled; Gillian Hillel; Andrea Cassano; Beverley A. Orser

2003-01-01

356

Error in estimates of tissue material properties from shear wave dispersion ultrasound vibrometry.  

PubMed

Shear wave velocity measurements are used in elasticity imaging to find the shear elasticity and viscosity of tissue. A technique called shear wave dispersion ultrasound vibrometry (SDUV) has been introduced to use the dispersive nature of shear wave velocity to locally estimate the material properties of tissue. Shear waves are created using a multifrequency ultrasound radiation force, and the propagating shear waves are measured a few millimeters away from the excitation point. The shear wave velocity is measured using a repetitive pulse-echo method and Kalman filtering to find the phase of the harmonic shear wave at 2 different locations. A viscoelastic Voigt model and the shear wave velocity measurements at different frequencies are used to find the shear elasticity (μ1) and viscosity (μ2) of the tissue. The purpose of this paper is to report the accuracy of the SDUV method over a range of different values of μ1 and μ2. A motion detection model of a vibrating scattering medium was used to analyze measurement errors of vibration phase in a scattering medium. To assess the accuracy of the SDUV method, we modeled the effects of phase errors on estimates of shear wave velocity and material properties while varying parameters such as shear stiffness and viscosity, shear wave amplitude, the distance between shear wave measurements (Δr), signal-to-noise ratio (SNR) of the ultrasound pulse-echo method, and the frequency range of the measurements. We performed an experiment in a section of porcine muscle to evaluate variation of the aforementioned parameters on the estimated shear wave velocity and material property measurements and to validate the error prediction model. The model showed that errors in the shear wave velocity and material property estimates were minimized by maximizing shear wave amplitude, pulse-echo SNR, Δr, and the bandwidth used for shear wave measurements. The experimental model showed optimum performance could be obtained for Δr = 3-6 mm, SNR = 35 dB, with a frequency range of 100 to 600 Hz, and with a shear wave amplitude on the order of a few microns down to 0.5 μm. The model provides a basis to explore different parameters related to implementation of the SDUV method. The experiment confirmed conclusions made by the model, and the results can be used for optimization of SDUV. PMID:19406703
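
A sketch of the Voigt-model fitting step in Python, using the standard Voigt dispersion relation c(w) = sqrt(2*(mu1^2 + w^2*mu2^2) / (rho*(mu1 + sqrt(mu1^2 + w^2*mu2^2)))); the tissue values and noise level are illustrative:

    import numpy as np
    from scipy.optimize import curve_fit

    rho = 1000.0                                   # tissue density, kg/m^3

    def c_voigt(f, mu1, mu2):
        # Voigt-model shear wave phase velocity at frequency f
        w = 2*np.pi*f
        s = np.sqrt(mu1**2 + (w*mu2)**2)
        return np.sqrt(2*s**2 / (rho*(mu1 + s)))

    f = np.arange(100.0, 601.0, 100.0)             # Hz, the 100-600 Hz band above
    mu1_true, mu2_true = 3000.0, 2.0               # Pa, Pa*s (assumed tissue values)
    rng = np.random.default_rng(8)
    c_meas = c_voigt(f, mu1_true, mu2_true) + 0.05*rng.normal(size=f.size)

    popt, pcov = curve_fit(c_voigt, f, c_meas, p0=[1000.0, 1.0])
    perr = np.sqrt(np.diag(pcov))
    print("mu1 = %.0f +/- %.0f Pa, mu2 = %.2f +/- %.2f Pa*s"
          % (popt[0], perr[0], popt[1], perr[1]))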

Urban, Matthew W; Chen, Shigao; Greenleaf, James F

2009-04-01

357

Laboratory measurement error in external dose estimates and its effects on dose-response analyses of Hanford worker mortality data  

SciTech Connect

This report addresses laboratory measurement error in estimates of external doses obtained from personnel dosimeters, and investigates the effects of these errors on linear dose-response analyses of data from epidemiologic studies of nuclear workers. These errors have the distinguishing feature that they are independent across time and across workers. Although the calculations made for this report were based on Hanford data, the overall conclusions are likely to be relevant for other epidemiologic studies of workers exposed to external radiation.

Gilbert, E.S.; Fix, J.J.

1996-08-01

358

A systematic error in MST/ST radar wind measurement induced by a finite range volume effect: 1. Observational results  

NASA Astrophysics Data System (ADS)

Wind measurement by MST/ST radars may be accompanied by a systematic error due to a finite range volume effect, which operates when a thin turbulent layer is simultaneously located in several adjacent range volumes. The error occurs when the layer coincides with a cross section through the range volume which is not symmetric with respect to the center of the beam. The finite range volume effect appears as a false vertical shear of horizontal wind on a vertical scale of the order of a few hundred meters, even if the ambient wind field is uniform. The false wind shear sometimes exceeds 40 m s^-1 km^-1 in magnitude, the critical value to induce the Kelvin-Helmholtz instability. The effect also leads to a false temporal variation of the wind measurement, although the wind field does not change at all. A false wind shear with a magnitude less than 40 m s^-1 km^-1 cannot be discriminated from a true one in the observed data. It seems hard to show directly that the finite range volume effect appears as theoretically conceived. Judging from wind velocity and echo intensity data obtained by the MU radar in Japan, this effect appears quite frequently in the atmosphere. Small vertical-scale wind shear, as well as temporal variation found only at a specific range, should be treated with great care except when the ambient wind field is weak, where the finite range volume effect is not so important.

Fukao, Shoichiro; Sato, Toru; May, Peter T.; Tsuda, Toshitaka; Kato, Susumu; Inaba, Motoyuki; Kimura, Iwane

1988-01-01

359

The Gaussian hare and the Laplacian tortoise: computability of squared-error versus absolute-error estimators  

Microsoft Academic Search

Since the time of Gauss, it has been generally accepted that ℓ2-methods of combining observations by minimizing sums of squared errors have significant computational advantages over earlier ℓ1-methods based on minimization of absolute errors advocated by Boscovich, Laplace and others. However, ℓ1-methods are known to have significant robustness advantages over ℓ2-methods in many applications, and related quantile regression methods provide

Stephen Portnoy; Roger Koenker; Ronald A. Thisted; M. R. Osborne

1997-01-01

360

Estimation of the extrapolation error in the calibration of type S thermocouples  

NASA Astrophysics Data System (ADS)

Measurement results from the calibration performed at NIST of ten new type S thermocouples have been analyzed to estimate the extrapolation error. The thermocouples were calibrated at the fixed points of Zn, Al, Ag and Au, and calibration curves were calculated using different numbers of fixed points. It was found for these thermocouples that the absolute value of the extrapolation error, evaluated by measurement at the Au freezing-point temperature, is at most 0.10 °C when the fixed points of Zn, Al and Ag are used to calculate the calibration curve, and at most 0.27 °C when only the fixed points of Zn and Al are used. It is also shown that the absolute value of the extrapolation error, evaluated by measurement at the Ag freezing-point temperature, is at most 0.25 °C when the fixed points of Zn and Al are used to calculate the calibration curve. This study is intended to help laboratories that lack a direct mechanism to achieve a high-temperature calibration. It supports, up to 1064 °C, the application of a procedure similar to that used by Burns and Scroger in NIST SP-250-35 for calibrating a new type S thermocouple. The uncertainty amounts to a few tenths of a degree Celsius.
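
The sketch below mimics the extrapolation test in Python: fit a calibration polynomial on subsets of fixed points and evaluate it at the Au point. The emf deviations are invented placeholders (a real analysis would use measured emfs against the type S reference function), so only the procedure, not the numbers, is meaningful:

    import numpy as np

    # ITS-90 fixed-point temperatures (deg C); emf deviations (uV) are invented
    fp_T = {"Zn": 419.527, "Al": 660.323, "Ag": 961.78, "Au": 1064.18}
    emf_dev = {"Zn": 1.2, "Al": 2.0, "Ag": 3.1, "Au": 3.6}   # hypothetical values

    P = np.polynomial.polynomial

    def extrapolate_to_au(points):
        T = np.array([fp_T[p] for p in points])
        d = np.array([emf_dev[p] for p in points])
        coef = P.polyfit(T, d, len(points) - 1)     # interpolating polynomial
        return P.polyval(fp_T["Au"], coef)

    for pts in (["Zn", "Al", "Ag"], ["Zn", "Al"]):
        pred = extrapolate_to_au(pts)
        print("fit on %-8s -> Au deviation %+.2f uV (vs. measured %+.2f uV)"
              % ("+".join(pts), pred, emf_dev["Au"]))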

Giorgio, P.; Garrity, K. M.; Rebagliati, M. Jiménez; García Skabar, J.

2013-09-01

361

Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes  

SciTech Connect

We present a new method of metric recovery for minimization of L_p-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

Lipnikov, Konstantin [Los Alamos National Laboratory]; Agouzal, Abdellatif [Univ de Lyon, France]; Vassilevski, Yuri [Russia]

2009-01-01

362

The use of bias correcting error models for estimating effective unsaturated flow parameters  

NASA Astrophysics Data System (ADS)

One of the problems when modeling water fluxes in the unsaturated zone is estimating the model parameters from observations. Due to heterogeneities of the soil, these parameters depend on length scale. Especially for flow models with large domain sizes, it is often required to represent soil structure as simply as possible. This means that heterogeneous structures with strong effects on the flow behavior may become incorporated into larger homogeneous grids, requiring that the model be set up in such a way that the impact of the structure on averaged variables is still represented. When calibrating a flow model for the unsaturated zone, it is therefore important that the resulting effective parameters are independent of where measurements are taken. The calibration can become problematic if observation volumes are small compared to the modeling scale. Many approaches to deal with these problems have been suggested, including upscaling theory and geostatistics. This study looks at the use of explicit error models to guide a Markov Chain Monte Carlo (MCMC) calibration process towards sets of effective parameters for an upscaled model with good predictive power for the boundary fluxes. To illustrate the calibration problem, a virtual-reality multi-step outflow experiment is created using a strongly heterogeneous soil structure. An upscaled homogeneous model is then used to model the water flow in the column, and spatially sparse measurements are used for the calibration. First, it is shown how inconsistent a calibration can be if the measurements do not cover a representative volume of the structure. Second, three different external error models, which allow for a calibration that acknowledges soil structure by altering the likelihood functions, are implemented and tested. The three error models are all variable in space but constant in time; the difference between them is the amount of prior information about the soil structure. The results indicate that the use of an error model can increase the consistency of the resulting models, improving the predictive capability of the calibrated upscaled model when evaluating the fluxes over the lower boundary. The error models perform differently well depending on the amount and type of measurement error considered. The result could be useful when calibrating large-scale models where only local data are available.

Erdal, D.; Neuweiler, I.; Huisman, J. A.

2012-04-01

363

A Formula for the Standard Error of Estimate of Deviation Quotients on Short Forms of Wechsler's Scales.  

ERIC Educational Resources Information Center

A formula is presented for the standard error of estimate of Deviation Quotients (DQs). The formula is shown to perform well when used with data on short forms of two of Wechsler's scales. (Author/JAC)

Silverstein, A. B.

1985-01-01

364

Efficient recovery-based error estimation for the smoothed finite element method for smooth and singular linear elasticity  

NASA Astrophysics Data System (ADS)

An error control technique aimed at assessing the quality of smoothed finite element approximations is presented in this paper. Finite element techniques based on strain smoothing, which appeared in 2007, were shown to provide significant advantages compared to conventional finite element approximations. In particular, a widely cited strength of such methods is improved accuracy for the same computational cost. Yet, few attempts have been made to directly assess the quality of the results obtained during the simulation by evaluating an estimate of the discretization error. Here we propose a recovery-type error estimator based on an enhanced recovery technique. The salient features of the recovery are: enforcement of local equilibrium and, for singular problems, a "smooth + singular" decomposition of the recovered stress. We evaluate the proposed estimator on a number of test cases from linear elastic structural mechanics and obtain efficient error estimations whose effectivities, both at local and global levels, are improved compared to recovery procedures not implementing these features.

González-Estrada, Octavio A.; Natarajan, Sundararajan; Ródenas, Juan José; Nguyen-Xuan, Hung; Bordas, Stéphane P. A.

2013-07-01

365

Assessment and Calibration of Ultrasonic Measurement Errors in Estimating Weathering Index of Stone Cultural Heritage  

NASA Astrophysics Data System (ADS)

Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important to plan conservation and restoration. Ultrasonic measurement is one of the commonly used techniques to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically we use a portable ultrasonic device, PUNDIT, with exponential sensors. However, there are many factors that cause measurement errors, such as operators, sensor layouts or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and different sensor directions (anisotropy). Regarding operator bias, we found that there were no significant differences by operator sex, while the pressure an operator exerts can create larger errors in measurements. Calibrating with a standard sample for each operator is essential in this case. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives lower velocity than the real one. We found that the correction coefficient is slightly different for different types of rocks: 1.50 for granite and sandstone and 1.46 for marble. From the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity measurement, though they are considered isotropic at the macroscopic scale. Thus averaging measurements in four different directions (0°, 45°, 90°, 135°) gives much smaller measurement errors (the variance is 2-3 times smaller). In conclusion, we quantified the errors in ultrasonic measurement of stone cultural properties from various sources and suggested the amount of correction and procedures to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of a national R&D project hosted by the National Research Institute of Cultural Heritage of the Cultural Heritage Administration (No. NRICH-1107-B01F).
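
A minimal sketch of the two corrections described above (the coefficients are the abstract's; the velocities and function names are illustrative):

    import statistics

    CORRECTION = {"granite": 1.50, "sandstone": 1.50, "marble": 1.46}

    def corrected_velocity(indirect_velocities_m_s, rock_type):
        """Average four directional indirect readings (0, 45, 90, 135 deg)
        and scale by the rock-specific indirect-to-direct correction."""
        mean_v = statistics.mean(indirect_velocities_m_s)
        return CORRECTION[rock_type] * mean_v

    # e.g. four directional readings on a granite pagoda stone:
    print(corrected_velocity([2100.0, 2180.0, 2050.0, 2120.0], "granite"))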

Lee, Y.; Keehm, Y.

2011-12-01

366

Quasi-a priori truncation error estimation and higher order extrapolation for non-linear partial differential equations  

NASA Astrophysics Data System (ADS)

In this paper, we show how to accurately estimate the local truncation error of partial differential equations in a quasi-a priori way. We approximate the spatial truncation error using the τ-estimation procedure, which compares the discretisation on a sequence of grids with different spacing. While most works in the literature have focused on a posteriori estimation, the present work develops an estimator for non-converged solutions. First, we focus the analysis on one- and two-dimensional scalar non-linear test cases to examine the accuracy of the approach using a finite difference discretisation. Then, we extend the analysis to a two-dimensional vectorial problem: the Euler equations discretised using a finite volume vertex-based approach. Finally, we propose to analyse a direct application: τ-extrapolation based on non-converged τ-estimation. We demonstrate that a solution with improved accuracy can be obtained from a non-a posteriori error estimation approach.
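
A minimal one-dimensional sketch of the τ-estimation idea on a toy Poisson problem (illustrative only, not the paper's schemes): inject a fine-grid solution into the coarse-grid operator and read off the residual.

    import numpy as np

    def solve_poisson(n):
        """Solve -u'' = f, u(0)=u(1)=0 with second-order central differences."""
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1 - h, n)
        A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
             - np.diag(np.ones(n - 1), -1)) / h**2
        f = np.pi**2 * np.sin(np.pi * x)          # exact solution sin(pi x)
        return x, np.linalg.solve(A, f)

    n_coarse = 31
    x_f, u_f = solve_poisson(2 * n_coarse + 1)    # fine grid, h = H/2
    u_inj = u_f[1::2]                             # injection onto coarse nodes
    H = 1.0 / (n_coarse + 1)
    x_c = np.linspace(H, 1 - H, n_coarse)
    Au = (2 * u_inj - np.r_[u_inj[1:], 0] - np.r_[0, u_inj[:-1]]) / H**2
    tau = Au - np.pi**2 * np.sin(np.pi * x_c)     # tau-estimate of truncation error
    print("max |tau| ~", np.abs(tau).max())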

Fraysse, F.; Valero, E.; Rubio, G.

2013-11-01

367

Correction of systematic set-up error in breast and head and neck irradiation through a no-action level (NAL) protocol  

Microsoft Academic Search

Purpose: To quantify systematic and random patient set-up errors in breast and head and neck conventional irradiation and to evaluate a no-action level (NAL) protocol for systematic set-up error off-line correction in head and neck cancer and breast cancer patients. Material and methods: Verification electronic portal images of orthogonal set-up fields were obtained daily for the initial four consecutive fractions for 20

Eva M. Lozano; Luis A. Pérez; Javier Torres; Carmen Carrascosa; Miguel Sanz; Fermín Mendicote; Antonio Gil

2011-01-01

368

Online estimation of the target registration error for n-ocular optical tracking systems.  

PubMed

For current surgical navigation systems optical tracking is state of the art. The accuracy of these tracking systems is currently determined statically for the case of full visibility of all tracking targets. We propose a dynamic determination of the accuracy based on the visibility and geometry of the tracking setup. This real-time estimation of accuracy has a multitude of applications. For multiple-camera systems it allows line-of-sight problems to be reduced and a certain accuracy to be guaranteed. The visualization of these accuracies allows surgeons to perform procedures taking the tracking accuracy into account. It also allows engineers to design tracking setups interactively while guaranteeing a certain accuracy. Our model is an extension of the state-of-the-art models of Fitzpatrick et al. and Hoff et al. We model the error in the camera sensor plane. The error is propagated using the internal camera parameters, camera poses, tracking target poses, target geometry and marker visibility, in order to estimate the final accuracy of the tracked instrument. PMID:18044624
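
A minimal sketch of the propagation step described above, with a placeholder Jacobian standing in for the full chain of camera poses, intrinsics, target geometry and marker visibility:

    import numpy as np

    sigma_px = 0.15                                # sensor-plane noise, pixels
    cov_sensor = (sigma_px ** 2) * np.eye(2)       # per-camera 2D covariance

    # Hypothetical 3x2 Jacobian of instrument-tip position w.r.t. one
    # camera's image point; in a real system it comes from the calibration.
    J = np.array([[0.8, 0.1],
                  [0.0, 0.9],
                  [1.5, 2.0]])                     # depth row amplifies error

    cov_tip = J @ cov_sensor @ J.T                 # propagated 3D covariance
    rms_tre = np.sqrt(np.trace(cov_tip))           # scalar accuracy figure
    print("estimated target registration error (rms):", rms_tre)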

Sielhorst, Tobias; Bauer, Martin; Wenisch, Oliver; Klinker, Gudrun; Navab, Nassir

2007-01-01

369

Finding systematic errors in tomographic data: Characterising ion-trap quantum computers  

NASA Astrophysics Data System (ADS)

Quantum state tomography has become a standard tool in quantum information processing to extract information about an unknown state. Several recipes exist to post-process the data and obtain a density matrix, for instance using maximum-likelihood estimation. These evaluations, and all conclusions drawn from the density matrices, however, rely on valid data - meaning data that agrees both with the measurement model and a quantum model within statistical uncertainties. Given the wide span of possible discrepancies between the laboratory and the theory model, data ought to be tested for its validity prior to any subsequent evaluation. The presented talk will provide an overview of such tests, which are easily implemented. These will then be applied to tomographic data from an ion-trap quantum computer.

Monz, Thomas

2013-03-01

370

Kinematic GPS solutions for aircraft trajectories: Identifying and minimizing systematic height errors associated with atmospheric propagation delays  

USGS Publications Warehouse

When kinematic GPS processing software is used to estimate the trajectory of an aircraft, unless the delays imposed on the GPS signals by the atmosphere are either estimated or calibrated via external observations, then vertical height errors of decimeters can occur. This problem is clearly manifested when the aircraft is positioned against multiple base stations in areas of pronounced topography because the aircraft height solutions obtained using different base stations will tend to be mutually offset, or biased, in proportion to the elevation differences between the base stations. When performing kinematic surveys in areas with significant topography it should be standard procedure to use multiple base stations, and to separate them vertically to the maximum extent possible, since it will then be much easier to detect mis-modeling of the atmosphere. Copyright 2007 by the American Geophysical Union.

Shan, S.; Bevis, M.; Kendrick, E.; Mader, G. L.; Raleigh, D.; Hudnut, K.; Sartori, M.; Phillips, D.

2007-01-01

371

Cross versus Within-Company Cost Estimation Studies: A Systematic Review  

Microsoft Academic Search

OBJECTIVE - The objective of this paper is to determine under what circumstances individual organisations would be able to rely on cross-company-based estimation models. METHOD - We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. RESULTS - Ten papers compared cross- and within-company estimation models,

Barbara A. Kitchenham; Emilia Mendes; Guilherme Horta Travassos

2007-01-01

372

An Unbiased Estimator of the Variance of Simple Random Sampling Using Mixed Random-Systematic Sampling  

Microsoft Academic Search

Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the

Alberto Padilla

2009-01-01

373

Computerized estimation of patient setup errors in portal images based on localized pelvic templates for prostate cancer radiotherapy.  

PubMed

We have developed a computerized method for estimating patient setup errors in portal images based on localized pelvic templates for prostate cancer radiotherapy. The patient setup errors were estimated based on a template-matching technique that compared the portal image and a localized pelvic template image with a clinical target volume produced from a digitally reconstructed radiography (DRR) image of each patient. We evaluated the proposed method by calculating the residual error between the patient setup error obtained by the proposed method and the gold standard setup error determined by consensus between two radiation oncologists. Eleven training cases with prostate cancer were used for development of the proposed method, and then we applied the method to 10 test cases as a validation test. As a result, the residual errors in the anterior-posterior, superior-inferior and left-right directions were smaller than 2 mm for the validation test. The mean residual error was 2.65 ± 1.21 mm in the Euclidean distance for training cases, and 3.10 ± 1.49 mm for the validation test. There was no statistically significant difference in the residual error between the test for training cases and the validation test (P = 0.438). The proposed method appears to be robust for detecting patient setup error in the treatment of prostate cancer radiotherapy. PMID:22843375
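
A minimal sketch of the template-matching step (an illustrative brute-force normalized cross-correlation, not the authors' pipeline):

    import numpy as np

    def ncc_shift(portal, template, max_shift=10):
        """Return the (dy, dx) shift maximising normalized cross-correlation
        of `template` against same-sized windows of `portal`."""
        th, tw = template.shape
        t = (template - template.mean()) / template.std()
        best, best_shift = -np.inf, (0, 0)
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                y0, x0 = max_shift + dy, max_shift + dx
                win = portal[y0:y0 + th, x0:x0 + tw]
                w = (win - win.mean()) / (win.std() + 1e-12)
                score = np.mean(t * w)
                if score > best:
                    best, best_shift = score, (dy, dx)
        return best_shift

    rng = np.random.default_rng(1)
    template = rng.random((32, 32))                # stand-in for the DRR template
    portal = np.pad(template, 10)                  # stand-in portal; true shift (0, 0)
    print(ncc_shift(portal, template))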

Arimura, Hidetaka; Itano, Wataru; Shioyama, Yoshiyuki; Matsushita, Norimasa; Magome, Taiki; Yoshitake, Tadamasa; Anai, Shigeo; Nakamura, Katsumasa; Yoshidome, Satoshi; Yamagami, Akihiko; Honda, Hiroshi; Ohki, Masafumi; Toyofuku, Fukai; Hirata, Hideki

2012-07-26

374

Computerized estimation of patient setup errors in portal images based on localized pelvic templates for prostate cancer radiotherapy  

PubMed Central

We have developed a computerized method for estimating patient setup errors in portal images based on localized pelvic templates for prostate cancer radiotherapy. The patient setup errors were estimated based on a template-matching technique that compared the portal image and a localized pelvic template image with a clinical target volume produced from a digitally reconstructed radiography (DRR) image of each patient. We evaluated the proposed method by calculating the residual error between the patient setup error obtained by the proposed method and the gold standard setup error determined by consensus between two radiation oncologists. Eleven training cases with prostate cancer were used for development of the proposed method, and then we applied the method to 10 test cases as a validation test. As a result, the residual errors in the anterior–posterior, superior–inferior and left–right directions were smaller than 2 mm for the validation test. The mean residual error was 2.65 ± 1.21 mm in the Euclidean distance for training cases, and 3.10 ± 1.49 mm for the validation test. There was no statistically significant difference in the residual error between the test for training cases and the validation test (P = 0.438). The proposed method appears to be robust for detecting patient setup error in the treatment of prostate cancer radiotherapy.

Arimura, Hidetaka; Itano, Wataru; Shioyama, Yoshiyuki; Matsushita, Norimasa; Magome, Taiki; Yoshitake, Tadamasa; Anai, Shigeo; Nakamura, Katsumasa; Yoshidome, Satoshi; Yamagami, Akihiko; Honda, Hiroshi; Ohki, Masafumi; Toyofuku, Fukai; Hirata, Hideki

2012-01-01

375

Quantifying the sampling error in tree census measurements by volunteers and its effect on carbon stock estimates.  

PubMed

A typical way to quantify aboveground carbon in forests is to measure tree diameters and use species-specific allometric equations to estimate biomass and carbon stocks. Using "citizen scientists" to collect data that are usually time-consuming and labor-intensive can play a valuable role in ecological research. However, data validation, such as establishing the sampling error in volunteer measurements, is a crucial, but little studied, part of utilizing citizen science data. The aims of this study were to (1) evaluate the quality of tree diameter and height measurements carried out by volunteers compared to expert scientists and (2) estimate how sensitive carbon stock estimates are to these measurement sampling errors. Using all diameter data measured with a diameter tape, the volunteer mean sampling error (difference between repeated measurements of the same stem) was 9.9 mm, and the expert sampling error was 1.8 mm. Excluding sampling errors > 1 cm, the mean sampling errors were 2.3 mm (volunteers) and 1.4 mm (experts) (this excluded 14% [volunteer] and 3% [expert] of the data). The sampling error in diameter measurements had a small effect on the biomass estimates of the plots: a volunteer (expert) diameter sampling error of 2.3 mm (1.4 mm) translated into a 1.7% (0.9%) change in the biomass estimates calculated from species-specific allometric equations based upon diameter. Height sampling error depended on tree height. Including height measurements in biomass calculations compounded the sampling error markedly; the impact of volunteer sampling error on biomass estimates was ±15%, and the expert range was ±9%. Using dendrometer bands, used to measure growth rates, we calculated that the volunteer (vs. expert) sampling error was 0.6 mm (vs. 0.3 mm), which is equivalent to a difference in carbon storage of ±0.011 kg C/yr (vs. ±0.002 kg C/yr) per stem. Using a citizen science model for monitoring carbon stocks not only has benefits in educating and engaging the public in science, but as demonstrated here, can also provide accurate estimates of biomass or forest carbon stocks. PMID:23865241
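
The diameter-to-biomass sensitivity can be sketched with a power-law allometric equation (coefficients below are illustrative, not from any species table; with these numbers the sketch gives values of the same order as the 1.7% and 0.9% figures above):

    a, b = 0.1, 2.4                    # biomass = a * D**b, D in cm

    def biomass_rel_change(d_cm, err_mm):
        """Relative biomass change from a diameter error of err_mm."""
        base = a * d_cm ** b
        pert = a * (d_cm + err_mm / 10.0) ** b
        return (pert - base) / base

    # 2.3 mm (volunteer) vs 1.4 mm (expert) error on a 30 cm stem:
    print(f"volunteer: {biomass_rel_change(30, 2.3):.1%}")
    print(f"expert:    {biomass_rel_change(30, 1.4):.1%}")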

Butt, Nathalie; Slade, Eleanor; Thompson, Jill; Malhi, Yadvinder; Riutta, Terhi

2013-06-01

376

Estimation of immunization providers' activities cost, medication cost, and immunization dose errors cost in Iraq.  

PubMed

The immunization status of children is improved by interventions that increase community demand for compulsory and non-compulsory vaccines; among the most important of these are interventions related to immunization providers. The aim of this study is to evaluate the activities of immunization providers in terms of activity time and cost, to calculate the cost of immunization doses, and to determine the cost of immunization dose errors. A time-motion and cost-analysis study design was used. Five public health clinics in Mosul, Iraq participated in the study. Fifty (50) vaccine doses were required to estimate activity time and cost. The micro-costing method was used; time and cost data were collected for each immunization-related activity performed by the clinic staff. A stopwatch was used to measure the duration of activity interactions between the parents and clinic staff. The immunization service cost was calculated by multiplying the average salary per minute by the activity time in minutes. 528 immunization cards of Iraqi children were scanned to determine the number and the cost of immunization dose errors (extra immunization doses and invalid doses). The average time for child registration was 6.7 min per immunization dose, and the physician spent more than 10 min per dose. Nurses needed more than 5 min to complete child vaccination. The total cost of immunization activities was 1.67 US$ per immunization dose. The measles vaccine (fifth dose) has a lower price (0.42 US$) than all other immunization doses. The cost of a total of 288 invalid doses was 744.55 US$, and the cost of a total of 195 extra immunization doses was 503.85 US$. The time spent on physicians' activities was longer than that spent on registrars' and nurses' activities. The physician total cost was higher than the registrar and nurse costs. The total immunization cost will increase by about 13.3% owing to dose errors. PMID:22521848
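
A minimal sketch of the micro-costing arithmetic described above (dose counts and costs from the abstract; the salary rate is a hypothetical input):

    def activity_cost(minutes, salary_per_min):
        """Immunization service cost = activity time x average salary/min."""
        return minutes * salary_per_min

    print(activity_cost(6.7, 0.05))   # registration step at a hypothetical 0.05 US$/min
    per_invalid = 744.55 / 288        # ~2.59 US$ per invalid dose
    per_extra = 503.85 / 195          # ~2.58 US$ per extra immunization dose
    print(per_invalid, per_extra)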

Al-lela, Omer Qutaiba B; Bahari, Mohd Baidi; Al-abbassi, Mustafa G; Salih, Muhannad R M; Basher, Amena Y

2012-04-19

377

Parameter Estimation for Differential Equation Models Using a Framework of Measurement Error in Regression Models  

PubMed Central

Differential equation (DE) models are widely used in many scientific fields that include engineering, physics and biomedical sciences. The so-called “forward problem”, the problem of simulations and predictions of state variables for given parameter values in the DE models, has been extensively studied by mathematicians, physicists, engineers and other scientists. However, the “inverse problem”, the problem of parameter estimation based on the measurements of output variables, has not been well explored using modern statistical methods, although some least squares-based approaches have been proposed and studied. In this paper, we propose parameter estimation methods for ordinary differential equation models (ODE) based on the local smoothing approach and a pseudo-least squares (PsLS) principle under a framework of measurement error in regression models. The asymptotic properties of the proposed PsLS estimator are established. We also compare the PsLS method to the corresponding SIMEX method and evaluate their finite sample performances via simulation studies. We illustrate the proposed approach using an application example from an HIV dynamic study.

Liang, Hua

2008-01-01

378

Correction of Systematic Time-Dependent Coda-Magnitude Errors in the Utah and Yellowstone National Park Region Earthquake Catalogs, 1981-2001  

NASA Astrophysics Data System (ADS)

We have calibrated new coda-magnitude (Mc) equations for local earthquakes digitally recorded since 1981 in the Utah (UT) region and since 1984 in the Yellowstone National Park (YP) region, where the University of Utah Seismograph Stations (UUSS) operates regional seismic networks. The primary motivation for this study was the recognition of systematic time-dependent Mc-ML differences ranging up to 0.4 and 1.0 units in the UT and YP regions, respectively. The new Mc equations are of the standard form Mc = a + b log τ + dΔ, where a, b, and d are constants, Δ is epicentral distance, and τ is signal duration on a short-period vertical-component record, measured from the P-wave onset to the time that the signal drops below the noise level. The Mc equations were calibrated against local magnitudes (ML) determined from paper and synthetic Wood-Anderson records, using data from 480 UT and 204 YP earthquakes of ML 1 to 4. Improved signal duration measurements were made by (1) using a fixed noise level instead of the pre-event noise level, (2) applying instrument gain corrections using an experimentally-verified method, and (3) fixing a relatively minor coding error in UUSS software for automatically finding signal durations. To determine the constants in the Mc equations, we used an orthogonal regression method rather than linear regression. The latter produces biased results because the errors in the predictor variables (log τ and Δ) are not negligible compared to the errors in the response variable (ML). Compatibility between Mc and ML estimates in the UUSS catalogs is essential because MLs, while preferred, are consistently available only for earthquakes of ML > 4 and are unavailable for most earthquakes of M < 3. The new Mc equations, in combination with the corrections to the duration measurements, reduce average Mc-ML differences to less than 0.1 magnitude units for ML <= 4 events. The range of applicability of the new Mc equations is currently restricted to ML <= 4 because the finite record lengths of UUSS recording system triggers appear to cause underestimation of signal durations for larger earthquakes. The new Mc equations, and ML station corrections from a companion study, will be used to revise Mc and ML magnitudes in the UT and YP region earthquake catalogs for 1981-present and 1984-present, respectively. The revisions should significantly improve the homogeneity of these magnitudes, especially for M <= 4 events, allowing more accurate recurrence rate estimates and other statistical analyses.
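
The orthogonal-regression point can be illustrated with a toy sketch (synthetic data, not the UUSS calibration): when the predictor itself is noisy, ordinary least squares attenuates the slope, while total least squares recovers it.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 400
    x_true = rng.uniform(0, 10, n)
    y_true = 1.5 * x_true + 2.0
    x = x_true + rng.normal(0, 1.0, n)            # noisy predictor
    y = y_true + rng.normal(0, 1.0, n)            # noisy response

    # OLS slope is attenuated by predictor noise:
    slope_ols = np.polyfit(x, y, 1)[0]

    # TLS: the smallest right singular vector of the centred data matrix is
    # the normal of the line minimising orthogonal distances.
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    nx, ny = vt[-1]
    slope_tls = -nx / ny

    print(f"OLS slope: {slope_ols:.2f}  TLS slope: {slope_tls:.2f}  true: 1.50")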

Pechmann, J. C.; Bernier, J. C.; Nava, S. J.; Terra, F. M.; Arabasz, W. J.

2001-12-01

379

A comparative study of the correction of systematic errors in the quantitation of pyrethroids in vegetables using calibration curves prepared using standards in pure solvent  

Microsoft Academic Search

A comparative study of two mathematical approaches was performed in order to correct systematic errors due to the presence of the unexpected interferences which appear when the quantitation of the analyte in real samples is carried out with calibration curves built using standards in pure solvent. These methods consisted of establishing different mathematical expressions which transform the concentration

M. Martínez-Galera; T. López-López; M. D. Gil-García; J. L. Martínez-Vidal; D. Picón-Zamora; L. Cuadros-Rodríguez

2003-01-01

380

Using modulation transfer function for estimate measurement errors of the digital image correlation method  

NASA Astrophysics Data System (ADS)

The digital image correlation (DIC) method has been well recognized as a simple, accurate and efficient method for mechanical behavior evaluation. However, very little research has concentrated on the relationship between the characteristics of the camera lens and the measurement error of the DIC method. The modulation transfer function (MTF) has commonly been used to evaluate the resolution capability of camera lenses. In practice, when the DIC method is used, the captured images may become too blurred to analyze when the object is out of the focus of the camera lens or deviates from the line of view of the camera. In this paper, the traditional MTF calibration specimen was replaced by a pre-arranged speckle pattern on the specimen. For DIC images grabbed from several selected locations both approaching and departing from the focus of the camera lens, corresponding MTF curves were obtained from the pre-arranged speckle pattern. The displacement measurement errors of the DIC method were then estimated from the obtained MTF curves.

Wang, Wei-Chung; Hwang, Chi Hung; Chen, Yung-Hsiang; Chuang, Tzu-Hung

2013-06-01

381

An ABC estimate of pedigree error rate: application in dog, sheep and cattle breeds.  

PubMed

On the basis of correlations between pairwise individual genealogical kinship coefficients and allele sharing distances computed from genotyping data, we propose an approximate Bayesian computation (ABC) approach to assess pedigree file reliability through gene-dropping simulations. We explore the features of the method using simulated data sets and show that precision increases with the number of markers. The method is further applied to five dog breeds, four sheep breeds and one cattle breed raised in France and displaying various characteristics and population sizes, using microsatellite or SNP markers. Depending on the breed, pedigree error estimates range between 1% and 9% in dog breeds, 1% and 10% in sheep breeds, and 4% in the cattle breed. PMID:22486502

Leroy, G; Danchin-Burge, C; Palhiere, I; Baumung, R; Fritz, S; Mériaux, J C; Gautier, M

2011-09-19

382

Estimation of the GMT inter-segment piston error from deformable mirror commands  

NASA Astrophysics Data System (ADS)

The Giant Magellan Telescope (GMT) will place seven primary mirror segments of 8.4 m diameter on a common mount to form a single co-phased aperture of 25 m. High-order adaptive optics (AO) using an adaptive secondary mirror that is segmented in the same way as the primary will correct the telescope's imaging to the diffraction limit in the near infrared. Critical to the performance of the telescope will be real-time correction of atmospherically-induced optical path differences between the primary mirror segments. Measuring these errors is challenging because of the large gaps between the segments, which are approximately 30 cm even at their narrowest points, where the aberrated wavefront is not explicitly measured by the AO sensors. In this paper we show that it will be feasible to estimate the path differences between the segments from the commands sent to the adaptive secondary mirror while the AO is running in closed loop. These commands will be an approximate representation of the open-loop atmospheric wavefronts. We have investigated the value of the approach with real-time closed-loop deformable mirror command data from the first-light AO system now running on the Large Binocular Telescope (LBT). The data are of very high quality and realistically capture the spatio-temporal behavior of the wavefront. We use data from two nights to show that the GMT segment pathlength errors may be recovered to <25 nm accuracy with a simple linear estimator. Additional simulations show similar performance, which, with high-order AO, is quite adequate to maintain high Strehl ratio at near infrared wavelengths.

Hart, Michael; Briguglio, Runa; Pinna, Enrico; Puglisi, Alfio; Quiros, Fernando; Xompero, Marco

2011-09-01

383

Item Parameter Recovery, Standard Error Estimates, and Fit Statistics of the Winsteps Program for the Family of Rasch Models  

ERIC Educational Resources Information Center

This study investigates item parameter recovery, standard error estimates, and fit statistics yielded by the WINSTEPS program under the Rasch model and the rating scale model through Monte Carlo simulations. The independent variables were item response model, test length, and sample size. WINSTEPS yielded practically unbiased estimates for the…

Wang, Wen-Chung; Chen, Cheng-Te

2005-01-01

384

Reducing satellite orbit error effects in near real-time GPS zenith tropospheric delay estimation for meteorology  

Microsoft Academic Search

We investigate the influence of using IGS predicted orbits for near real-time zenith tropospheric delay determination from GPS and implement a new processing strategy that allows the use of predicted orbits with minimal degradation of the ZTD estimates. Our strategy is based on the estimation of the three Keplerian parameters that represent the main error sources in predicted orbits (semi-major

Maorong Ge; Eric Calais; Jennifer Haase

2000-01-01

385

Speech enhancement based on minimum mean-square error short-time spectral estimation and its realization  

Microsoft Academic Search

The paper focuses on the theory and realization of speech enhancement based on minimum mean-square error short-time spectral estimation. It achieves good results when enhancing speech degraded by stationary additive white Gaussian noise on the condition that only the noisy speech is available. After presenting the ordinary formula of the MMSE STSA estimator, its fatal defects in realization especially

Liu Zhibin; Xu Naiping

1997-01-01

386

An error estimation for the implicit Euler method recommended for use in the RELAP4 family of codes  

SciTech Connect

A simple estimate of the absolute value of the error as a function of the number of steps performed has been derived for the implicit (backward) Euler method for the case of a single ordinary differential equation (ODE). This estimate distinctly shows the way and the degree to which the implicit Euler method (recommended in user guides for the RELAP4 family of codes) can give more inaccurate results than the explicit (forward) method. The short and simple reasoning presented should be treated as an indication of the problem. Error estimation for a general system of ODEs is an extremely difficult and complex task, and it is still not completely solved.
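
A toy numerical check of the point (an illustrative single test ODE, not the RELAP4 setting): on u' = -5u both methods converge at first order, but here the backward-Euler error is the larger of the two.

    import math

    lam, T, n = -5.0, 1.0, 50
    h = T / n
    exact = math.exp(lam * T)

    u_fe = u_be = 1.0
    for _ in range(n):
        u_fe = u_fe * (1 + lam * h)          # explicit (forward) Euler
        u_be = u_be / (1 - lam * h)          # implicit (backward) Euler

    print(abs(u_fe - exact), abs(u_be - exact))   # backward error is larger here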

Golos, S.

1989-02-01

387

Save now, pay later? Multi-period many-objective groundwater monitoring design given systematic model errors and uncertainty  

NASA Astrophysics Data System (ADS)

This study demonstrates how many-objective long-term groundwater monitoring (LTGM) network design tradeoffs evolve across multiple management periods given systematic model errors (i.e., predictive bias), groundwater flow-and-transport forecasting uncertainties, and contaminant observation uncertainties. Our analysis utilizes the Adaptive Strategies for Sampling in Space and Time (ASSIST) framework, which is composed of three primary components: (1) bias-aware Ensemble Kalman Filtering, (2) many-objective hierarchical Bayesian optimization, and (3) interactive visual analytics for understanding spatiotemporal network design tradeoffs. A physical aquifer experiment is utilized to develop a severely challenging multi-period observation system simulation experiment (OSSE) that reflects the challenges and decisions faced in monitoring contaminated groundwater systems. The experimental aquifer OSSE shows both the influence and consequences of plume dynamics as well as alternative cost-savings strategies in shaping how LTGM many-objective tradeoffs evolve. Our findings highlight the need to move beyond least-cost, purely statistical monitoring frameworks to consider many-objective evaluations of LTGM tradeoffs. The ASSIST framework provides a highly flexible approach for measuring the value of observables that simultaneously improves how the data are used to inform decisions.

Reed, P. M.; Kollat, J. B.

2012-01-01

388

Contrast evaluation of the polarimetric images of different targets in turbid medium: possible sources of systematic errors  

NASA Astrophysics Data System (ADS)

Subsurface polarimetric (differential polarization, degree of polarization or Mueller matrix) imaging of various targets in turbid media shows image contrast enhancement compared with total intensity measurements. The image contrast depends on the target immersion depth and on both target and background medium optical properties, such as the scattering coefficient, absorption coefficient and anisotropy. The differential polarization image contrast is usually not the same for circularly and linearly polarized light. With linearly and circularly polarized light we acquired the orthogonal state contrast (OSC) images of reflecting, scattering and absorbing targets. The targets were positioned at various depths within a container filled with a polystyrene particle suspension in water. We also performed numerical Monte Carlo modelling of backscattering Mueller matrix images of the experimental set-up. Quite often the dimensions of the container, its shape and the optical properties of the container walls are not reported for similar experiments and numerical simulations. However, we found that, depending on the photon transport mean free path in the scattering medium, the above-mentioned parameters, as well as the multiple-target design, could all be sources of significant systematic errors in the evaluation of polarimetric image contrast. Thus, proper design of the experiment geometry is of prime importance in order to remove the sources of possible artefacts in the image contrast evaluation and to make a correct choice between linear and circular polarization of the light for better target detection.

Novikova, T.; Bénière, A.; Goudail, F.; de Martino, A.

2010-04-01

389

GMM and 2SLS estimation of panel data models with spatially lagged dependent variables and spatially correlated error components  

NASA Astrophysics Data System (ADS)

In this article, the GMM-based estimation of a typical family of spatial panel models with spatially lagged dependent variables and error components that are both spatially and time-wise correlated is addressed. We derive the best GMM (BGMM) estimator within a certain class of optimal GMM estimators. We also discuss the asymptotic efficiency of the BGMM estimator relative to the panel analogue of generalized spatial two-stage least squares (GS2SLS) estimators and maximum likelihood (ML) estimators. We show that, by including the GS2SLS estimator as a special case, the BGMM estimator is generally more efficient than the GS2SLS estimator and can be as efficient as the ML estimator under normality.

Zhang, Zhengyu; Bao, Shuming; Zhu, Pingfang

2007-07-01

390

Error estimate evaluation in numerical approximations of partial differential equations: A pilot study using data mining methods  

NASA Astrophysics Data System (ADS)

In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to refine the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.

Assous, Franck; Chaskalovic, Joël

2013-03-01

391

A generalized method of moments estimator for a spatial model with moving average errors, with application to real estate prices  

Microsoft Academic Search

This paper proposes a new GMM estimator for spatial regression models with moving average errors. Monte Carlo results are given which suggest that the GMM estimates are consistent and robust to non-normality, and the Bootstrap method is suggested as a way of testing the significance of the moving average parameter. The estimator is applied in a model of English real

Bernard Fingleton

2008-01-01

392

A generalized method of moments estimator for a spatial model with moving average errors, with application to real estate prices  

Microsoft Academic Search

This paper proposes a new GMM estimator for spatial regression models with moving average errors. Monte Carlo results are given which suggest that the GMM estimates are consistent and robust to non-normality, and the Bootstrap method is suggested as a way of testing the significance of the moving average parameter. The estimator is applied in a model of English real

Bernard Fingleton

393

Bounding the error on bottom estimation for multi-angle swath bathymetry sonar  

NASA Astrophysics Data System (ADS)

With the recent introduction of multi-angle swath bathymetry (MASB) sonar to the commercial marketplace (e.g., Benthos Inc., C3D sonar, 2004), additions must be made to the current sonar lexicon. The correct interpretation of measurements made with MASB sonar, which uses filled transducer arrays to compute angle-of-arrival (AOA) information from backscattered signal, is essential not only for mapping, but for applications such as statistical bottom classification. In this paper it is shown that, aside from uncorrelated channel-to-channel noise, there exists a tradeoff between effects that govern the error bounds on bottom estimation for surfaces having shallow grazing angle and surfaces distributed along a radial arc centered at the transducer. In the first case, as the bottom aligns with the radial direction to the receiver, footprint shift and shallow grazing angle effects dominate the uncertainty in physical bottom position (the surface aligns along a single AOA). Alternatively, if signal from a radial arc arrives, a single AOA is usually estimated (not necessarily at the average location of the surface). Through theoretical treatment, simulation, and field measurements, the aforementioned factors affecting MASB bottom mapping are examined. [Work supported by NSERC.]

Mullins, Geoff K.; Bird, John S.

2005-04-01

394

Assessment of floor response spectrum by parametric error estimation and its application to a spring-mounted reactor vessel assembly  

Microsoft Academic Search

For large facilities having several floors or containers, floor response spectra (FRS), other than ground response spectra, need to be developed. However, FRS can have error especially when components are not small in their masses. In this paper, error is estimated in order to specify applicability of the FRS by deriving and comparing with analytic results for two degrees of

Moon Shik Park

2007-01-01

395

Extent to which least-squares cross-validation minimises integrated square error in nonparametric density estimation  

Microsoft Academic Search

Let h0, ĥ0 and ĥC be the windows which minimise mean integrated square error, integrated square error and the least-squares cross-validatory criterion, respectively, for kernel density estimates. It is argued that ĥ0, not h0, should be the benchmark for comparing different data-driven approaches to the determination of window size. Asymptotic properties of ĥ0 - h0 and ĥC - ĥ0, and of differences between integrated

Peter Hall; James Stephen Marron

1987-01-01

396

Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)  

USGS Publications Warehouse

The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
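
A minimal sketch of the Latin Hypercube propagation idea (generic numpy/scipy, assuming scipy >= 1.7; this is not REPTool's API, and the model y = c*(x+e) is a hypothetical placeholder for a raster operation):

    import numpy as np
    from scipy.stats import qmc

    sampler = qmc.LatinHypercube(d=2, seed=0)
    u = sampler.random(n=200)                      # 200 LHS samples in [0,1)^2

    # Map to physical ranges: a model coefficient and an additive raster error.
    coef = qmc.scale(u[:, :1], 0.8, 1.2).ravel()   # coefficient uncertainty
    err = qmc.scale(u[:, 1:], -0.5, 0.5).ravel()   # spatially invariant error

    cell_value = 10.0                              # one raster cell's input
    outputs = coef * (cell_value + err)            # hypothetical model y = c*(x+e)
    print(outputs.mean(), outputs.std())           # per-cell output uncertainty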

Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

2009-01-01

397

A Systematic Review of Cross vs. Within Company Cost Estimation Studies  

Microsoft Academic Search

OBJECTIVE - The objective of this paper is to determine under what circumstances individual organisations would be able to rely on cross-company-based estimation models. METHOD - We performed a systematic review of studies that compared predictions from cross-company models with predictions from within-company models based on analysis of project data. RESULTS - Ten papers compared cross-company and within-company

Barbara Kitchenham; Emilia Mendes; Guilherme H. Travassos

398

Outage Capacity of Spectrum Sharing Cognitive Radio with Channel Estimation Errors and Feedback Delay in Rayleigh Fading Environments  

NASA Astrophysics Data System (ADS)

This paper considers a spectrum sharing cognitive radio (CR) network consisting of one secondary user (SU) and one primary user (PU) in Rayleigh fading environments. The channel state information (CSI) between the secondary transmitter (STx) and the primary receiver (PRx) is assumed to be imperfect. In particular, this CSI is assumed not only to contain channel estimation errors but also to be outdated due to feedback delay, which differs from existing work. We derive the closed-form expression for the outage capacity of the SU with this imperfect CSI under the average interference power constraint at the PU. Analytical results, confirmed by simulations, are presented to show the effect of the imperfect CSI. In particular, it is shown that the outage capacity of the SU is robust to the channel estimation errors and feedback delay for low outage probability and high channel estimation errors and feedback delay.

Xu, D.; Feng, Z.; Zhang, P.

2013-04-01

399

Systematic review and meta-analysis of antimicrobial treatment effect estimation in complicated urinary tract infection.  

PubMed

Noninferiority trial design and analyses are commonly used to establish the effectiveness of a new antimicrobial drug for treatment of serious infections such as complicated urinary tract infection (cUTI). A systematic review and meta-analysis were conducted to estimate the treatment effects of three potential active comparator drugs for the design of a noninferiority trial. The systematic review identified no placebo trials of cUTI, four clinical trials of cUTI with uncomplicated urinary tract infection as a proxy for placebo, and nine trials with reports of treatment effect estimates for doripenem, levofloxacin, or imipenem-cilastatin. In the meta-analysis, the primary efficacy endpoint of interest was the microbiological eradication rate at the test-of-cure visit in the microbiological intent-to-treat population. The estimated eradication rates and corresponding 95% confidence intervals (CI) were 31.8% (26.5% to 37.2%) for placebo, 81% (77.7% to 84.2%) for doripenem, 79% (75.9% to 82.2%) for levofloxacin, and 80.5% (71.9% to 89.1%) for imipenem-cilastatin. The treatment effect estimates were 40.5% for doripenem, 38.7% for levofloxacin, 34.7% for imipenem-cilastatin, and 40.8% overall. These treatment effect estimates can be used to inform the design and analysis of future noninferiority trials in cUTI study populations. PMID:23939900

Singh, Krishan P; Li, Gang; Mitrani-Gold, Fanny S; Kurtinecz, Milena; Wetherington, Jeffrey; Tomayko, John F; Mundy, Linda M

2013-08-12

400

Error concealment for MPEG2 video decoders with enhanced coding mode estimation  

Microsoft Academic Search

We present a novel error concealment method for MPEG-2 video decoders. Imperfect transmission of block-based compressed images may result in loss of blocks, making image degradation inevitable. In the hybrid error concealment method, both spatial and temporal error concealment repair the damaged regions through adaptive interpolation in the respective domains. In order for the hybrid method to yield good performance

Myeong-Hoon Jo; Woo-Jin Song

2000-01-01

401

Error handling strategies in multiphase inverse modeling  

SciTech Connect

Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.
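
As one concrete illustration of such a strategy (a generic robust-loss sketch in scipy, not iTOUGH2 code; the data and one-parameter model are synthetic): switching from a pure least-squares objective to a robust loss keeps outliers and non-normal residuals from dominating the parameter estimates.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)
    t = np.linspace(0, 10, 60)
    y = 2.0 * t + rng.normal(0, 0.3, t.size)
    y[::10] += 15.0                            # gross systematic outliers

    resid = lambda k: k[0] * t - y
    fit_l2 = least_squares(resid, x0=[1.0])                  # plain L2 fit
    fit_rob = least_squares(resid, x0=[1.0], loss="huber",
                            f_scale=1.0)                     # robust Huber loss
    print(fit_l2.x[0], fit_rob.x[0])           # robust fit stays near 2.0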

Finsterle, S.; Zhang, Y.

2010-12-01

402

Systematic errors in the measurement of adsorption isotherms by frontal analysis. Impact of the choice of column hold-up volume, range and density of the data points  

SciTech Connect

Besides the accuracy and the precision of the measurements of the data points, several important parameters affect the accuracy of the adsorption isotherms that are derived from the data acquired by frontal analysis (FA). The influence of these parameters is discussed. First, the effects of the width of the concentration range within which the adsorption data are measured and of the distribution of the data points in this range are investigated. Systematic elimination of parts of the data points before the calculation of the nonlinear regression of the data to the model illustrates the importance of the numbers of data points (1) within the linear range and (2) at high concentrations. The influence of the inaccuracy of the estimate of the column hold-up volume on each adsorption data point, on the selection of the isotherm model, and on the best estimates of the adsorption isotherm parameters is also stressed. Depending on the method used to measure it, the hold-up time can vary by more than 10%. The high concentration part of the adsorption isotherm is particularly sensitive to errors made on t0,exp and as a result, when the isotherm follows bi-Langmuir isotherm behavior, the equilibrium constant of the low-energy sites may change by a factor 2. This study shows that the agreement between calculated and experimental overloaded band profiles is a necessary condition to validate the choice of an adsorption model and the calculation of its numerical parameters but that this condition is not sufficient.

Gritti, Fabrice [University of Tennessee, Knoxville (UTK); Guiochon, Georges A [ORNL

2005-08-01

403

Application of asymptotic expansions for maximum likelihood estimators errors to gravitational waves from binary mergers: The single interferometer case  

SciTech Connect

In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramer-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in the noise of GW interferometers. The examples are limited to a single, optimally oriented interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts the SNR necessary to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, dimension of the parameter space and estimation errors. For example, timing via matched filtering can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.
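
For readers unfamiliar with the CRLB term used above, a minimal numerical sketch (a toy sinusoid in white Gaussian noise, illustrative only, not a GW waveform): for a signal s(theta) in noise of variance sigma^2, the Fisher matrix is J_ij = (1/sigma^2) sum_k (ds/dtheta_i)(ds/dtheta_j), and the CRLB is sqrt(diag(J^-1)).

    import numpy as np

    t = np.linspace(0, 1, 1000)
    sigma = 0.1

    def signal(amp, freq):
        return amp * np.sin(2 * np.pi * freq * t)

    amp0, freq0 = 1.0, 10.0
    eps = 1e-6                                   # central finite differences
    ds_damp = (signal(amp0 + eps, freq0) - signal(amp0 - eps, freq0)) / (2 * eps)
    ds_dfreq = (signal(amp0, freq0 + eps) - signal(amp0, freq0 - eps)) / (2 * eps)

    D = np.column_stack([ds_damp, ds_dfreq])
    fisher = D.T @ D / sigma**2
    crlb = np.sqrt(np.diag(np.linalg.inv(fisher)))
    print(crlb)                  # lower bounds on std of (amp, freq) estimates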

Zanolin, M. [Embry-Riddle Aeronautical University, 3700 Willow Creek Road, Prescott, Arizona, 86301 (United States); Vitale, S. [Embry-Riddle Aeronautical University, 3700 Willow Creek Road, Prescott, Arizona, 86301 (United States); LPTMC-Universite Pierre-et-Marie-Curie, 4 Place Jussieu, 75005 Paris (France); Makris, N. [Massachusetts Institute of Technology, 77 Mass Ave, Cambridge, Massachusetts, 02139 (United States)

2010-06-15

404

Nuclear gene phylogeography using PHASE: dealing with unresolved genotypes, lost alleles, and systematic bias in parameter estimation  

PubMed Central

Background A widely-used approach for screening nuclear DNA markers is to obtain sequence data and use bioinformatic algorithms to estimate which two alleles are present in heterozygous individuals. It is common practice to omit unresolved genotypes from downstream analyses, but the implications of this have not been investigated. We evaluated the haplotype reconstruction method implemented by PHASE in the context of phylogeographic applications. Empirical sequence datasets from five non-coding nuclear loci with gametic phase ascribed by molecular approaches were coupled with simulated datasets to investigate three key issues: (1) haplotype reconstruction error rates and the nature of inference errors, (2) dataset features and genotypic configurations that drive haplotype reconstruction uncertainty, and (3) impacts of omitting unresolved genotypes on levels of observed phylogenetic diversity and the accuracy of downstream phylogeographic analyses. Results We found that PHASE usually had very low false-positives (i.e., a low rate of confidently inferring haplotype pairs that were incorrect). The majority of genotypes that could not be resolved with high confidence included an allele occurring only once in a dataset, and genotypic configurations involving two low-frequency alleles were disproportionately represented in the pool of unresolved genotypes. The standard practice of omitting unresolved genotypes from downstream analyses can lead to considerable reductions in overall phylogenetic diversity that is skewed towards the loss of alleles with larger-than-average pairwise sequence divergences, and in turn, this causes systematic bias in estimates of important population genetic parameters. Conclusions A combination of experimental and computational approaches for resolving phase of segregating sites in phylogeographic applications is essential. We outline practical approaches to mitigating potential impacts of computational haplotype reconstruction on phylogeographic inferences. With targeted application of laboratory procedures that enable unambiguous phase determination via physical isolation of alleles from diploid PCR products, relatively little investment of time and effort is needed to overcome the observed biases.

2010-01-01

405

An Evaluation of Positioning Error Estimated by the Mesoscale Non-Hydrostatic Model  

NASA Astrophysics Data System (ADS)

We are now evaluating atmospheric parameters (equivalent zenith wet delay and linear horizontal delay gradients) derived from VLBI, GPS, and WVR by comparing with slant path delays obtained by ray-tracing through the non-hydrostatic numerical weather prediction model (NHM) with 5 km horizontal resolution. Our ultimate purpose is to establish a new method for reducing atmospheric effects on geodetic positioning. We first seek to establish the level of positioning error due to intense mesoscale phenomena such as the passing of cold fronts, heavy rainfall events, and severe storms. The NHM provides temperature, humidity and pressure values at the surface and at 38 height levels (which vary between several tens of meters and about 35 km), for each node in a 5 km by 5 km grid that covers all of central Japan and the surrounding ocean. We are performing ray tracing experiments for the entire grid at 16 epochs corresponding to successive operational runs of the NHM between 1200 UT 10/19/2000 and 1200 UT 10/20/2000. At each station-epoch we trace about 100 rays to each station with roughly uniform density (count per unit solid angle) on the upper hemisphere, so as to approximate a sampling geometry similar to both GPS and VLBI. We will present the horizontal and vertical displacement of the site position estimated from the delay residuals between the ray-traced slant delays and the anisotropic mapping function.

Ryuichi, I.; Hiromu, S.; Bevis, M.

2002-12-01

406

An estimate of the errors of the IGRF/DGRF fields 1945-2000  

NASA Astrophysics Data System (ADS)

The IGRF coefficients inevitably differ from the true values. Estimates are made of their uncertainties by comparing IGRF and DGRF models with ones produced later. For simplicity, the uncertainties are summarized in terms of the corresponding root-mean-square vector uncertainty of the field at the Earth's surface; these rms uncertainties vary from a few hundred to a few nanotesla. (It is assumed that the IGRF is meant to model the long-wavelength long-period field of internal origin, with no attempt to separate the long-wavelength fields of core and crustal origin; the models are meant for users interested in the field near and outside the Earth's surface, not for core-field theoreticians.) So far we have rounded the main-field coefficients to 1 nT; this contributes an rms vector error of about 10 nT. If we do in fact get a succession of vector magnetic field satellites then we should reconsider this rounding level. Similarly, for future DGRF models we would probably be justified in extending the truncation from n=10 to n=12. On the other hand, the rounding of the secular variation coefficients to 0.1 nT could give a false impression of accuracy.
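
The quoted ~10 nT rounding contribution can be checked with a short calculation (a sketch assuming uniformly distributed rounding errors and the Lowes-Mauersberger spectrum at the Earth's surface):

    # Rounding each main-field coefficient to 1 nT leaves a roughly uniform
    # error in [-0.5, 0.5] nT, i.e. variance 1/12 nT^2. The mean-square
    # vector field at the surface from degree-n coefficient errors is
    # (n+1) times the sum of squared errors over the 2n+1 terms of degree n.
    var = 1.0 / 12.0                                            # nT^2
    ms = sum((n + 1) * (2 * n + 1) * var for n in range(1, 11)) # n = 1..10
    print(ms ** 0.5)                                            # ~8.9 nT, i.e. about 10 nT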

Lowes, F. J.

2000-12-01

407

Earthquake relocations and location error estimates in the Puerto Rico Island  

NASA Astrophysics Data System (ADS)

We present preliminary results from relocating 2474 events in the crust and upper mantle of Puerto Rico recorded by the Puerto Rico Seismic Network (PRSN) from 1986 to 2009. This seismic activity results from the interaction of the Caribbean and North American plates, primarily along the Puerto Rico Trench. We start with the one-dimensional (1D) velocity model currently used in PRSN daily operations and obtain our preferred model by using the VELEST program. We apply the COMPLOC earthquake location package, which improves relative locations by computing source-specific station terms from arrival time residuals of nearby events. This algorithm has been tested in southern California, but is also available for use in other regions. Our results show improvements compared to the PRSN catalog locations. We also compared results using different distance cutoffs to test the reliability of our velocity model. Relative location errors are estimated by perturbing our observations with Gaussian-distributed random noise with a standard deviation appropriate for our data. Our next step is to modify the COMPLOC program so that deeper events can be robustly relocated. Waveform cross-correlation data will also be included to further refine relative event locations. This is an ongoing UM/UPRM collaborative study and our goal is to implement these techniques into routine network practice for real-time relocation at the PRSN.

Zhang, Q.; Lin, G.; López Venegas, A. M.; Huerfano, V. A.; Soto-Cordero, L.

2010-12-01

408

Potential errors in body composition as estimated by whole body scintillation counting  

SciTech Connect

Vigorous exercise has been reported to increase the apparent potassium content of athletes measured by whole body gamma ray scintillation counting of ⁴⁰K. The possibility that this phenomenon is an artifact was evaluated in three cyclists and one nonathlete after exercise on the road (cyclists) or in a room with a source of radon and radon progeny (nonathlete). The apparent ⁴⁰K content of the thighs of the athletes and whole body of the nonathlete increased after exercise. Counts were also increased in both windows detecting ²¹⁴Bi, a progeny of radon. ⁴⁰K and ²¹⁴Bi counts were highly correlated (r = 0.87, p < 0.001). The apparent increase in ⁴⁰K was accounted for by an increase in counts associated with the 1.764 MeV gamma ray emissions from ²¹⁴Bi. Thus a failure to correct for radon progeny would cause a significant error in the estimate of lean body mass by ⁴⁰K counting.

Lykken, G.I.; Lukaski, H.C.; Bolonchuk, W.W.; Sandstead, H.H.

1983-04-01

410

Wrapper feature selection for small sample size data driven by complete error estimates.  

PubMed

This paper focuses on wrapper-based feature selection for a 1-nearest-neighbor classifier. We consider in particular the case of a small sample size with a few hundred instances, which is common in biomedical applications. We propose a technique for calculating the complete bootstrap for a 1-nearest-neighbor classifier (i.e., averaging over all desired test/train partitions of the data). The complete bootstrap and the complete cross-validation error estimate, which have lower variance, are applied as novel selection criteria and are compared with the standard bootstrap and cross-validation in combination with three optimization techniques: sequential forward selection (SFS), binary particle swarm optimization (BPSO) and simplified social impact theory based optimization (SSITO). The experimental comparison based on ten datasets draws the following conclusions: for all three search methods examined here, the complete criteria are a significantly better choice than standard 2-fold cross-validation, 10-fold cross-validation and bootstrap with 50 trials, irrespective of the selected number of iterations. All the complete-criterion-based 1NN wrappers with SFS search performed better than the widely used FILTER and SIMBA methods. We also demonstrate the benefits and properties of our approaches on an important and novel real-world application: automatic detection of the subthalamic nucleus. PMID:22472029
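
To make the wrapper idea concrete, here is a minimal Python sketch of SFS around a 1-NN classifier. It scores candidate feature sets with plain leave-one-out error to stay short; the paper's complete bootstrap/cross-validation criteria instead average over all desired test/train partitions. All names below are ours.

    # Minimal 1-NN wrapper with sequential forward selection (our toy).
    import numpy as np

    def loo_1nn_error(X, y):
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)         # a sample cannot be its own neighbor
        return np.mean(y[d.argmin(axis=1)] != y)

    def sfs_1nn(X, y, n_features):
        selected, remaining = [], list(range(X.shape[1]))
        while len(selected) < n_features:
            errs = {f: loo_1nn_error(X[:, selected + [f]], y) for f in remaining}
            best = min(errs, key=errs.get)  # greedily add the best feature
            selected.append(best)
            remaining.remove(best)
        return selected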

Macaš, Martin; Lhotská, Lenka; Bakstein, Eduard; Novák, Daniel; Wild, Jiří; Sieger, Tomáš; Vostatek, Pavel; Jech, Robert

2012-04-01

411

Estimating gene gain and loss rates in the presence of error in genome assembly and annotation using CAFE 3.  

PubMed

Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss will often be misled by such errors, and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss, but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters. PMID:23709260
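
The core trick, stated generically (our paraphrase of the abstract, not CAFE's actual code), is to place an error model between true and observed family sizes and to compute the likelihood of the data by summing over the unobserved true sizes:

    # Toy version of the likelihood under annotation error (not CAFE 3).
    import numpy as np

    def observed_size_distribution(p_true, error_matrix):
        """p_true[k]: P(true family size = k) from a birth-death model of
        gene gain and loss; error_matrix[k, j]: P(observed size = j | true
        size = k), encoding assembly/annotation error. Returns P(observed
        size = j), to be evaluated at the annotated family size."""
        return p_true @ error_matrix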

Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W

2013-05-24

412

Impact of transport and modelling errors on the estimation of methane sources and sinks by inverse modelling  

NASA Astrophysics Data System (ADS)

Since the nineties, inverse modelling by assimilating atmospheric measurements into a chemical transport model (CTM) has been used to derive sources and sinks of atmospheric trace gases. More recently, the high global warming potential of methane (CH4) and unexplained variations of its atmospheric mixing ratio have caught the attention of several research groups. Indeed, the diversity and variability of methane sources induce high uncertainty in the present and future evolution of the CH4 budget. With the increase of available measurement data to constrain inversions (satellite data, high-frequency surface and tall-tower observations, FTIR spectrometry, ...), the main limiting factor is about to become the representation of atmospheric transport in CTMs. Indeed, errors in transport modelling convert directly into flux changes when perfect transport is assumed in atmospheric inversions. Hence, we propose an inter-model comparison in order to quantify the impact of transport and modelling errors on the CH4 fluxes estimated in a variational inversion framework. Several inversion experiments are conducted using the same set-up (prior emissions, measurement and prior errors, OH field, initial conditions) of the variational system PYVAR, developed at LSCE (Laboratoire des Sciences du Climat et de l'Environnement, France). Nine different models (ACTM, IFS, IMPACT, IMPACT1x1, MOZART, PCTM, TM5, TM51x1 and TOMCAT) used in the TRANSCOM-CH4 experiment (Patra et al., 2011) provide synthetic measurement data at up to 280 surface sites to constrain the inversions performed using the PYVAR system. Only the CTM (and the meteorological drivers which drive it) used to create the pseudo-observations varies among inversions. Consequently, the comparison of the nine inverted methane flux estimates obtained for 2005 gives a good order of magnitude of the impact of transport and modelling errors on the fluxes estimated with current and future networks. It is shown that transport and modelling errors lead to a discrepancy of 27 TgCH4 per year at the global scale, representing 5% of total methane emissions for 2005. At the continental scale, transport and modelling errors have bigger impacts in proportion to the area of the regions, ranging from 36 TgCH4 in North America to 7 TgCH4 in Boreal Eurasia, with percentages ranging from 23% to 48%. Thus, the contribution of transport and modelling errors to the mismatch between measurements and simulated methane concentrations is large considering the present questions on the methane budget. Moreover, diagnostics of the error statistics included in our inversions have been computed. They show that the errors contained in the measurement-error covariance matrix are underestimated in current inversions, suggesting that transport and modelling errors should be represented more properly in future inversions.
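
The mechanism by which transport error becomes flux error can be seen in a toy linear-Gaussian version of such an inversion (our sketch, not PYVAR): the posterior fluxes below attribute the whole model-data mismatch to the fluxes, so any transport bias in the observation operator is absorbed into them.

    # Toy variational/Bayesian flux inversion (our sketch, not PYVAR).
    # Minimizes J(x) = (x-xb)^T B^-1 (x-xb) + (Hx-y)^T R^-1 (Hx-y);
    # for a linear observation operator H the minimizer is closed-form.
    import numpy as np

    def invert_fluxes(xb, B, H, y, R):
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # gain matrix
        return xb + K @ (y - H @ xb)                  # posterior fluxes

    # If the pseudo-observations y were generated with a different
    # transport model H', the difference (H' - H) x shows up as a
    # spurious flux increment.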

Locatelli, Robin; Bousquet, Philippe; Chevallier, Frédéric

2013-04-01

413

Inertial Sensor-Based Methods in Walking Speed Estimation: A Systematic Review  

PubMed Central

Self-selected walking speed is an important measure of ambulation ability used in various clinical gait experiments. Inertial sensors, i.e., accelerometers and gyroscopes, have been gradually introduced to estimate walking speed. This research area has attracted a lot of attention over the past two decades, and the trend is continuing due to improvements in performance and decreases in the cost of miniature inertial sensors. With the intention of understanding the state of the art of current development in this area, a systematic review of the existing methods was conducted in the following electronic engines/databases: PubMed, ISI Web of Knowledge, SportDiscus and IEEE Xplore. Sixteen journal articles and papers in proceedings focusing on inertial-sensor-based walking speed estimation were fully reviewed. The existing methods were categorized by sensor specification, sensor attachment location, experimental design, and walking speed estimation algorithm.
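
As one concrete example of the reviewed algorithm family, a strap-down method integrates the forward acceleration of a foot- or shank-mounted sensor and resets the velocity at each detected stance phase (zero-velocity update) to limit drift. The Python sketch below is our simplified illustration, not a method from any specific reviewed paper.

    # Simplified walking-speed estimate with zero-velocity updates (our toy).
    import numpy as np

    def mean_walking_speed(acc_forward, stance, dt):
        """acc_forward: (n,) forward acceleration in m/s^2, gravity removed;
        stance: (n,) bool, True when the foot is detected as stationary;
        dt: sample interval in s. Returns a crude mean speed in m/s."""
        v = np.zeros(len(acc_forward))
        for i in range(1, len(acc_forward)):
            v[i] = 0.0 if stance[i] else v[i - 1] + acc_forward[i] * dt
        return float(np.mean(v))  # real methods average per stride instead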

Yang, Shuozhi; Li, Qingguo

2012-01-01

414

Decoding the (23,12,7) Golay code using bit-error probability estimates  

Microsoft Academic Search

The (23,12,7) Golay code is a perfect linear error-correcting code that can correct all patterns of three or fewer errors in 23 bit positions. A simple BCH decoding algorithm, given in E. Berlekamp (1968), can decode the (23,12,7) Golay code provided there are no more than two errors. The shift-search algorithm, developed by Reed et al. (1990), sequentially inverts the
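
The "perfect" property asserted above can be verified by counting: the Hamming spheres of radius 3 around the 2^12 codewords exactly tile the space of 2^23 binary words. A two-line check (ours, for illustration):

    # Verify the sphere-packing (perfect code) condition for (23,12,7).
    from math import comb

    sphere = sum(comb(23, i) for i in range(4))  # 1 + 23 + 253 + 1771 = 2048
    assert sphere * 2**12 == 2**23               # radius-3 spheres tile {0,1}^23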

Gregory O. Dubney; Irving S. Reed

2005-01-01

415

Motion Controller Design for Contour-Following Tasks Based on Real-Time Contour Error Estimation  

Microsoft Academic Search

Reduction of contour error is an important issue in contour-following applications. One common approach to this problem is to design a controller based on contour error information. However, for free-form contour-following tasks, there is a lack of effective algorithms for calculating contour errors in real time. To deal with this problem, this paper proposes a real-time contour
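
The distinction driving this work is easy to state: tracking error compares actual and reference positions at the same instant, while contour error is the distance from the actual position to the nearest point of the desired path, which for free-form contours must be estimated online. A brute-force version (our illustration, using a densely sampled contour) looks like this:

    # Tracking error vs. contour error on a sampled contour (our sketch).
    import numpy as np

    def tracking_error(p_actual, p_ref):
        return float(np.linalg.norm(p_actual - p_ref))

    def contour_error(p_actual, contour_pts):
        # distance to the nearest sampled point of the desired contour;
        # real-time methods replace this brute-force search with local
        # estimation around the current reference point
        return float(np.min(np.linalg.norm(contour_pts - p_actual, axis=1)))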

Ming-Yang Cheng; Cheng-Chien Lee

2007-01-01

416

Output feedback direct adaptive neural network control for uncertain SISO nonlinear systems using a fuzzy estimator of the control error.  

PubMed

A direct adaptive control algorithm based on neural networks (NN) is presented for a class of single-input single-output (SISO) nonlinear systems. The proposed controller is implemented without a priori knowledge of the nonlinear system, and only the output of the system is considered available for measurement. Contrary to the approaches available in the literature, in the proposed controller the updating signal used in the adaptive laws is an estimate of the control error, which is directly related to the NN weights, instead of the tracking error. A fuzzy inference system (FIS) is introduced to obtain an estimate of the control error. Without any additional control term in the NN adaptive controller, all the signals involved in the closed loop are proven to be exponentially bounded, which establishes the stability of the system. Simulation results demonstrate the effectiveness of the proposed approach. PMID:23037773
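
The structural difference from tracking-error-driven schemes can be shown in one update step. The fragment below is a heavily simplified sketch under our own assumptions (names hypothetical, no stability machinery): the NN weight vector is adapted with a gradient-type law driven by the fuzzy estimate of the control error rather than by the tracking error.

    # One simplified adaptive step (our sketch, not the paper's laws).
    import numpy as np

    def adapt_weights(w, phi, control_error_est, gamma=0.05):
        """w: NN output-layer weights; phi: NN regressor vector at the
        current state; control_error_est: control error estimated by the
        fuzzy inference system; gamma: adaptation gain."""
        return w + gamma * phi * control_error_est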

Chemachema, Mohamed

2012-09-07

417

A Priori Error Estimates For Interior Penalty Versions Of The Local Discontinuous Galerkin Method Applied To Transport Equations  

Microsoft Academic Search

The local discontinuous Galerkin method has been developed recently by Cockburn and Shu for convection-dominated convection-diffusion equations. In this paper, we present interior penalty versions of this method applied to transport equations and derive a priori error estimates. We demonstrate convergence of order k in the L1(L2) norm when polynomials of degree at least k are
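
For orientation, an a priori estimate of the kind quoted in this truncated abstract has the generic shape below (our paraphrase in LaTeX, assuming mesh size h, final time T and a sufficiently regular exact solution u; this is the typical form of such bounds, not the paper's exact statement):

    \[ \| u - u_h \|_{L^1(0,T;\,L^2(\Omega))} \le C \, h^{k} \, \| u \|_{H^{k+1}(\Omega)}, \]

where u_h is the discrete solution computed with polynomials of degree at least k and the constant C is independent of h.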

Clint Dawson; Jennifer Proft

2000-01-01

418

A Generalizability Theory Approach toward Estimating Standard Errors of Cutscores Set Using the Bookmark Standard Setting Procedure.  

ERIC Educational Resources Information Center

The Bookmark Standard Setting Procedure (Lewis, Mitzel, and Green, 1996) is an item-response-theory-based standard setting method that has been widely implemented by state testing programs. The primary purposes of this study were to: (1) estimate standard errors for cutscores that result from Bookmark standard settings under a generalizability…

Lee, Guemin; Lewis, Daniel M.

419

Estimating the effect of crop classification error on evapotranspiration derived from remote sensing in the lower Colorado River basin, USA  

Microsoft Academic Search

In the U.S. Bureau of Reclamation's Lower Colorado River Accounting System (LCRAS), crop classifications derived from remote sensing are used to calculate regional estimates of crop evapotranspiration for water monitoring and management activities on the lower Colorado River basin. The LCRAS accuracy assessment was designed to quantify the impact of crop classification error on annual total crop evapotranspiration (ETc), as

S. V. Stehman; Jeff A. Milliken

2007-01-01

420

Estimating Error in Using Ambient PM2.5Concentrations as Proxies for Personal Exposures: A Review  

EPA Science Inventory

Several methods have been used to account for measurement error inherent in using the ambient concentration of particulate matter < 2.5 µm (PM2.5, µg/m³) as a proxy for personal exposure. Common features of such methods are their reliance on the estimated ...

421

Capon and Bartlett Beamforming: Threshold Effect in Direction-of-Arrival Estimation Error and On the Probability of Resolution.

National Technical Information Service (NTIS)

Below a specific threshold signal-to-noise ratio (SNR), the mean squared error (MSE) performance of signal direction-of-arrival (DOA) estimates derived from the Capon algorithm degrades swiftly. Prediction of this threshold SNR point is of practical sig...

C. D. Richmond

2005-01-01

422

Analytical Error Estimate for the Ocean and Land Uptake of CO2 Using delta(13)C Observations in the Atmosphere.  

National Technical Information Service (NTIS)

The quantity and quality of atmospheric data pertaining to the global carbon cycle have improved to an extent that more realistic error estimates can now be attempted for regional sources and sinks of CO2 derived from such data. We present here a detailed...

White, J. W. C.; Trolier, M.; Ciais, P.; Tans, P. P.; Francey, R. J.

1995-01-01

423