Science.gov

Sample records for estimated systematic error

  1. Systematic Error Estimation for Chemical Reaction Energies.

    PubMed

    Simm, Gregor N; Reiher, Markus

    2016-06-14

    For a theoretical understanding of the reactivity of complex chemical systems, accurate relative energies between intermediates and transition states are required. Despite its popularity, density functional theory (DFT) often fails to provide sufficiently accurate data, especially for molecules containing transition metals. Due to the huge number of intermediates that need to be studied for all but the simplest chemical processes, DFT is, to date, the only method that is computationally feasible. Here, we present a Bayesian framework for DFT that allows for error estimation of calculated properties. Since the optimal choice of parameters in present-day density functionals is strongly system dependent, we advocate for a system-focused reparameterization. While, at first sight, this approach conflicts with the first-principles character of DFT that should make it, in principle, system independent, we deliberately introduce system dependence to be able to assign a stochastically meaningful error to the system-dependent parametrization, which makes it nonarbitrary. By reparameterizing a functional that was derived on a sound physical basis to a chemical system of interest, we obtain a functional that yields reliable confidence intervals for reaction energies. We demonstrate our approach on the example of catalytic nitrogen fixation.
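
    As a loose illustration of how a system-focused reparameterization can yield confidence intervals for predicted reaction energies, the sketch below resamples a single hypothetical functional parameter against benchmark data; it is a bootstrap stand-in, not the authors' Bayesian framework, and all numbers are invented.

```python
# Loose illustration (not the authors' Bayesian framework): bootstrap a
# single hypothetical functional parameter against benchmark reaction
# energies and propagate its spread to a confidence interval for a new
# reaction. All values are invented for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical reference set: DFT reaction energies and higher-level
# benchmark values for the chemical system of interest (kcal/mol).
dft_ref = np.array([-12.1, 8.4, -25.3, 3.9, -17.6])
bench_ref = np.array([-10.8, 9.5, -23.1, 4.6, -15.9])

def reparameterized(e_dft, c):
    """Toy one-parameter 'functional': rescale the DFT energy by c."""
    return c * e_dft

# Bootstrap the reference set to obtain a distribution for the parameter c.
cs = []
for _ in range(2000):
    idx = rng.integers(0, len(dft_ref), len(dft_ref))
    x, y = dft_ref[idx], bench_ref[idx]
    cs.append(np.sum(x * y) / np.sum(x * x))  # least-squares scale factor
cs = np.array(cs)

# Propagate the parameter uncertainty to a new reaction of interest.
e_new_dft = -31.0
pred = reparameterized(e_new_dft, cs)
lo, hi = np.percentile(pred, [2.5, 97.5])
print(f"predicted energy {pred.mean():.1f} kcal/mol, 95% interval [{lo:.1f}, {hi:.1f}]")
```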

  2. Optimal input design for aircraft instrumentation systematic error estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1991-01-01

    A new technique for designing optimal flight test inputs for accurate estimation of instrumentation systematic errors was developed and demonstrated. A simulation model of the F-18 High Angle of Attack Research Vehicle (HARV) aircraft was used to evaluate the effectiveness of the optimal input compared to input recorded during flight test. Instrumentation systematic error parameter estimates and their standard errors were compared. It was found that, for a fixed input duration, the optimal input design improved the error parameter estimates and their accuracies. Pilot acceptability of the optimal input design was demonstrated using a six degree-of-freedom fixed base piloted simulation of the F-18 HARV. The technique described in this work provides a practical, optimal procedure for designing inputs for data compatibility experiments.

  3. Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Garcia, P.

    2016-04-01

    Wave front sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994) and are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold centre of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005) and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold centre of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching. The method is realized in two steps to improve its
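
    The pixel-locking bias discussed above originates in the sub-pixel interpolation step. The sketch below, a minimal stand-alone example rather than the authors' code, shows the three-point parabola fit applied to a 1-D correlation curve; the Gaussian test peak is invented for illustration.

```python
# Minimal sketch (not the authors' code): three-point parabolic sub-pixel
# interpolation of a cross-correlation peak, the step where pixel locking
# (bias toward integer shifts) arises.
import numpy as np

def subpixel_peak(corr):
    """Return the location of the peak of `corr` with sub-pixel precision."""
    i = int(np.argmax(corr))              # integer-pixel peak position
    if i == 0 or i == len(corr) - 1:
        return float(i)                   # no neighbours to fit against
    ym, y0, yp = corr[i - 1], corr[i], corr[i + 1]
    denom = ym - 2.0 * y0 + yp
    if denom == 0.0:
        return float(i)
    # Vertex of the parabola through the three samples around the peak.
    return i + 0.5 * (ym - yp) / denom

# Toy correlation curve: a Gaussian peak centred at 10.3 pixels.
x = np.arange(21)
corr = np.exp(-0.5 * ((x - 10.3) / 1.5) ** 2)
print(subpixel_peak(corr))   # close to 10.3, with a small systematic bias
```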

  4. Systematic Errors in Low-latency Gravitational Wave Parameter Estimation Impact Electromagnetic Follow-up Observations

    NASA Astrophysics Data System (ADS)

    Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky

    2016-03-01

    Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects’ spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by > 5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ∼ 2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.

  5. Systematic estimation of forecast and observation error covariances in four-dimensional data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, D. P.; Cohn, S. E.; Ghil, M.

    1985-01-01

    A two-part algorithm is presented for reliably computing weather forecast model and observational error covariances during data assimilation. Data errors arise from instrumental inaccuracies and sub-grid scale variability, whereas forecast errors occur because of modeling errors and the propagation of previous analysis errors. A Kalman filter is defined as the primary algorithm for estimating the forecast and analysis error covariance matrices. A second algorithm is described for quantifying the noise covariance matrices of any degree to obtain accurate values for the observational error covariances. Numerical results are provided from a linearized one-dimensional shallow-water model. The results cover observational noise covariances, initial instrumental errors and erroneous model values.
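
    As a reminder of the covariance propagation such a scheme builds on, the sketch below runs a toy forecast/analysis cycle of the textbook Kalman filter covariance equations; the model, observation operator and noise levels are invented, and this is not the authors' two-part algorithm.

```python
# Textbook sketch (not the authors' two-part algorithm): propagation and
# update of forecast/analysis error covariances in a linear Kalman filter.
# All matrices below are invented toy values.
import numpy as np

n = 3                                                # state dimension
M = np.eye(n) + 0.1 * np.diag(np.ones(n - 1), k=1)   # linear forecast model
H = np.array([[1.0, 0.0, 0.0]])                      # observe first variable
Q = 0.05 * np.eye(n)                                 # model-error covariance
R = np.array([[0.2]])                                # observation-error covariance

Pa = np.eye(n)                                       # initial analysis covariance
for _ in range(10):
    Pf = M @ Pa @ M.T + Q                            # forecast error covariance
    S = H @ Pf @ H.T + R                             # innovation covariance
    K = Pf @ H.T @ np.linalg.inv(S)                  # Kalman gain
    Pa = (np.eye(n) - K @ H) @ Pf                    # analysis error covariance

print(np.diag(Pa))                                   # analysis error variances
```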

  6. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    DOE PAGES

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; et al

    2015-05-11

    This study presents the first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
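
    The closing statement about additive biases is usually expressed through the standard linear shear-bias model; the sketch below, an illustration with invented numbers rather than the GREAT3 pipeline, fits the multiplicative bias m and additive bias c from simulated input and recovered shears.

```python
# Illustration (not the GREAT3 pipeline): fit the standard shear-bias model
#   g_obs = (1 + m) * g_true + c
# to recover multiplicative (m) and additive (c) biases from simulations.
import numpy as np

rng = np.random.default_rng(1)
g_true = rng.uniform(-0.05, 0.05, 500)            # input shears (one component)
m_in, c_in = 0.01, 2e-4                           # hypothetical biases
g_obs = (1.0 + m_in) * g_true + c_in + rng.normal(0.0, 1e-3, g_true.size)

A = np.vstack([g_true, np.ones_like(g_true)]).T   # design matrix [g_true, 1]
(slope, c_fit), *_ = np.linalg.lstsq(A, g_obs, rcond=None)
print(f"m = {slope - 1.0:.4f}, c = {c_fit:.2e}")
```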

  7. GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology

    SciTech Connect

    Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngole Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean -Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe

    2015-05-11

    This study presents the first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods’ results support the simple model in which additive shear biases depend linearly on PSF ellipticity.

  8. Estimation of Systematic Errors for Deuteron Electric Dipole Moment Search at COSY

    NASA Astrophysics Data System (ADS)

    Chekmenev, Stanislav

    2016-02-01

    An experimental method aimed at finding a permanent EDM of a charged particle was proposed by the JEDI (Jülich Electric Dipole moment Investigations) collaboration. EDMs can be observed through their influence on spin motion. The only possible way to perform a direct measurement is to use a storage ring. For this purpose, it was decided to carry out a first precursor experiment at the Cooler Synchrotron (COSY). Since the EDM of a particle violates CP invariance, it is expected to be tiny; the treatment of all the various sources of systematic errors must therefore be done with a great level of precision. One should clearly understand how misalignments of the magnets affect the beam and the spin motion. It is planned to use an RF Wien filter for the precursor experiment. In this paper, the simulations of the systematic effects for the RF Wien filter method are discussed.

  9. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-06-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating, along with these, weekly average range errors for each of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, in their treatment, or in both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  10. Protecting weak measurements against systematic errors

    NASA Astrophysics Data System (ADS)

    Pang, Shengshi; Alonso, Jose Raul Gonzalez; Brun, Todd A.; Jordan, Andrew N.

    2016-07-01

    In this work, we consider the systematic error of quantum metrology by weak measurements under decoherence. We derive the systematic error of maximum likelihood estimation in general to the first-order approximation of a small deviation in the probability distribution and study the robustness of standard weak measurement and postselected weak measurements against systematic errors. We show that, with a large weak value, the systematic error of a postselected weak measurement when the probe undergoes decoherence can be significantly lower than that of a standard weak measurement. This indicates another advantage of weak-value amplification in improving the performance of parameter estimation. We illustrate the results by an exact numerical simulation of decoherence arising from a bosonic mode and compare it to the first-order analytical result we obtain.
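
    For readers unfamiliar with the kind of first-order expansion mentioned above, the generic bias of a maximum-likelihood estimate under a small perturbation of the sampling distribution can be written as follows; the notation is introduced here for illustration and is not taken from the paper.

```latex
% Generic first-order bias of a maximum-likelihood estimate when the data
% follow p(x|\theta_0) + \epsilon\,\delta p(x) instead of p(x|\theta_0)
% (notation introduced here for illustration only).
\begin{equation}
  \Delta\theta \;\approx\; \epsilon\, I(\theta_0)^{-1}
  \int \delta p(x)\,
  \left.\frac{\partial \ln p(x\,|\,\theta)}{\partial \theta}\right|_{\theta_0} dx,
  \qquad
  I(\theta_0) = \int p(x\,|\,\theta_0)
  \left(\left.\frac{\partial \ln p(x\,|\,\theta)}{\partial \theta}\right|_{\theta_0}\right)^{2} dx .
\end{equation}
```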

  11. Estimating Bias Error Distributions

    NASA Technical Reports Server (NTRS)

    Liu, Tian-Shu; Finley, Tom D.

    2001-01-01

    This paper formulates the general methodology for estimating the bias error distribution of a device in a measuring domain from less accurate measurements when a minimal number of standard values (typically two values) are available. A new perspective is that the bias error distribution can be found as a solution of an intrinsic functional equation in a domain. Based on this theory, the scaling- and translation-based methods for determining the bias error distribution are developed. These methods are virtually applicable to any device as long as the bias error distribution of the device can be sufficiently described by a power series (a polynomial) or a Fourier series in a domain. These methods have been validated through computational simulations and laboratory calibration experiments for a number of different devices.

  12. A statistical analysis of systematic errors in temperature and ram velocity estimates from satellite-borne retarding potential analyzers

    SciTech Connect

    Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R.

    2009-05-15

    The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.

  13. Systematic error analysis of rotating coil using computer simulation

    SciTech Connect

    Li, Wei-chuan; Coles, M.

    1993-04-01

    This report describes a study of the systematic and random measurement uncertainties of magnetic multipoles which are due to construction errors, rotational speed variation, and electronic noise in a digitally bucked tangential coil assembly with dipole bucking windings. The sensitivities of the systematic multipole uncertainty to construction errors are estimated analytically and using a computer simulation program.

  14. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  15. Measuring Systematic Error with Curve Fits

    ERIC Educational Resources Information Center

    Rupright, Mark E.

    2011-01-01

    Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…

  16. Treatment of systematic errors in land data assimilation systems

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of lan...

  17. Antenna pointing systematic error model derivations

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.; Lansing, F. L.; Riggs, R.

    1987-01-01

    The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.

  18. Systematic errors in long baseline oscillation experiments

    SciTech Connect

    Harris, Deborah A.; /Fermilab

    2006-02-01

    This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.

  19. Systematic errors in strong lens modeling

    NASA Astrophysics Data System (ADS)

    Johnson, Traci Lin; Sharon, Keren; Bayliss, Matthew B.

    2015-08-01

    The lensing community has made great strides in quantifying the statistical errors associated with strong lens modeling. However, we are just now beginning to understand the systematic errors. Quantifying these errors is pertinent to Frontier Fields science, as number counts and luminosity functions are highly sensitive to the value of the magnifications of background sources across the entire field of view. We are aware that models can be very different when modelers change their assumptions about the parameterization of the lensing potential (i.e., parametric vs. non-parametric models). However, models built while utilizing a single methodology can lead to inconsistent outcomes for different quantities, distributions, and qualities of redshift information regarding the multiple images used as constraints in the lens model. We investigate how varying the number of multiple image constraints and available redshift information of those constraints (e.g., spectroscopic vs. photometric vs. no redshift) can influence the outputs of our parametric strong lens models, specifically, the mass distribution and magnifications of background sources. We make use of the simulated clusters by M. Meneghetti et al. and the first two Frontier Fields clusters, which have a high number of multiply imaged galaxies with spectroscopically-measured redshifts (or input redshifts, in the case of simulated clusters). This work will not only inform Frontier Fields science, but also work on the growing collection of strong-lensing galaxy clusters, most of which are less massive, are capable of lensing only a handful of galaxies, and are more prone to these systematic errors.

  20. Control by model error estimation

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Skelton, R. E.

    1976-01-01

    Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting to the original system of equations an 'error system' which is designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).

  1. Reducing Systematic Error in Weak Lensing Cluster Surveys

    NASA Astrophysics Data System (ADS)

    Utsumi, Yousuke; Miyazaki, Satoshi; Geller, Margaret J.; Dell'Antonio, Ian P.; Oguri, Masamune; Kurtz, Michael J.; Hamana, Takashi; Fabricant, Daniel G.

    2014-05-01

    Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ-signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ~3 deg2. Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg2, where we expect ~2000 peaks based on our Subaru fields. Based in part on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan.

  2. The "diagonal effect": a systematic error in oblique antisaccades.

    PubMed

    Koehn, John D; Roy, Elizabeth; Barton, Jason J S

    2008-08-01

    Antisaccades are known to show greater variable error and also a systematic hypometria in their amplitude compared with visually guided prosaccades. In this study, we examined whether their accuracy in direction (as opposed to amplitude) also showed a systematic error. We had human subjects perform prosaccades and antisaccades to goals located at a variety of polar angles. In the first experiment, subjects made prosaccades or antisaccades to one of eight equidistant locations in each block, whereas in the second, they made saccades to one of two equidistant locations per block. In the third, they made antisaccades to one of two locations at different distances but with the same polar angle in each block. Regardless of block design, the results consistently showed a saccadic systematic error, in that oblique antisaccades (but not prosaccades) requiring unequal vertical and horizontal vector components were deviated toward the 45 degrees diagonal meridians. This finding could not be attributed to range effects in either Cartesian or polar coordinates. A perceptual origin of the diagonal effect is suggested by similar systematic errors in other studies of memory-guided manual reaching or perceptual estimation of direction, and may indicate a common spatial bias when there is uncertain information about spatial location.

  3. Error estimation for ORION baseline vector determination

    NASA Technical Reports Server (NTRS)

    Wu, S. C.

    1980-01-01

    Effects of error sources on Operational Radio Interferometry Observing Network (ORION) baseline vector determination are studied. Partial derivatives of delay observations with respect to each error source are formulated. Covariance analysis is performed to estimate the contribution of each error source to baseline vector error. System design parameters such as antenna sizes, system temperatures and provision for dual frequency operation are discussed.

  4. Medication Errors in the Southeast Asian Countries: A Systematic Review

    PubMed Central

    Salmasi, Shahrzad; Khan, Tahir Mehmood; Hong, Yet Hoi; Ming, Long Chiau; Wong, Tin Wui

    2015-01-01

    Background Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed systematically to identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. Methods The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any languages) that investigated the incidence and the contributing factors of ME in patients of all ages. Results The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There was no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three were done on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. Discussion The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart, were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region which needs to be addressed if the issue of ME is to be fully understood and addressed. PMID:26340679

  5. Adjoint Error Estimation for Linear Advection

    SciTech Connect

    Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S

    2011-03-30

    An a posteriori error formula is described when a statistical measurement of the solution to a hyperbolic conservation law in 1D is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to an arbitrary accuracy given a sufficiently well resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that show support of the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
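
    The kind of adjoint-weighted error representation described above can be summarized, in generic form and with notation introduced here rather than taken from the report, as:

```latex
% Generic adjoint-weighted residual form of an a posteriori error estimate
% for a functional J(u); notation introduced here for illustration only.
% With u_h the finite volume approximation, R(u_h) the residual of the
% forward problem, and \phi the adjoint (dual) solution,
\begin{equation}
  J(u) - J(u_h) \;\approx\; -\int_0^T \!\!\int_\Omega \phi \, R(u_h) \, dx \, dt ,
\end{equation}
% and a computable estimate follows by replacing \phi with a sufficiently
% well-resolved approximation \phi_h of the adjoint solution.
```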

  6. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  7. More on Systematic Error in a Boyle's Law Experiment

    NASA Astrophysics Data System (ADS)

    McCall, Richard P.

    2012-01-01

    A recent article in The Physics Teacher describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  8. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the power production leveled cost or the investment time return. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
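
    The propagation step described above can be illustrated with a toy power curve (a generic cubic between cut-in and rated speed, not one of the paper's 28 Lagrange-fitted curves); all numbers below are invented.

```python
# Illustration only (toy power curve, not the paper's 28 fitted curves):
# propagate a wind-speed measurement error through a turbine power curve.
import numpy as np

def power_curve(v):
    """Toy curve: cubic between cut-in (3 m/s) and rated speed (12 m/s)."""
    rated = 2000.0                                        # kW (hypothetical)
    p = np.where(v < 3.0, 0.0, rated * ((v - 3.0) / 9.0) ** 3)
    return np.clip(p, 0.0, rated)

v = np.array([4.5, 6.2, 7.8, 9.1, 11.4, 13.0])            # measured speeds (m/s)
rel_err = 0.10                                            # 10% speed error

p_nominal = power_curve(v)
p_high = power_curve(v * (1 + rel_err))
p_low = power_curve(v * (1 - rel_err))

power_err = (p_high.sum() - p_low.sum()) / (2 * p_nominal.sum())
print(f"propagated relative power error: {power_err:.1%}")
```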

  9. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the power production leveled cost or the investment time return. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  11. Quality assessment of speckle patterns for DIC by consideration of both systematic errors and random errors

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren

    2016-11-01

    The performance of digital image correlation (DIC) is significantly influenced by the quality of the speckle patterns. Thus, it is crucial to present a valid and practical method to assess the quality of speckle patterns. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, it is proposed to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of the squares of the systematic error and the random error. Two performance evaluation parameters, respectively the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and the correctness of this algorithm is verified by numerical experiments for both one-dimensional signals and actual speckle images. The influences of the correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid because it considers both measurement accuracy and precision.
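
    Written out, the total error and the two evaluation parameters described above take the following form (notation introduced here for illustration):

```latex
% RMSE combining systematic and random errors at a point x, and the two
% global quality parameters (maximum and quadratic mean); notation ours.
\begin{equation}
  \mathrm{RMSE}(\mathbf{x}) = \sqrt{e_{\mathrm{sys}}^{2}(\mathbf{x}) + e_{\mathrm{rand}}^{2}(\mathbf{x})},
  \qquad
  Q_{\max} = \max_{\mathbf{x}} \mathrm{RMSE}(\mathbf{x}),
  \qquad
  Q_{\mathrm{qm}} = \sqrt{\tfrac{1}{N}\textstyle\sum_{i=1}^{N} \mathrm{RMSE}^{2}(\mathbf{x}_i)} .
\end{equation}
```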

  12. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  13. Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data

    PubMed Central

    Hahn, Seungsoo; Kim, Dongsup

    2015-01-01

    Chromosome conformation capture (3C)-based techniques have recently been used to uncover the mystic genomic architecture in the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and different local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparison of the results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, using different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, the improved correlation of the latter case caused by systematic errors indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for the maximum reproducibility of contact frequency maps. PMID:26717152

  14. Improved Systematic Pointing Error Model for the DSN Antennas

    NASA Technical Reports Server (NTRS)

    Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.

    2011-01-01

    New pointing models have been developed for large reflector antennas whose construction is founded on elevation over azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antennas' subnet for corrections of their systematic pointing errors; it achieved significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translate to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides a new enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
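
    To make the idea of fitting such physical terms concrete, the sketch below performs a least-squares fit of a few simplified classical pointing terms to synthetic cross-elevation offsets; it is not JPL's pointing software, and the terms, coefficients and noise level are invented.

```python
# Sketch (not JPL's pointing software): least-squares calibration of a few
# simplified classical pointing-model terms from synthetic pointing offsets.
import numpy as np

rng = np.random.default_rng(2)
az = np.radians(rng.uniform(0, 360, 200))             # azimuths of observations
el = np.radians(rng.uniform(10, 85, 200))             # elevations of observations

# Design matrix: constant offset plus two azimuth-axis tilt components
# (a simplified subset of the physical terms listed above).
A = np.column_stack([
    np.ones_like(az),
    np.sin(az) * np.tan(el),
    np.cos(az) * np.tan(el),
])

truth = np.array([0.010, 0.004, -0.002])              # degrees (hypothetical)
offsets = A @ truth + rng.normal(0, 0.001, az.size)   # observed pointing offsets

coeffs, *_ = np.linalg.lstsq(A, offsets, rcond=None)
print(coeffs)                                         # recovered coefficients (deg)
```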

  15. A Systematic Error Correction Method for TOVS Radiances

    NASA Technical Reports Server (NTRS)

    Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
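
    One common way to realize such an air-mass-dependent correction, shown below as a sketch and not necessarily the scheme used by the authors, is to regress observed-minus-calculated radiances on air-mass predictors and subtract the fit; all predictors and numbers here are invented.

```python
# Sketch of an air-mass-dependent radiance bias correction by regression on
# predictors; a common approach, not necessarily the authors' exact scheme.
# All predictors and values are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n = 500
thick_low = rng.normal(6.0, 0.3, n)    # hypothetical lower-tropospheric thickness proxy
thick_high = rng.normal(11.0, 0.4, n)  # hypothetical stratospheric thickness proxy
X = np.column_stack([np.ones(n), thick_low, thick_high])

# Observed-minus-calculated brightness temperatures with an air-mass bias.
omc = (0.4 - 0.15 * (thick_low - 6.0) + 0.08 * (thick_high - 11.0)
       + rng.normal(0, 0.2, n))

beta, *_ = np.linalg.lstsq(X, omc, rcond=None)   # bias-correction coefficients
corrected = omc - X @ beta
print(f"mean bias before: {omc.mean():+.3f} K, after: {corrected.mean():+.3f} K")
```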

  16. Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2008-06-01

    The characterization of anisotropic materials and complex systems by ellipsometry has pushed the design of instruments to require the measurement of the full reflection Mueller matrix of the sample with a great precision. Therefore Mueller matrix ellipsometers have emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic errors can induce distorted analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors.

  17. Strategies for minimizing the impact of systematic errors on land data assimilation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Data assimilation concerns itself primarily with the impact of random stochastic errors on state estimation. However, the developers of land data assimilation systems are commonly faced with systematic errors arising from both the parameterization of a land surface model and the need to pre-process ...

  18. Note on apparent systematic and periodic errors in Geosat orbits

    NASA Technical Reports Server (NTRS)

    Sirkes, Ziv; Wunsch, Carl

    1990-01-01

    Apparent errors in Geosat orbits are estimated directly from the measurements. There are technical difficulties in such estimates from quasi-periodically gapped data. The dominant orbit errors display a line spectrum, in which the once/orbit error peak is split in a complex way into a series of narrow lines, with other errors being present as well. The spatial pattern of the errors is not random, displaying differences between mean ascending and descending orbits which are coherent over thousands of kilometers. Orbit errors do not decorrelate within a few orbit periods.

  19. Reducing Measurement Error in Student Achievement Estimation

    ERIC Educational Resources Information Center

    Battauz, Michela; Bellio, Ruggero; Gori, Enrico

    2008-01-01

    The achievement level is a variable measured with error, that can be estimated by means of the Rasch model. Teacher grades also measure the achievement level but they are expressed on a different scale. This paper proposes a method for combining these two scores to obtain a synthetic measure of the achievement level based on the theory developed…

  20. Tackling systematic errors in quantum logic gates with composite rotations

    SciTech Connect

    Cummins, Holly K.; Llewellyn, Gavin; Jones, Jonathan A.

    2003-04-01

    We describe the use of composite rotations to combat systematic errors in single-qubit quantum logic gates and discuss three families of composite rotations which can be used to correct off-resonance and pulse length errors. Although developed and described within the context of nuclear magnetic resonance quantum computing, these sequences should be applicable to any implementation of quantum computation.
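
    A small numerical illustration of the idea (using the BB1 family, whose phase is taken from the composite-pulse literature; this is not a reproduction of the paper's NMR implementation) compares the fidelity of a naive pulse and a composite sequence under a pulse-length error:

```python
# Numerical illustration (not the paper's NMR implementation): the BB1
# composite sequence suppresses pulse-length errors relative to a single
# naive pulse. Phase convention follows the composite-pulse literature.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def pulse(theta, phi, eps=0.0):
    """Rotation by theta*(1+eps) about an axis at angle phi in the xy-plane."""
    a = 0.5 * theta * (1.0 + eps)
    axis = np.cos(phi) * sx + np.sin(phi) * sy
    return np.cos(a) * np.eye(2) - 1j * np.sin(a) * axis

def fidelity(u, v):
    return abs(np.trace(u.conj().T @ v)) / 2.0

theta = np.pi / 2                        # target 90-degree rotation about x
target = pulse(theta, 0.0)
phi1 = np.arccos(-theta / (4 * np.pi))   # BB1 phase for this target angle

for eps in (0.05, 0.10):                 # fractional pulse-length errors
    naive = pulse(theta, 0.0, eps)
    bb1 = (pulse(np.pi, phi1, eps) @ pulse(2 * np.pi, 3 * phi1, eps)
           @ pulse(np.pi, phi1, eps) @ pulse(theta, 0.0, eps))
    print(f"eps={eps:.2f}: naive F={fidelity(target, naive):.6f}, "
          f"BB1 F={fidelity(target, bb1):.6f}")
```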

  1. Systematic Parameter Errors in Inspiraling Neutron Star Binaries

    NASA Astrophysics Data System (ADS)

    Favata, Marc

    2014-03-01

    The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled.

  2. Newborn screening for inborn errors of metabolism: a systematic review.

    PubMed

    Seymour, C A; Thomason, M J; Chalmers, R A; Addison, G M; Bain, M D; Cockburn, F; Littlejohns, P; Lord, J; Wilcox, A H

    1997-01-01

    OBJECTIVES. To establish a database of literature and other evidence on neonatal screening programmes and technologies for inborn errors of metabolism. To undertake a systematic review of the data as a basis for evaluation of newborn screening for inborn errors of metabolism. To prepare an objective summary of the evidence on the appropriateness and need for various existing and possible neonatal screening programmes for inborn errors of metabolism in relation to the natural history of these diseases. To identify gaps in existing knowledge and make recommendations for required primary research. To make recommendations for the future development and organisation of neonatal screening for inborn errors of metabolism in the UK. HOW THE RESEARCH WAS CONDUCTED. There were three parts to the research. A systematic review of the literature on inborn errors of metabolism, neonatal screening programmes, new technologies for screening and economic factors. Inclusion and exclusion criteria were applied, and a working database of relevant papers was established. All selected papers were read by two or three experts and were critically appraised using a standard format. Seven criteria for a screening programme, based on the principles formulated by Wilson and Jungner (WHO, 1968), were used to summarise the evidence. These were as follows. Clinically and biochemically well-defined disorder. Known incidence in populations relevant to the UK. Disorder associated with significant morbidity or mortality. Effective treatment available. Period before onset during which intervention improves outcome. Ethical, safe, simple and robust screening test. Cost-effectiveness of screening. A questionnaire which was sent to all newborn screening laboratories in the UK. Site visits to assess new methodologies for newborn screening. The classical definition of an inborn error of metabolism was used (i.e., a monogenic disease resulting in deficient activity in a single enzyme in a pathway of

  3. Jason-2 systematic error analysis in the GPS derived orbits

    NASA Astrophysics Data System (ADS)

    Melachroinos, S.; Lemoine, F. G.; Zelensky, N. P.; Rowlands, D. D.; Luthcke, S. B.; Chinn, D. S.

    2011-12-01

    Several results related to global or regional sea level changes still too often rely on the assumption that orbit errors coming from station coordinates adoption can be neglected in the total error budget (Cerri et al. 2010). In particular, instantaneous crust-fixed coordinates are obtained by adding to the linear ITRF model the geophysical high-frequency variations. In principle, geocenter motion should also be included in this computation, in order to reference these coordinates to the center of mass of the whole Earth. This correction is currently not applied when computing GDR orbits. Cerri et al. (2010) performed an analysis of systematic errors common to all coordinates along the North/South direction, as this type of bias, also known as Z-shift, has a clear impact on MSL estimates due to the unequal distribution of continental surface in the northern and southern hemispheres. The goal of this paper is to specifically study the main source of errors which comes from the current imprecision in the Z-axis realization of the frame. We focus here on the time variability of this Z-shift, which we can decompose in a drift and a periodic component due to the presumably omitted geocenter motion. A series of Jason-2 GPS-only orbits have been computed at NASA GSFC, using both IGS05 and IGS08. These orbits have been shown to agree radially at less than 1 cm RMS vs our SLR/DORIS std0905 and std1007 reduced-dynamic orbits and in comparison with orbits produced by other analysis centers (Melachroinos et al. 2011). Our GPS-only JASON-2 orbit accuracy is assessed using a number of tests including analysis of independent SLR and altimeter crossover residuals, orbit overlap differences, and direct comparison to orbits generated at GSFC using SLR and DORIS tracking, and to orbits generated externally at other centers. Tests based on SLR-crossover residuals provide the best performance indicator for independent validation of the NASA/GSFC GPS-only reduced dynamic orbits. Reduced

  4. Treatment of systematic errors in land data assimilation systems

    NASA Astrophysics Data System (ADS)

    Crow, W. T.; Yilmaz, M.

    2012-12-01

    Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of land surface states. Such differences are commonly resolved prior to data assimilation through implementation of a pre-processing rescaling step whereby observations are scaled (or non-linearly transformed) to somehow "match" comparable predictions made by an assimilation model. While the rationale for removing systematic differences in means (i.e., bias) between models and observations is well-established, relatively little theoretical guidance is currently available to determine the appropriate treatment of higher-order moments during rescaling. This talk presents a simple analytical argument to define an optimal linear-rescaling strategy for observations prior to their assimilation into a land surface model. While a technique based on triple collocation theory is shown to replicate this optimal strategy, commonly-applied rescaling techniques (e.g., so called "least-squares regression" and "variance matching" approaches) are shown to represent only sub-optimal approximations to it. Since the triple collocation approach is likely infeasible in many real-world circumstances, general advice for deciding between various feasible (yet sub-optimal) rescaling approaches will be presented with an emphasis of the implications of this work for the case of directly assimilating satellite radiances. While the bulk of the analysis will deal with linear rescaling techniques, its extension to nonlinear cases will also be discussed.
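
    As a toy contrast between two of the feasible strategies mentioned above, the sketch below rescales a biased observation series against a model series using variance matching and least-squares regression; the soil-moisture-like numbers are invented and this is not the authors' analysis.

```python
# Toy contrast (not the authors' analysis): variance matching vs. least-
# squares regression rescaling of an observation series before assimilation.
# All series below are synthetic.
import numpy as np

rng = np.random.default_rng(4)
truth = rng.normal(0.25, 0.05, 1000)                          # "true" state
model = truth + rng.normal(0.0, 0.02, truth.size)             # model estimate
obs = 0.6 * truth + 0.10 + rng.normal(0.0, 0.03, truth.size)  # biased observation

# Variance matching: match the observation's mean and standard deviation
# to those of the model.
obs_vm = (obs - obs.mean()) / obs.std() * model.std() + model.mean()

# Least-squares regression: regress the model on the observation and apply
# the fitted slope and intercept.
C = np.cov(model, obs)
obs_lr = model.mean() + (C[0, 1] / C[1, 1]) * (obs - obs.mean())

for name, series in (("variance matching", obs_vm), ("regression", obs_lr)):
    rmsd = np.sqrt(np.mean((series - model) ** 2))
    print(f"{name:>17s}: RMSD vs model = {rmsd:.4f}")
```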

  5. MAXIMUM LIKELIHOOD ANALYSIS OF SYSTEMATIC ERRORS IN INTERFEROMETRIC OBSERVATIONS OF THE COSMIC MICROWAVE BACKGROUND

    SciTech Connect

    Zhang Le; Timbie, Peter; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, Paul M.; Wandelt, Benjamin D.; Bunn, Emory F.

    2013-06-01

    We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that does not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.

  6. Accuracy of image-plane holographic tomography with filtered backprojection: random and systematic errors.

    PubMed

    Belashov, A V; Petrov, N V; Semenova, I V

    2016-01-01

    This paper explores the concept of image-plane holographic tomography, applied to measurements of laser-induced thermal gradients in an aqueous solution of a photosensitizer, with respect to the accuracy of reconstructing three-dimensional variations of the refractive index. A least-squares estimation algorithm is used to reconstruct the refractive-index variations in each holographic projection. Together with the bitelecentric optical system, which transfers the focused projection to the sensor plane, this facilitates the elimination of diffraction artifacts and the suppression of noise. This work estimates the influence of typical random and systematic errors in such experiments and concludes that random errors, such as accidental measurement errors or the presence of noise, can be significantly suppressed by increasing the number of recorded digital holograms. In contrast, even comparatively small systematic errors, such as a displacement of the rotation axis projection in the course of the reconstruction procedure, can significantly distort the results. PMID:26835625

  7. Ultraspectral Sounding Retrieval Error Budget and Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping

    2011-01-01

    The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. These measurements of the thermodynamic state are intended for the initialization of weather and climate models. Great effort has been devoted to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of the absolute difference and the standard deviation of the differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to the measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements from the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).

  8. Bayesian conformity assessment in presence of systematic measurement errors

    NASA Astrophysics Data System (ADS)

    Carobbi, Carlo; Pennecchi, Francesca

    2016-04-01

    Conformity assessment of the distribution of the values of a quantity is investigated by using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is, even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentist approach) when the systematic measurement errors are negligible. A consolidated frequentist extension of this standard result, aimed at including the effect of a systematic measurement error, is directly compared with the Bayesian result, whose superiority is demonstrated. Application of the results obtained here to the derivation of the operating characteristic curves used for sampling plans for inspection by variables is also introduced.
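
    As a rough, hypothetical illustration of the idea (not the paper's analytical derivation), the following Monte Carlo sketch computes a conformance probability when the measurement is affected by both a random error and a systematic error with a known prior distribution; all numbers and names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

y_measured = 9.2         # observed value of the quantity
upper_limit = 10.0       # conformity requires true value <= upper_limit
sigma_random = 0.3       # standard deviation of the random measurement error

# Prior knowledge about the systematic error (a normal prior here; any joint pdf could be used).
e_sys = rng.normal(0.4, 0.2, 200_000)
e_rand = rng.normal(0.0, sigma_random, e_sys.size)

# Measurement model y = x + e_sys + e_rand with a flat prior on x
# gives posterior draws of the measurand x:
x_posterior = y_measured - e_sys - e_rand

print("P(conformity | y, priors) ~", np.mean(x_posterior <= upper_limit))
```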

  9. Factoring Algebraic Error for Relative Pose Estimation

    SciTech Connect

    Lindstrom, P; Duchaineau, M

    2009-03-09

    We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can be determined directly, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation nor rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.

  10. Error estimation and structural shape optimization

    NASA Astrophysics Data System (ADS)

    Song, Xiaoguang

    This work is concerned with three topics: error estimation, the data smoothing process, and structural shape optimization design and analysis. In particular, the superconvergent stress recovery technique, dual kriging B-spline curve and surface fitting, and the development and implementation of a novel node-based numerical shape optimization package are addressed. Concepts and new techniques for accurate stress recovery are developed and applied to finding the lateral buckling parameters of plate structures. Some useful conclusions are drawn for the finite element Reissner-Mindlin plate solutions. The powerful dual kriging B-spline fitting technique is reviewed and a set of new compact formulations is developed. This data smoothing method is then applied to accurately recovering curves and surfaces. The new node-based shape optimization method is based on the consideration that the critical stress and displacement constraints are generally located along or near the structural boundary. The method puts the maximum weights on the selected boundary nodes, referred to as the design points, so that the time-consuming sensitivity analysis is related to the perturbation of only these nodes. The method also allows large shape changes to achieve the optimal shape. The design variables are specified as the moving magnitudes for the prescribed design points, which are always located at the structural boundary. Theories, implementations and applications are presented for the various modules from which the package is constructed. In particular, techniques involving finite element error estimation, adaptive mesh generation, design sensitivity analysis, and data smoothing are emphasized.

  11. The Effect of Systematic Error in Forced Oscillation Testing

    NASA Technical Reports Server (NTRS)

    Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.

    2012-01-01

    One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.

  12. Investigation of systematic CD distribution error on intrafield

    NASA Astrophysics Data System (ADS)

    Kim, Keunjun; Kim, Daewoo; Kang, Junghyun; Jeong, Inseok; Lee, Sungkoo; Kim, Hyeongsoo

    2016-03-01

    As feature sizes shrink, better critical dimension uniformity (CDU) is increasingly demanded to maintain device characteristics. Intra-field CDU is one of the main contributors to the total CD variation budget. In particular, systematic CD distribution errors at shot, bank, and MAT boundaries must be considered in order to minimize repeated errors and guarantee high yield, even though they are not prominent in the overall CDU value. In this paper, we investigated several factors that affect the systematic intra-field CD distribution error. First, localized mask CD variation caused by electron-beam scattering over a local region, development loading, and etch loading effects is printed directly on the wafer; appropriate mask fabrication suppresses this CD variation at the boundary region. Second, the chemical flare effect is expected to create a CD gradient at the boundary region; changing the photo-acid concentration with sub-resolution assist features (SRAF) can reduce this gradient, and we demonstrated the SRAF size dependency for both positive tone develop (PTD) and negative tone develop (NTD) processes. Third, out-of-field stray light (OOFSL) from adjacent exposed fields causes a CD gradient at the field boundary; reducing the exposure dose is expected to mitigate this. Even when CDU at the boundary region is perfectly controlled after mask patterning, other process issues such as etch and CMP loading effects also worsen the CD distribution at the boundary region. By considering the above factors, we optimized the systematic CD distribution error at the boundary region before etch. Furthermore, we compared several techniques to compensate for the post-etch systematic CD distribution.

  13. Target parameter and error estimation using magnetometry

    NASA Astrophysics Data System (ADS)

    Norton, S. J.; Witten, A. J.; Won, I. J.; Taylor, D.

    The problem of locating and identifying buried unexploded ordnance from magnetometry measurements is addressed within the context of maximum likelihood estimation. In this approach, magnetostatic theory is used to develop data templates, which represent the modeled magnetic response of a buried ferrous object of arbitrary location, iron content, size, shape, and orientation. It is assumed that these objects are characterized both by a magnetic susceptibility representing their passive response to the earth's magnetic field and by a three-dimensional magnetization vector representing a permanent dipole magnetization. Analytical models were derived for four types of targets: spheres, spherical shells, ellipsoids, and ellipsoidal shells. The models can be used to quantify the Cramer-Rao (error) bounds on the parameter estimates. These bounds give the minimum variance in the estimated parameters as a function of measurement signal-to-noise ratio, spatial sampling, and target characteristics. For cases where analytic expressions for the Cramer-Rao bounds can be derived, these expressions prove quite useful in establishing optimal sampling strategies. Analytic expressions for various Cramer-Rao bounds have been developed for spherical- and spherical shell-type objects. A maximum likelihood estimation algorithm has been developed and tested on data acquired at the Magnetic Test Range at the Naval Explosive Ordnance Disposal Tech Center in Indian Head, Maryland. This algorithm estimates seven target parameters. These parameters are the three Cartesian coordinates (x, y, z) identifying the buried ordnance's location, the three Cartesian components of the permanent dipole magnetization vector, and the equivalent radius of the ordnance assuming it is a passive solid iron sphere.

  14. Rigorous Error Estimates for Reynolds' Lubrication Approximation

    NASA Astrophysics Data System (ADS)

    Wilkening, Jon

    2006-11-01

    Reynolds' lubrication equation is used extensively in engineering calculations to study flows between moving machine parts, e.g. in journal bearings or computer disk drives. It is also used extensively in micro- and bio-fluid mechanics to model creeping flows through narrow channels and in thin films. To date, the only rigorous justification of this equation (due to Bayada and Chambat in 1986 and to Nazarov in 1987) states that the solution of the Navier-Stokes equations converges to the solution of Reynolds' equation in the limit as the aspect ratio ɛ approaches zero. In this talk, I will show how the constants in these error bounds depend on the geometry. More specifically, I will show how to compute expansion solutions of the Stokes equations in a 2-d periodic geometry to arbitrary order and exhibit error estimates with constants which are either (1) given in the problem statement or easily computable from h(x), or (2) difficult to compute but universal (independent of h(x)). Studying the constants in the latter category, we find that the effective radius of convergence actually increases through 10th order, but then begins to decrease as the inverse of the order, indicating that the expansion solution is probably an asymptotic series rather than a convergent series.

  15. SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION

    SciTech Connect

    Lee, Khee-Gan

    2012-07-10

    Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that under(over)estimates in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ -0.1, while the error is increased to σ_γ ≈ 0.2 compared to σ_γ ≈ 0.1 in the absence of continuum errors.

  16. Spatial reasoning in the treatment of systematic sensor errors

    SciTech Connect

    Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.

    1988-01-01

    In processing ultrasonic and visual sensor data acquired by mobile robots systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing vertical edge segments are extracted using a Canny-like algorithm, and are labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.

  17. Model error estimation and correction by solving an inverse problem

    NASA Astrophysics Data System (ADS)

    Xue, Haile

    2016-04-01

    Weather forecasts and climate predictions increasingly rely on numerical models. Yet errors inevitably exist in these models due to imperfect numerics and parameterizations. From a practical point of view, model correction is an efficient strategy. Despite the varying complexity of forecast error correction algorithms, the general idea is to estimate the forecast errors by treating the NWP as a direct problem. Chou (1974) suggested an alternative view in which the NWP is treated as an inverse problem. The model error tendency term (ME), arising from the model deficiency, is assumed to be an unknown term in the NWP model; it can be discretized into short intervals (for example 6 hours) and taken as constant or linear in each interval. Given past re-analyses and the NWP model, the discretized MEs in past intervals can be solved for iteratively as a constant or linearly increasing tendency term in each interval. These MEs can then be used as online corrections. In this study, an iterative method for obtaining the MEs in past intervals is presented, and its convergence is confirmed with sets of experiments in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August (JA) 2009 and January-February (JF) 2010. These MEs were then used to obtain online model corrections based on the systematic errors of GRAPES-GFS for July 2009 and January 2010. The data sets for the initial condition and sea surface temperature (SST) used in this study are both based on NCEP final (FNL) data. From the iterative numerical experiments, the following key conclusions can be drawn: (1) batches of iteration tests indicated that the 6-hour forecast errors were reduced to 10% of their original value after 20 steps of iteration; (2) by comparing offline the error corrections estimated by the MEs to the mean forecast errors, the patterns of estimated errors were found to agree well with those

  18. Quantifying Error in the CMORPH Satellite Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yoo, S.; Xie, P.

    2010-12-01

    As part of the collaboration between the China Meteorological Administration (CMA) National Meteorological Information Centre (NMIC) and the NOAA Climate Prediction Center (CPC), a new system is being developed to construct an hourly precipitation analysis on a 0.25° lat/lon grid over China by merging information derived from gauge observations and CMORPH satellite precipitation estimates. The foundation for the development of the gauge-satellite merging algorithm is the definition of the systematic and random error inherent in the CMORPH satellite precipitation estimates. In this study, we quantify the CMORPH error structures through comparisons against a gauge-based analysis of hourly precipitation derived from station reports from a dense network over China. First, the systematic error (bias) of the CMORPH satellite estimates is examined against a co-located hourly gauge precipitation analysis over 0.25° lat/lon grid boxes with at least one reporting station. The CMORPH bias exhibits regional variations, with over-estimates over eastern China, and seasonal changes, with over-estimates during warm seasons and under-estimates during cold seasons. The CMORPH bias also depends on rain rate: in general, CMORPH tends to over-estimate weak rainfall and under-estimate strong rainfall. The bias, when expressed as the ratio between the gauge observations and the CMORPH satellite estimates, increases with rainfall intensity but tends to saturate at a certain level for high rainfall. Based on the above results, a prototype algorithm is developed to remove the CMORPH bias by matching the PDF of the original CMORPH estimates against that of the gauge analysis, using data pairs co-located over grid boxes with at least one reporting gauge over a 30-day period ending at the target date. The spatial domain for collecting the co-located data pairs is expanded so that at least 5000 data pairs are available, to ensure statistical stability. The bias-corrected CMORPH is then compared against the gauge data to quantify the
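
    The PDF-matching step described above can be illustrated with a small quantile-mapping sketch; the data below are synthetic and the 30-day/5000-pair bookkeeping is omitted, so this is only a schematic of the prototype algorithm.

```python
import numpy as np

def pdf_match(sat, gauge, n_quantiles=101):
    """Map each satellite estimate to the gauge value with the same cumulative
    probability (quantile mapping), removing the intensity-dependent bias."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.interp(sat, np.quantile(sat, q), np.quantile(gauge, q))

# Synthetic co-located hourly pairs (mm/h): the "satellite" over-estimates weak rain
# and under-estimates strong rain, mimicking the bias structure described above.
rng = np.random.default_rng(2)
gauge = rng.gamma(shape=0.5, scale=2.0, size=5000)
sat = 0.8 * gauge**0.9 + 0.4 + 0.05 * rng.standard_normal(5000)

corrected = pdf_match(sat, gauge)
print("mean bias before:", (sat - gauge).mean(), " after:", (corrected - gauge).mean())
```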

  19. A hybrid variational-ensemble data assimilation scheme with systematic error correction for limited-area ocean models

    NASA Astrophysics Data System (ADS)

    Oddo, Paolo; Storto, Andrea; Dobricic, Srdjan; Russo, Aniello; Lewis, Craig; Onken, Reiner; Coelho, Emanuel

    2016-10-01

    A hybrid variational-ensemble data assimilation scheme to estimate the vertical and horizontal parts of the background error covariance matrix for an ocean variational data assimilation system is presented and tested in a limited-area ocean model implemented in the western Mediterranean Sea. An extensive data set collected during the Recognized Environmental Picture Experiments conducted in June 2014 by the Centre for Maritime Research and Experimentation has been used for assimilation and validation. The hybrid scheme is used both to correct the systematic error introduced into the system by the external forcing (initialisation, lateral and surface open boundary conditions) and the model parameterisation, and to improve the representation of small-scale errors in the background error covariance matrix. An ensemble system, generated through perturbation of the assimilated observations, is run offline for further use in the hybrid scheme. Results of four different experiments have been compared. The reference experiment uses the classical stationary formulation of the background error covariance matrix and has no systematic error correction. The other three experiments include or exclude the systematic error correction and a hybrid background error covariance matrix combining the static covariances and the ensemble-derived errors of the day. Results show that the hybrid scheme, when used in conjunction with the systematic error correction, reduces the mean absolute error of the temperature and salinity misfits by 55% and 42%, respectively, compared with statistics arising from standard climatological covariances without systematic error correction.
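
    The blending of static and ensemble-derived background errors can be sketched in a few lines; the toy example below (invented dimensions and weights, not the paper's operational implementation) shows the basic construction of a hybrid covariance matrix.

```python
import numpy as np

def hybrid_B(B_static, ensemble, alpha=0.5):
    """Blend a static (climatological) background-error covariance with the sample
    covariance of an ensemble of states ("errors of the day")."""
    anomalies = ensemble - ensemble.mean(axis=0)
    B_ens = anomalies.T @ anomalies / (ensemble.shape[0] - 1)
    return alpha * B_static + (1.0 - alpha) * B_ens

# Hypothetical example: 3 state variables, 20 ensemble members.
rng = np.random.default_rng(3)
B_static = np.diag([1.0, 0.5, 0.25])
ensemble = rng.multivariate_normal(np.zeros(3),
                                   [[1.2, 0.3, 0.0],
                                    [0.3, 0.4, 0.1],
                                    [0.0, 0.1, 0.3]], size=20)
print(hybrid_B(B_static, ensemble, alpha=0.5))
```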

  20. Systematic errors in cosmic microwave background polarization measurements

    NASA Astrophysics Data System (ADS)

    O'Dea, Daniel; Challinor, Anthony; Johnson, Bradley R.

    2007-04-01

    We investigate the impact of instrumental systematic errors on the potential of cosmic microwave background polarization experiments targeting primordial B-modes. To do so, we introduce spin-weighted Müller matrix-valued fields describing the linear response of the imperfect optical system and receiver, and give a careful discussion of the behaviour of the induced systematic effects under rotation of the instrument. We give the correspondence between the matrix components and known optical and receiver imperfections, and compare the likely performance of pseudo-correlation receivers and those that modulate the polarization with a half-wave plate. The latter is shown to have the significant advantage of not coupling the total intensity into polarization for perfect optics, but potential effects like optical distortions that may be introduced by the quasi-optical wave plate warrant further investigation. A fast method for tolerancing time-invariant systematic effects is presented, which propagates errors through to power spectra and cosmological parameters. The method extends previous studies to an arbitrary scan strategy, and eliminates the need for time-consuming Monte Carlo simulations in the early phases of instrument and survey design. We illustrate the method with both simple parametrized forms for the systematics and with beams based on physical-optics simulations. Example results are given in the context of next-generation experiments targeting tensor-to-scalar ratios r ~ 0.01.

  1. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g. potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  2. Estimating IMU heading error from SAR images.

    SciTech Connect

    Doerry, Armin Walter

    2009-03-01

    Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured, and the pointing error can be calculated. This can be done for single images, but done more robustly using multi-image methods. Several methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.

  3. ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES

    SciTech Connect

    Dolphin, Andrew E.

    2012-05-20

    In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects of this are most notable in the case of shallow photometry, for which SFH measurements rely on evolved stars.

  4. Quantifying Systematic Errors and Total Uncertainties in Satellite-based Precipitation Measurements

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Peters-Lidard, C. D.

    2010-12-01

    Determining the uncertainties in precipitation measurements by satellite remote sensing is of fundamental importance to many applications. These uncertainties result mostly from the interplay of systematic errors and random errors. In this presentation, we will summarize our recent efforts in quantifying the error characteristics in satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition to separate errors in precipitation estimates into three independent components, hit biases, missed precipitation and false precipitation. This decomposition scheme reveals more error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. Our analysis reveals that the six different products share many error features. For example, they all detected strong precipitation (> 40 mm/day) well, but with various biases. They tend to over-estimate in summer and under-estimate in winter. They miss a significant amount of light precipitation (< 10 mm/day). In addition, hit biases and missed precipitation are the two leading error sources. However, their systematic errors also exhibit substantial differences, especially in winter and over rough topography, which greatly contribute to the uncertainties. To estimate the measurement uncertainties, we calculated the measurement spread from the ensemble of these six quasi-independent products. A global map of measurement uncertainties was thus produced. The map yields a global view of the error characteristics and their regional and seasonal variations, and reveals many undocumented error features over areas with no validation data available. The uncertainties are relatively small (40-60%) over the
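
    The decomposition into hit bias, missed precipitation, and false precipitation can be sketched with simple masks; the example below uses invented rain fields and simplified definitions, so the three terms only approximately sum to the total error (the paper's own definitions make the decomposition exact).

```python
import numpy as np

def decompose_errors(sat, gauge, threshold=0.1):
    """Split satellite-minus-gauge error (mm/day) into hit bias, missed
    precipitation (negative), and false precipitation (positive)."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    hit = (sat >= threshold) & (gauge >= threshold)
    miss = (sat < threshold) & (gauge >= threshold)
    false = (sat >= threshold) & (gauge < threshold)

    hit_bias = np.where(hit, sat - gauge, 0.0).sum()
    missed = -np.where(miss, gauge, 0.0).sum()
    false_p = np.where(false, sat, 0.0).sum()
    return hit_bias, missed, false_p, (sat - gauge).sum()

# Invented daily rain fields (mm/day) on 1000 grid points.
rng = np.random.default_rng(4)
gauge = np.clip(rng.gamma(0.4, 5.0, 1000) - 1.0, 0.0, None)
sat = np.clip(1.1 * gauge + rng.normal(0.0, 1.0, 1000), 0.0, None)
print(decompose_errors(sat, gauge))
```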

  5. Analysis of Measurement Error and Estimator Shape in Three-Point Hydraulic Gradient Estimators

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Wahi, A. K.

    2003-12-01

    Three spatially separated measurements of head provide a means of estimating the magnitude and orientation of the hydraulic gradient. Previous work with three-point estimators has focused on the effect of the size (area) of the three-point estimator and measurement error on the final estimates of the gradient magnitude and orientation in laboratory and field studies (Mizell, 1980; Silliman and Frost, 1995; Silliman and Mantz, 2000; Ruskauff and Rumbaugh, 1996). However, a systematic analysis of the combined effects of measurement error, estimator shape and estimator orientation relative to the gradient orientation has not previously been conducted. Monte Carlo simulation with an underlying assumption of a homogeneous transmissivity field is used to examine the effects of uncorrelated measurement error on a series of eleven different three-point estimators having the same size but different shapes as a function of the orientation of the true gradient. Results show that the variance in the estimate of both the magnitude and the orientation increase linearly with the increase in measurement error in agreement with the results of stochastic theory for estimators that are small relative to the correlation length of transmissivity (Mizell, 1980). Three-point estimator shapes with base to height ratios between 0.5 and 5.0 provide accurate estimates of magnitude and orientation across all orientations of the true gradient. As an example, these results are applied to data collected from a monitoring network of 25 wells at the WIPP site during two different time periods. The simulation results are used to reduce the set of all possible combinations of three wells to those combinations with acceptable measurement errors relative to the amount of head drop across the estimator and base to height ratios between 0.5 and 5.0. These limitations reduce the set of all possible well combinations by 98 percent and show that size alone as defined by triangle area is not a valid
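
    A three-point gradient estimator itself is easy to write down: fit a plane through the three head measurements and take its slope. The sketch below (hypothetical well coordinates and heads) shows the basic calculation that the Monte Carlo analysis above perturbs with measurement error.

```python
import numpy as np

def three_point_gradient(points):
    """Fit h(x, y) = a*x + b*y + c through three head measurements and return the
    hydraulic-gradient magnitude and azimuth (degrees, measured from +x toward +y).
    `points` is a sequence of (x, y, head) tuples."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(3)])
    a, b, _ = np.linalg.solve(A, pts[:, 2])
    grad = -np.array([a, b])                 # flow is in the down-gradient direction
    return np.hypot(grad[0], grad[1]), np.degrees(np.arctan2(grad[1], grad[0]))

# Hypothetical three-well example (coordinates in metres, heads in metres).
wells = [(0.0, 0.0, 100.00), (120.0, 10.0, 99.85), (30.0, 150.0, 99.70)]
print(three_point_gradient(wells))
```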

  6. Inertial and Magnetic Sensor Data Compression Considering the Estimation Error

    PubMed Central

    Suh, Young Soo

    2009-01-01

    This paper presents a compression method for inertial and magnetic sensor data, where the compressed data are used to estimate some states. When sensor data are bounded, the proposed compression method guarantees that the compression error is smaller than a prescribed bound. The manner in which this error bound affects the bit rate and the estimation error is investigated. Through the simulation, it is shown that the estimation error is improved by 18.81% over a test set of 12 cases compared with a filter that does not use the compression error bound. PMID:22454564

  7. A Note on Confidence Interval Estimation and Margin of Error

    ERIC Educational Resources Information Center

    Gilliland, Dennis; Melfi, Vince

    2010-01-01

    Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…
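
    For reference, the standard normal-theory margin of error is a one-liner; the snippet below is a minimal illustration (invented numbers) of the quantity the note discusses.

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(std_dev, n, confidence=0.95):
    """Half-width of a normal-theory confidence interval for a mean: z * s / sqrt(n)."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    return z * std_dev / sqrt(n)

# Example: 100 measurements with sample standard deviation 4.0
print(margin_of_error(4.0, 100))   # about 0.78 at 95% confidence
```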

  8. CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes

    NASA Technical Reports Server (NTRS)

    Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.

    2012-01-01

    Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43+/-0.35 PgC /yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.

  9. The Effects of Computational Modeling Errors on the Estimation of Statistical Mechanical Variables.

    PubMed

    Faver, John C; Yang, Wei; Merz, Kenneth M

    2012-10-01

    Computational models used in the estimation of thermodynamic quantities of large chemical systems often require approximate energy models that rely on parameterization and cancellation of errors to yield agreement with experimental measurements. In this work, we show how energy function errors propagate when computing statistical mechanics-derived thermodynamic quantities. Assuming that each microstate included in a statistical ensemble has a measurable amount of error in its calculated energy, we derive low-order expressions for the propagation of these errors in free energy, average energy, and entropy. Through gedanken experiments we show the expected behavior of these error propagation formulas on hypothetical energy surfaces. For very large microstate energy errors, these low-order formulas disagree with estimates from Monte Carlo simulations of error propagation. Hence, such simulations of error propagation may be required when using poor potential energy functions. Propagated systematic errors predicted by these methods can be removed from computed quantities, while propagated random errors yield uncertainty estimates. Importantly, we find that end-point free energy methods maximize random errors and that local sampling of potential energy wells decreases random error significantly. Hence, end-point methods should be avoided in energy computations and should be replaced by methods that incorporate local sampling. The techniques described herein will be used in future work involving the calculation of free energies of biomolecular processes, where error corrections are expected to yield improved agreement with experiment.
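
    The propagation of microstate energy errors into an ensemble free energy can also be checked numerically; the sketch below is a toy Monte Carlo version of that idea (invented energies and error magnitudes, not the paper's low-order analytical formulas).

```python
import numpy as np

kT = 0.6  # roughly room temperature in kcal/mol

def free_energy(energies):
    """F = -kT * ln(sum_i exp(-E_i / kT)), evaluated stably."""
    e = np.asarray(energies, dtype=float)
    e_min = e.min()
    return e_min - kT * np.log(np.sum(np.exp(-(e - e_min) / kT)))

rng = np.random.default_rng(5)
E_true = rng.uniform(0.0, 3.0, 200)            # hypothetical microstate energies (kcal/mol)

systematic, sigma_random = 0.5, 0.3            # per-microstate energy errors
F_samples = [free_energy(E_true + systematic + rng.normal(0.0, sigma_random, E_true.size))
             for _ in range(2000)]

# The systematic offset shifts F by exactly +0.5; the random errors add both bias and spread.
print("true F:", free_energy(E_true))
print("perturbed F:", np.mean(F_samples), "+/-", np.std(F_samples))
```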

  10. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
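
    A much-simplified version of the spread-based bias-error estimate (without the ±50% screening or the area averaging) can be written as follows; the product values are invented.

```python
import numpy as np

def bias_error_estimate(products):
    """Rows = co-located monthly means from different precipitation products.
    Returns the spread s, the mean m, and the relative bias-error estimate s/m."""
    p = np.asarray(products, dtype=float)
    s = p.std(axis=0, ddof=1)
    m = p.mean(axis=0)
    return s, m, s / m

# Hypothetical monthly means (mm/day) at three ocean grid boxes from four products.
products = [[5.1, 2.0, 0.9],
            [5.9, 2.3, 1.1],
            [4.6, 1.8, 0.8],
            [5.4, 2.1, 1.2]]
print(bias_error_estimate(products))
```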

  11. Analysis and Correction of Systematic Height Model Errors

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.

    2016-06-01

    The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As is standard today, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required; in most cases this correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, such as those with a small base length, this image orientation does not reach the accuracy that the height models could achieve. As reported, e.g., by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and an attitude recording of only 4 Hz, which may not be sufficient. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images, like detailed satellite orientation information, are not always available. There is a tendency toward systematic deformation in Pléiades tri-stereo combinations with small base length: the small base length magnifies small systematic errors when they are projected into object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of the height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, preventing a correct leveling of the height model. In addition, a model deformation at the GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM images, are

  12. Semiclassical Dynamics with Exponentially Small Error Estimates

    NASA Astrophysics Data System (ADS)

    Hagedorn, George A.; Joye, Alain

    We construct approximate solutions to the time-dependent Schrödinger equation for small values of ħ. If V satisfies appropriate analyticity and growth hypotheses and |t| ≤ T, these solutions agree with exact solutions up to errors whose norms are bounded by C exp(−γ/ħ) for some C and γ>0. Under more restrictive hypotheses, we prove that for sufficiently small T', |t| ≤ T'|log ħ| implies the norms of the errors are bounded by C' exp(−γ'/ħ^σ) for some C', γ'>0, and σ > 0.

  13. Estimating errors in least-squares fitting

    NASA Technical Reports Server (NTRS)

    Richter, P. H.

    1995-01-01

    While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
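
    The standard covariance-based expressions for parameter and fitted-function errors are easy to reproduce numerically; the sketch below uses an ordinary polynomial fit with synthetic data (a generic textbook calculation, not the specific closed forms derived in the report).

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 10.0, 50)
y = 1.0 + 0.5 * x - 0.03 * x**2 + rng.normal(0.0, 0.2, x.size)

X = np.column_stack([np.ones_like(x), x, x**2])           # design matrix
beta, *_ = np.linalg.lstsq(X, y, rcond=None)               # fitted coefficients

residuals = y - X @ beta
sigma2 = residuals @ residuals / (x.size - X.shape[1])     # unbiased noise variance
cov = sigma2 * np.linalg.inv(X.T @ X)                      # parameter covariance
param_se = np.sqrt(np.diag(cov))                           # standard errors of the parameters

# Standard error of the fitted function at each x: sqrt(diag(X cov X^T)).
fit_se = np.sqrt(np.einsum("ij,jk,ik->i", X, cov, X))
print(beta, param_se, fit_se[:3])
```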

  14. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
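
    A brute-force counterpart to the abstract's closed-form results is sketched below: Fisher's direction with a midpoint threshold, and a leave-one-out error estimate obtained by explicit refitting (the paper's contribution is precisely to avoid this refitting). Data and dimensions are invented.

```python
import numpy as np

def fisher_classifier(X1, X2):
    """Fisher direction w = Sw^{-1} (m1 - m2) and the midpoint threshold."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)   # within-class scatter
    w = np.linalg.solve(Sw, m1 - m2)
    return w, 0.5 * w @ (m1 + m2)

def loo_error(X1, X2):
    """Leave-one-out estimate of the probability of error by explicit refitting."""
    mistakes = 0
    for i in range(len(X1)):                                  # hold out a class-1 pattern
        w, t = fisher_classifier(np.delete(X1, i, axis=0), X2)
        mistakes += int(w @ X1[i] <= t)
    for i in range(len(X2)):                                  # hold out a class-2 pattern
        w, t = fisher_classifier(X1, np.delete(X2, i, axis=0))
        mistakes += int(w @ X2[i] > t)
    return mistakes / (len(X1) + len(X2))

rng = np.random.default_rng(7)
X1 = rng.normal([0.0, 0.0], 1.0, size=(60, 2))
X2 = rng.normal([2.0, 1.5], 1.0, size=(60, 2))
print(loo_error(X1, X2))
```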

  15. Statistical and systematic errors in redshift-space distortion measurements from large surveys

    NASA Astrophysics Data System (ADS)

    Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.

    2012-12-01

    We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, as volume, galaxy density and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(rp, π) on scales larger than 3 h-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model, to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique. This is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k<0.2 h Mpc-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach, to quickly and accurately predict statistical errors on RSD expected from future surveys.

  16. An Ensemble-type Approach to Numerical Error Estimation

    NASA Astrophysics Data System (ADS)

    Ackmann, J.; Marotzke, J.; Korn, P.

    2015-12-01

    The estimation of the numerical error in a specific physical quantity of interest (goal) is of key importance in geophysical modelling. Towards this aim, we have formulated an algorithm that combines elements of the classical dual-weighted error estimation with stochastic methods. Our algorithm is based on the Dual-weighted Residual method, in which the residual of the model solution is weighted by the adjoint solution, i.e. by the sensitivities of the goal towards the residual. We extend this method by modelling the residual as a stochastic process. Parameterizing the residual by a stochastic process was motivated by the Mori-Zwanzig formalism from statistical mechanics. Here, we apply our approach to two-dimensional shallow-water flows with lateral boundaries and an eddy viscosity parameterization. We employ different parameters of the stochastic process for different dynamical regimes in different regions. We find that for each region the temporal fluctuations of local truncation errors (discrete residuals) can be interpreted stochastically by a Laplace-distributed random variable. Assuming that these random variables are fully correlated in time leads to a stochastic process that parameterizes a problem-dependent temporal evolution of local truncation errors. The parameters of this stochastic process are estimated from short, near-initial, high-resolution simulations. Under the assumption that the estimated parameters can be extrapolated to the full time window of the error estimation, the estimated stochastic process is proven to be a valid surrogate for the local truncation errors. Replacing the local truncation errors by a stochastic process puts our method within the class of ensemble methods and makes the resulting error estimator a random variable. The result of our error estimator is thus a confidence interval on the error in the respective goal. We will show error estimates for two 2D ocean-type experiments and provide an outlook for the 3D case.

  17. TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW

    PubMed Central

    Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten

    2012-01-01

    Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869

  18. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two observer and measurement error only problem.
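
    The contrast between a theoretical and a residual-based covariance can be illustrated with a small weighted least-squares problem; the residual-based rescaling shown here is a generic stand-in, not the specific empirical matrix proposed in the paper, and the measurement model is invented.

```python
import numpy as np

rng = np.random.default_rng(8)

x_true = np.array([2.0, -1.0])                   # state to be estimated
H = rng.normal(size=(200, 2))                     # measurement partials
sigma_assumed, sigma_actual = 0.1, 0.25           # noise is deliberately mismodeled
y = H @ x_true + rng.normal(0.0, sigma_actual, 200)

W = np.eye(200) / sigma_assumed**2
P_theory = np.linalg.inv(H.T @ W @ H)             # formal covariance of the batch solution
x_hat = P_theory @ H.T @ W @ y

resid = y - H @ x_hat
scale = resid @ W @ resid / (len(y) - len(x_true))   # ~ (sigma_actual / sigma_assumed)**2
P_residual_based = scale * P_theory

print("formal sigmas:        ", np.sqrt(np.diag(P_theory)))
print("residual-based sigmas:", np.sqrt(np.diag(P_residual_based)))
```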

  19. Using Laser Scanners to Augment the Systematic Error Pointing Model

    NASA Astrophysics Data System (ADS)

    Wernicke, D. R.

    2016-08-01

    The antennas of the Deep Space Network (DSN) rely on precise pointing algorithms to communicate with spacecraft that are billions of miles away. Although the existing systematic error pointing model is effective at reducing blind pointing errors due to static misalignments, several of its terms have a strong dependence on seasonal and even daily thermal variation and are thus not easily modeled. Changes in the thermal state of the structure create a separation from the model and introduce a varying pointing offset. Compensating for this varying offset is possible by augmenting the pointing model with laser scanners. In this approach, laser scanners mounted to the alidade measure structural displacements while a series of transformations generate correction angles. Two sets of experiments were conducted in August 2015 using commercially available laser scanners. When compared with historical monopulse corrections under similar conditions, the computed corrections are within 3 mdeg of the mean. However, although the results show promise, several key challenges relating to the sensitivity of the optical equipment to sunlight render an implementation of this approach impractical. Other measurement devices such as inclinometers may be implementable at a significantly lower cost.

  20. Using ridge regression in systematic pointing error corrections

    NASA Technical Reports Server (NTRS)

    Guiar, C. N.

    1988-01-01

    A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
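
    Ridge regression itself is compact enough to show inline; the toy example below (invented, nearly collinear pointing-model regressors) illustrates why the biased estimator is attractive under multicollinearity.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Ridge (biased) estimate beta = (Xs^T Xs + lam*I)^{-1} Xs^T ys on
    standardized regressors, the usual practice when columns are collinear."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = y - y.mean()
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(Xs.shape[1]), Xs.T @ ys)

rng = np.random.default_rng(9)
az = rng.uniform(0.0, 2.0 * np.pi, 300)
# The first two regressors are nearly identical, producing strong multicollinearity.
X = np.column_stack([np.sin(az), np.sin(az + 0.01), np.cos(az)])
y = 3.0 * np.sin(az) + 1.0 * np.cos(az) + rng.normal(0.0, 0.05, 300)

print("least squares:", ridge_fit(X, y, lam=0.0))
print("ridge:        ", ridge_fit(X, y, lam=10.0))
```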

  1. Reducing impacts of systematic errors in the observation data on inversing ecosystem model parameters using different normalization methods

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Xu, M.; Huang, M.; Yu, G.

    2009-11-01

    Modeling ecosystem carbon cycle on the regional and global scales is crucial to the prediction of future global atmospheric CO2 concentration and thus global temperature which features large uncertainties due mainly to the limitations in our knowledge and in the climate and ecosystem models. There is a growing body of research on parameter estimation against available carbon measurements to reduce model prediction uncertainty at regional and global scales. However, the systematic errors with the observation data have rarely been investigated in the optimization procedures in previous studies. In this study, we examined the feasibility of reducing the impact of systematic errors on parameter estimation using normalization methods, and evaluated the effectiveness of three normalization methods (i.e. maximum normalization, min-max normalization, and z-score normalization) on inversing key parameters, for example the maximum carboxylation rate (Vcmax,25) at a reference temperature of 25°C, in a process-based ecosystem model for deciduous needle-leaf forests in northern China constrained by the leaf area index (LAI) data. The LAI data used for parameter estimation were composed of the model output LAI (truth) and various designated systematic errors and random errors. We found that the estimation of Vcmax,25 could be severely biased with the composite LAI if no normalization was taken. Compared with the maximum normalization and the min-max normalization methods, the z-score normalization method was the most robust in reducing the impact of systematic errors on parameter estimation. The most probable values of estimated Vcmax,25 inversed by the z-score normalized LAI data were consistent with the true parameter values as in the model inputs though the estimation uncertainty increased with the magnitudes of random errors in the observations. We concluded that the z-score normalization method should be applied to the observed or measured data to improve model parameter
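
    The z-score step itself is simple; the sketch below (invented LAI curves and error magnitudes) shows how standardizing both series removes an additive and multiplicative systematic error from the data-model misfit before parameter estimation.

```python
import numpy as np

def zscore(x):
    """z-score normalization: removes additive offsets and gain differences, so
    model and observations are compared in standardized units."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

days = np.arange(365)
lai_model = 3.0 + 2.0 * np.exp(-((days - 200) / 60.0) ** 2)     # model LAI for some Vcmax,25
rng = np.random.default_rng(10)
lai_obs = 1.3 * lai_model + 0.8 + rng.normal(0.0, 0.1, 365)     # systematic gain + offset

cost_raw = np.mean((lai_obs - lai_model) ** 2)                  # dominated by systematic error
cost_z = np.mean((zscore(lai_obs) - zscore(lai_model)) ** 2)    # systematic part removed
print(cost_raw, cost_z)
```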

  2. Approaches to relativistic positioning around Earth and error estimations

    NASA Astrophysics Data System (ADS)

    Puchades, Neus; Sáez, Diego

    2016-01-01

    In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated to the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach - which simplifies numerical calculations - to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.

  3. Estimates of Random Error in Satellite Rainfall Averages

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.

    2003-01-01

    Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.

  4. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

  5. Nonparametric Item Response Curve Estimation with Correction for Measurement Error

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…

  6. Minor Planet Observations to Identify Reference System Systematic Errors

    NASA Astrophysics Data System (ADS)

    Hemenway, Paul D.; Duncombe, R. L.; Castelaz, M. W.

    2011-04-01

    In the 1930's Brouwer proposed using minor planets to correct the Fundamental System of celestial coordinates. Since then, many projects have used or proposed to use visual, photographic, photo detector, and space based observations to that end. From 1978 to 1990, a project was undertaken at the University of Texas utilizing the long focus and attendant advantageous plate scale (c. 7.37"/mm) of the 2.1m Otto Struve reflector's Cassegrain focus. The project followed precepts given in 1979. The program had several potential advantages over previous programs including high inclination orbits to cover half the celestial sphere, and, following Kristensen, the use of crossing points to remove entirely systematic star position errors from some observations. More than 1000 plates were obtained of 34 minor planets as part of this project. In July 2010 McDonald Observatory donated the plates to the Pisgah Astronomical Research Institute (PARI) in North Carolina. PARI is in the process of renovating the Space Telescope Science Institute GAMMA II modified PDS microdensitometer to scan the plates in the archives. We plan to scan the minor planet plates, reduce the plates to the densified ICRS using the UCAC4 positions (or the best available positions at the time of the reductions), and then determine the utility of attempting to find significant systematic corrections. Here we report the current status of various aspects of the project. Support from the National Science Foundation in the last millennium is gratefully acknowledged, as is help from Judit Ries and Wayne Green in packing and transporting the plates.

  7. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada, Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that estimation errors are dominated by the transport model error, and that the individual errors can in fact cancel each other and propagate to the flux estimates non-linearly. In addition, it is possible for the posterior flux estimates to differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small and fail to cover the target. The systematic evaluation of the different components of the inversion
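
    For illustration only, the quadratic cost-function-minimization (CFM) step described above reduces, in a linear Gaussian toy setting, to solving regularized normal equations for the regional scaling factors. Everything below (the transport operator, the error covariances and the "target" factors) is a hypothetical stand-in rather than the study's configuration; varying the assumed R and B matrices reproduces the sensitivity to their relative size noted in the abstract.

      import numpy as np

      rng = np.random.default_rng(1)
      n_obs, n_reg = 40, 4
      H = rng.uniform(0.0, 1.0, (n_obs, n_reg))      # stand-in transport sensitivities
      s_true = np.array([1.2, 0.8, 1.1, 0.9])        # "target" scaling factors
      y = H @ s_true + rng.normal(0.0, 0.05, n_obs)  # synthetic observations

      s_prior = np.ones(n_reg)                       # prior scaling factors
      B = 0.2 ** 2 * np.eye(n_reg)                   # assumed prior flux error covariance
      R = 0.05 ** 2 * np.eye(n_obs)                  # assumed model-observation mismatch

      # minimiser of the quadratic Bayesian cost function (normal equations)
      A = H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B)
      b = H.T @ np.linalg.inv(R) @ (y - H @ s_prior)
      s_post = s_prior + np.linalg.solve(A, b)
      print("posterior scaling factors:", np.round(s_post, 3))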

  8. Systematic Errors in GNSS Radio Occultation Data - Part 2

    NASA Astrophysics Data System (ADS)

    Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc

    2014-05-01

    The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites indeed show a remarkable consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since it does not need additional background information in its retrieval. Concurrent trends in water vapor could, however, masquerade as spurious trends in dry temperature. We analyzed this effect, and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights where specified differences between dry and physical temperature are encountered increase by about 150 m per decade, with little difference between the 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity with temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are not negligible anymore. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the
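
    A hedged sketch of the dry-temperature idea only: with the commonly quoted two-term refractivity relation N = k1*p/T + k2*e/T^2 (the coefficients below are approximate textbook values, not the "classic" retrieval set discussed above), setting the water vapor pressure e to zero yields the dry temperature, and its difference from the physical temperature follows directly.

      import numpy as np

      K1 = 77.6     # K / hPa, approximate value
      K2 = 3.73e5   # K^2 / hPa, approximate value

      def refractivity(p_hpa, t_k, e_hpa):
          # two-term (Smith-Weintraub type) refractivity
          return K1 * p_hpa / t_k + K2 * e_hpa / t_k ** 2

      def dry_temperature(n, p_hpa):
          # "dry" retrieval: water vapor assumed to be zero
          return K1 * p_hpa / n

      def physical_temperature(n, p_hpa, e_hpa):
          # positive root of N*T^2 - k1*p*T - k2*e = 0
          return (K1 * p_hpa + np.sqrt((K1 * p_hpa) ** 2 + 4.0 * n * K2 * e_hpa)) / (2.0 * n)

      # illustrative mid-troposphere values (hypothetical)
      p, t, e = 500.0, 250.0, 0.5
      n = refractivity(p, t, e)
      print("dry minus physical temperature: %.2f K"
            % (dry_temperature(n, p) - physical_temperature(n, p, e)))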

  9. Using doppler radar images to estimate aircraft navigational heading error

    DOEpatents

    Doerry, Armin W.; Jordan, Jay D.; Kim, Theodore J.

    2012-07-03

    A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.

  10. A study of systematic errors in the PMD CamBoard nano

    NASA Astrophysics Data System (ADS)

    Chow, Jacky C. K.; Lichti, Derek D.

    2013-04-01

    Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera, it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low scattering imaging configurations. Post user self-calibration, the RMSE of the range observations reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.

  11. Systematics for checking geometric errors in CNC lathes

    NASA Astrophysics Data System (ADS)

    Araújo, R. P.; Rolim, T. L.

    2015-10-01

    Non-idealities present in machine tools directly compromise both the geometry and the dimensions of machined parts, generating distortions in the design. Given the competitive scenario among different companies, it is necessary to know the geometric behavior of these machines in order to establish their processing capability, avoiding waste of time and materials as well as satisfying customer requirements. Although geometric tests are important and necessary for the correct use of the machine, and thus for preventing future damage, most users do not apply such tests to their machines, for lack of knowledge or of proper motivation, essentially for two reasons: the long time and high cost of testing. This work proposes a systematics for checking straightness and perpendicularity errors in CNC lathes demanding little time and cost while offering high metrological reliability, to be used on the factory floors of small and medium-size businesses to ensure the quality of their products and make them competitive.

  12. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  13. Stability and error estimation for Component Adaptive Grid methods

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph; Zhu, Xiaolei

    1994-01-01

    Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDE's) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAG's using the stability results. Using these estimates, the error can be controlled on CAG's. Thus, the solution can be computed efficiently on CAG's within a given error tolerance. Computational results for time dependent linear problems in one and two space dimensions are presented.

  14. PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG

    SciTech Connect

    Mighell, Kenneth J.; Plavchan, Peter

    2013-06-15

    The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that observation of every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ −5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ≈0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
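
    The quoted fit can be coded directly; this is only a transcription of the simple model stated in the abstract, not a reimplementation of the full PEC algorithm.

      import math

      def kebc_period_error(period_days):
          # log10(sigma_P) ~ -5.8908 + 1.4425 * (1 + log10(P)) for P < 62.5 d,
          # and sigma_P ~ 0.0144 d otherwise, as quoted above
          if period_days < 62.5:
              return 10.0 ** (-5.8908 + 1.4425 * (1.0 + math.log10(period_days)))
          return 0.0144

      for p in (0.5, 2.0, 10.0, 62.5, 100.0):
          print(f"P = {p:6.1f} d  ->  sigma_P ~ {kebc_period_error(p):.6f} d")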

  15. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
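
    A much simplified sketch in the spirit of this idea, not the paper's exact formulation: rescale the formal weighted-least-squares covariance by the average weighted residual variance, so that the actual measurement residuals, rather than only the assumed observation noise, set the size of the state error covariance. All matrices and noise levels below are hypothetical.

      import numpy as np

      rng = np.random.default_rng(2)
      m, n = 60, 3
      H = rng.normal(size=(m, n))                   # hypothetical observation matrix
      x_true = np.array([1.0, -0.5, 2.0])
      sigma_assumed, sigma_actual = 0.1, 0.3        # assumed vs actual noise level
      y = H @ x_true + rng.normal(0.0, sigma_actual, m)

      W = np.eye(m) / sigma_assumed ** 2            # weights from the assumed noise
      N = H.T @ W @ H
      x_hat = np.linalg.solve(N, H.T @ W @ y)

      P_formal = np.linalg.inv(N)                   # traditional covariance
      r = y - H @ x_hat                             # measurement residuals
      s2 = (r @ W @ r) / (m - n)                    # average weighted residual variance
      P_empirical = s2 * P_formal                   # residual-informed covariance

      print("formal sigma(x1):    %.4f" % np.sqrt(P_formal[0, 0]))
      print("empirical sigma(x1): %.4f" % np.sqrt(P_empirical[0, 0]))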

  16. Functional error estimators for the adaptive discretization of inverse problems

    NASA Astrophysics Data System (ADS)

    Clason, Christian; Kaltenbacher, Barbara; Wachsmuth, Daniel

    2016-10-01

    So-called functional error estimators provide a valuable tool for reliably estimating the discretization error for a sum of two convex functions. We apply this concept to Tikhonov regularization for the solution of inverse problems for partial differential equations, not only for quadratic Hilbert space regularization terms but also for nonsmooth Banach space penalties. Examples include the measure-space norm (i.e., sparsity regularization) or the indicator function of an L∞ ball (i.e., Ivanov regularization). The error estimators can be written in terms of residuals in the optimality system that can then be estimated by conventional techniques, thus leading to explicit estimators. This is illustrated by means of an elliptic inverse source problem with the above-mentioned penalties, and numerical results are provided for the case of sparsity regularization.

  17. Improving SMOS retrieved salinity: characterization of systematic errors in reconstructed and modelled brightness temperature images

    NASA Astrophysics Data System (ADS)

    Gourrion, J.; Guimbard, S.; Sabia, R.; Portabella, M.; Gonzalez, V.; Turiel, A.; Ballabrera, J.; Gabarro, C.; Perez, F.; Martinez, J.

    2012-04-01

    boundaries such as the Sky-Earth boundary. Data acquired over the Ocean rather than over Land are preferred to characterize such errors because the variability of the emissivity sensed over the oceanic domain is an order of magnitude smaller than over land. Nevertheless, characterizing such errors over the Ocean is not a trivial task. Even if the natural variability is small, it is larger than the errors to be characterized, and the characterization strategy must account for it; otherwise the estimated patterns will unfortunately vary significantly with the selected dataset. The communication will present results on a systematic error characterization methodology allowing stable error pattern estimates. Particular focus will be given to the critical data selection strategy and the analysis of the X- and Y-pol patterns obtained over a wide range of SMOS subdatasets. Impact of some image reconstruction options will be evaluated. It will be shown how the methodology is also an interesting tool to diagnose specific error sources. Criticality of accurate description of Faraday rotation effects will be evidenced, and the latest results on the possibility of inferring such information from the full Stokes vector will be presented.

  18. Factor Loading Estimation Error and Stability Using Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Sass, Daniel A.

    2010-01-01

    Exploratory factor analysis (EFA) is commonly employed to evaluate the factor structure of measures with dichotomously scored items. Generally, only the estimated factor loadings are provided with no reference to significance tests, confidence intervals, and/or estimated factor loading standard errors. This simulation study assessed factor loading…

  19. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation-based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E), applied to TOPEX/POSEIDON (T/P) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small. This is explained by the large

  20. Application of Bayesian Systematic Error Correction to Kepler Photometry

    NASA Astrophysics Data System (ADS)

    Van Cleve, Jeffrey E.; Jenkins, J. M.; Twicken, J. D.; Smith, J. C.; Fanelli, M. N.

    2011-01-01

    In a companion talk (Jenkins et al.), we present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data, in which a subset of intrinsically quiet and highly correlated stars is used to establish the range of "reasonable" robust fit parameters, and hence mitigate the loss of astrophysical signal and noise injection on transit time scales (<3d), which afflict Least Squares (LS) fitting. In this poster, we illustrate the concept in detail by applying MAP to publicly available Kepler data, and give an overview of its application to all Kepler data collected through June 2010. We define the correlation function between normalized, mean-removed light curves and select a subset of highly correlated stars. This ensemble of light curves can then be combined with ancillary engineering data and image motion polynomials to form a design matrix from which the principal components are extracted by reduced-rank SVD decomposition. MAP is then represented in the resulting orthonormal basis, and applied to the set of all light curves. We show that the correlation matrix after treatment is diagonal, and present diagnostics such as correlation coefficient histograms, singular value spectra, and principal component plots. We then show the benefits of MAP applied to variable stars with RR Lyrae, harmonic, chaotic, and eclipsing binary waveforms, and examine the impact of MAP on transit waveforms and detectability. After high-pass filtering the MAP output, we show that MAP does not increase noise on transit time scales, compared to LS. We conclude with a discussion of current work selecting input vectors for the design matrix, representing and numerically solving MAP for non-Gaussian probability distribution functions (PDFs), and suppressing high-frequency noise injection with Lagrange multipliers. Funding for this mission is provided by NASA, Science Mission Directorate.
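
    A toy analogue of the pipeline sketched above, with all data, the number of basis vectors and the prior strength invented for illustration: extract principal components from a set of mean-removed, correlated light curves by SVD, then fit a target light curve with a Gaussian-prior (MAP, i.e. ridge-regularized) solution rather than unregularized least squares.

      import numpy as np

      rng = np.random.default_rng(3)
      n_cad, n_star, n_basis = 500, 20, 4

      # shared systematic trends plus white noise for a set of "quiet" stars
      trend = np.cumsum(rng.normal(0.0, 0.01, (n_cad, 2)), axis=0)
      quiet = trend @ rng.normal(size=(2, n_star)) + rng.normal(0.0, 0.005, (n_cad, n_star))
      quiet -= quiet.mean(axis=0)                       # mean-removed light curves

      U, s, Vt = np.linalg.svd(quiet, full_matrices=False)
      design = U[:, :n_basis]                           # principal components as basis

      target = trend @ np.array([0.7, -0.4]) + rng.normal(0.0, 0.005, n_cad)
      target -= target.mean()

      lam = 0.1                                         # Gaussian prior strength (assumed)
      A = design.T @ design + lam * np.eye(n_basis)
      coeff_map = np.linalg.solve(A, design.T @ target) # MAP / ridge coefficients
      corrected = target - design @ coeff_map
      print("rms before / after cotrending: %.4f / %.4f" % (target.std(), corrected.std()))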

  1. Error decomposition and estimation of inherent optical properties.

    PubMed

    Salama, Mhd Suhyb; Stein, Alfred

    2009-09-10

    We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit is employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values with correlation coefficients of 60-90% and 67-90% for IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table for the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method performs better and is more appropriate for estimating actual errors of ocean-color derived products than previously suggested methods. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the derivation used. PMID:19745859

  2. Error Estimation for Reduced Order Models of Dynamical systems

    SciTech Connect

    Homescu, C; Petzold, L R; Serban, R

    2003-12-16

    The use of reduced order models to describe a dynamical system is pervasive in science and engineering. Often these models are used without an estimate of their error or range of validity. In this paper we consider dynamical systems and reduced models built using proper orthogonal decomposition. We show how to compute estimates and bounds for these errors, by a combination of the small sample statistical condition estimation method and of error estimation using the adjoint method. More importantly, the proposed approach allows the assessment of so-called regions of validity for reduced models, i.e., ranges of perturbations in the original system over which the reduced model is still appropriate. This question is particularly important for applications in which reduced models are used not just to approximate the solution to the system that provided the data used in constructing the reduced model, but rather to approximate the solution of systems perturbed from the original one. Numerical examples validate our approach: the error norm estimates approximate well the forward error while the derived bounds are within an order of magnitude.

  3. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have approximately 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error e_o. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and infer this error e_o. We find e_o can decrease the global temperature trend by approximately 0.07 K/decade. In addition there are systematic time-dependent errors e_d and e_c present in the data that are introduced by the drift in the satellite orbital geometry. e_d arises from the diurnal cycle in temperature and e_c is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error e_d can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in MSU Ch 1 (50.3 GHz) support this approach. The error e_c is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the
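
    A toy illustration of the overlap idea mentioned above (all numbers are hypothetical): when two satellites observe the same signal during an overlap period, the mean of their differences estimates the relative calibration offset, which can then be removed before the records are merged into a single time series.

      import numpy as np

      rng = np.random.default_rng(5)
      truth = 250.0 + 0.3 * np.sin(np.linspace(0.0, 6.0 * np.pi, 200))   # common signal
      sat_a = truth + rng.normal(0.0, 0.1, truth.size)
      sat_b = truth + 0.25 + rng.normal(0.0, 0.1, truth.size)            # 0.25 K offset

      offset = np.mean(sat_b - sat_a)        # inter-satellite calibration error estimate
      sat_b_corrected = sat_b - offset
      print(f"estimated calibration offset: {offset:.3f} K")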

  4. Global Warming Estimation from MSU: Correction for Drift and Calibration Errors

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12 that have about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14 that have about 2am/2pm orbital geometry) are analyzed in this study to derive global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of the successive satellites and to get a continuous time series, first we have used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and infer this calibration error. We find this error can decrease the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, the other is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The diurnal-cycle error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The calibration-drift error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend. In one path the

  5. Sampling errors in satellite estimates of tropical rain

    NASA Technical Reports Server (NTRS)

    Mcconnell, Alan; North, Gerald R.

    1987-01-01

    The GATE rainfall data set is used in a statistical study to estimate the sampling errors that might be expected for the type of snapshot sampling that a low earth-orbiting satellite makes. For averages over the entire 400-km square and for the duration of several weeks, strong evidence is found that sampling errors less than 10 percent can be expected in contributions from each of four rain rate categories which individually account for about one quarter of the total rain.

  6. Estimation of rod scale errors in geodetic leveling

    USGS Publications Warehouse

    Craymer, Michael R.; Vaníček, Petr; Castle, Robert O.

    1995-01-01

    Comparisons among repeated geodetic levelings have often been used for detecting and estimating residual rod scale errors in leveled heights. Individual rod-pair scale errors are estimated by a two-step procedure using a model based on either differences in heights, differences in section height differences, or differences in section tilts. It is shown that the estimated rod-pair scale errors derived from each model are identical only when the data are correctly weighted, and the mathematical correlations are accounted for in the model based on heights. Analyses based on simple regressions of changes in height versus height can easily lead to incorrect conclusions. We also show that the statistically estimated scale errors are not a simple function of height, height difference, or tilt. The models are valid only when terrain slope is constant over adjacent pairs of setups (i.e., smoothly varying terrain). In order to discriminate between rod scale errors and vertical displacements due to crustal motion, the individual rod-pairs should be used in more than one leveling, preferably in areas of contrasting tectonic activity. From an analysis of 37 separately calibrated rod-pairs used in 55 levelings in southern California, we found eight statistically significant coefficients that could be reasonably attributed to rod scale errors, only one of which was larger than the expected random error in the applied calibration-based scale correction. However, significant differences with other independent checks indicate that caution should be exercised before accepting these results as evidence of scale error. Further refinements of the technique are clearly needed if the results are to be routinely applied in practice.

  7. A systematic impact assessment of GRACE error correlation on data assimilation in hydrological models

    NASA Astrophysics Data System (ADS)

    Schumacher, Maike; Kusche, Jürgen; Döll, Petra

    2016-06-01

    Recently, ensemble Kalman filters (EnKF) have found increasing application for merging hydrological models with total water storage anomaly (TWSA) fields from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. Previous studies have disregarded the effect of spatially correlated errors of GRACE TWSA products in their investigations. Here, for the first time, we systematically assess the impact of the GRACE error correlation structure on EnKF data assimilation into a hydrological model, i.e. on estimated compartmental and total water storages and model parameter values. Our investigations include (1) assimilating gridded GRACE-derived TWSA into the WaterGAP Global Hydrology Model and, simultaneously, calibrating its parameters; (2) introducing GRACE observations on different spatial scales; (3) modelling observation errors as either spatially white or correlated in the assimilation procedure, and (4) replacing the standard EnKF algorithm by the square root analysis scheme or, alternatively, the singular evolutive interpolated Kalman filter. Results of a synthetic experiment designed for the Mississippi River Basin indicate that the hydrological parameters are sensitive to TWSA assimilation if spatial resolution of the observation data is sufficiently high. We find a significant influence of spatial error correlation on the adjusted water states and model parameters for all implemented filter variants, in particular for subbasins with a large discrepancy between observed and initially simulated TWSA and for north-south elongated sub-basins. Considering these correlated errors, however, does not generally improve results: while some metrics indicate that it is helpful to consider the full GRACE error covariance matrix, it appears to have an adverse effect on others. We conclude that considering the characteristics of GRACE error correlation is at least as important as the selection of the spatial discretisation of TWSA observations, while the choice
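
    A minimal stochastic EnKF update in a hypothetical toy setup, comparing a full, spatially correlated observation-error covariance R with its diagonal (white-error) approximation; it is meant only to make the effect discussed above concrete, not to reproduce the GRACE/WaterGAP experiment.

      import numpy as np

      rng = np.random.default_rng(4)
      n_state, n_obs, n_ens = 5, 5, 100

      # correlated observation errors: exponential correlation between "grid cells"
      dist = np.abs(np.subtract.outer(np.arange(n_obs), np.arange(n_obs)))
      R_full = 0.3 ** 2 * np.exp(-dist / 2.0)
      R_diag = np.diag(np.diag(R_full))

      H = np.eye(n_obs, n_state)                       # observe the state directly
      x_true = rng.normal(0.0, 1.0, n_state)
      y = H @ x_true + rng.multivariate_normal(np.zeros(n_obs), R_full)

      def enkf_update(ens, y, H, R):
          # stochastic (perturbed-observation) EnKF analysis step
          X = ens - ens.mean(axis=1, keepdims=True)
          Y = H @ X
          K = (X @ Y.T) @ np.linalg.inv(Y @ Y.T + (n_ens - 1) * R)
          pert = rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
          return ens + K @ (y[:, None] + pert - H @ ens)

      prior = rng.normal(0.0, 1.0, (n_state, n_ens))
      for name, R in [("full R", R_full), ("diagonal R", R_diag)]:
          post = enkf_update(prior.copy(), y, H, R)
          print(f"{name:10s}: analysis error = {np.linalg.norm(post.mean(axis=1) - x_true):.3f}")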

  8. Verification of unfold error estimates in the unfold operator code

    SciTech Connect

    Fehl, D.L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. © 1997 American Institute of Physics.

  9. Verification of unfold error estimates in the unfold operator code

    NASA Astrophysics Data System (ADS)

    Fehl, D. L.; Biggs, F.

    1997-01-01

    Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.

  10. An analysis of the least-squares problem for the DSN systematic pointing error model

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1991-01-01

    A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.

  11. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    NASA Astrophysics Data System (ADS)

    Kanphet, J.; Suriyapee, S.; Dumrongkijudom, N.; Sanghangthum, T.; Kumkhwao, J.; Wisetrintong, M.

    2016-03-01

    The purpose of this study was to determine the patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 tracking images acquired before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, reflecting the stability of organ position achieved with DIBH. The systematic error is likewise about half of the random error, because modern linac machines can reduce systematic uncertainty effectively, while random errors remain uncontrollable.
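
    One common convention for this kind of decomposition, shown here as a generic sketch with invented numbers rather than the study's data: the group systematic error is the standard deviation of the per-patient mean set-up errors, and the random error is the root mean square of the per-patient standard deviations.

      import numpy as np

      # per-patient set-up errors in one direction (mm); values are invented
      setup_errors_mm = {
          "patient_1": [0.4, 0.6, 0.2, 0.5],
          "patient_2": [-0.3, -0.1, -0.4, 0.0],
          "patient_3": [0.1, 0.3, -0.2, 0.2],
      }

      means = np.array([np.mean(v) for v in setup_errors_mm.values()])
      sds = np.array([np.std(v, ddof=1) for v in setup_errors_mm.values()])

      sigma_systematic = np.std(means, ddof=1)       # spread of the patient means
      sigma_random = np.sqrt(np.mean(sds ** 2))      # pooled intra-patient spread
      print(f"systematic error = {sigma_systematic:.2f} mm, random error = {sigma_random:.2f} mm")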

  12. Error propagation and scaling for tropical forest biomass estimates.

    PubMed Central

    Chave, Jerome; Condit, Richard; Aguilar, Salomon; Hernandez, Andres; Lao, Suzanne; Perez, Rolando

    2004-01-01

    The above-ground biomass (AGB) of tropical forests is a crucial variable for ecologists, biogeochemists, foresters and policymakers. Tree inventories are an efficient way of assessing forest carbon stocks and emissions to the atmosphere during deforestation. To make correct inferences about long-term changes in biomass stocks, it is essential to know the uncertainty associated with AGB estimates, yet this uncertainty is rarely evaluated carefully. Here, we quantify four types of uncertainty that could lead to statistical error in AGB estimates: (i) error due to tree measurement; (ii) error due to the choice of an allometric model relating AGB to other tree dimensions; (iii) sampling uncertainty, related to the size of the study plot; (iv) representativeness of a network of small plots across a vast forest landscape. In previous studies, these sources of error were reported but rarely integrated into a consistent framework. We estimate all four terms in a 50 hectare (ha, where 1 ha = 10^4 m^2) plot on Barro Colorado Island, Panama, and in a network of 1 ha plots scattered across central Panama. We find that the most important source of error is currently related to the choice of the allometric model. More work should be devoted to improving the predictive power of allometric models for biomass. PMID:15212093
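
    A deliberately simplified illustration of how such terms could be combined; the percentages are placeholders rather than the study's values, and the paper's actual propagation is more elaborate. If the four sources were independent relative standard errors, they would add in quadrature.

      import math

      err_measurement = 0.02   # tree measurement error (placeholder)
      err_allometry = 0.20     # allometric model choice (placeholder)
      err_sampling = 0.10      # plot-size sampling error (placeholder)
      err_landscape = 0.08     # representativeness of the plot network (placeholder)

      combined = math.sqrt(err_measurement ** 2 + err_allometry ** 2
                           + err_sampling ** 2 + err_landscape ** 2)
      print(f"combined relative uncertainty ~ {100.0 * combined:.1f}%")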

  13. Error estimation for the linearized auto-localization algorithm.

    PubMed

    Guevara, Jorge; Jiménez, Antonio R; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
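
    A generic sketch of first-order (Taylor) error propagation, not the LAL-specific derivation: for a function f of uncertain inputs with covariance Sx, the propagated covariance is approximately J Sx J^T, with J the Jacobian at the working point. The distance example and the coordinate variances below are hypothetical.

      import numpy as np

      def propagate(f, x0, Sx, eps=1.0e-6):
          # first-order (delta-method) propagation with a numerical Jacobian
          x0 = np.asarray(x0, dtype=float)
          f0 = np.atleast_1d(f(x0))
          J = np.zeros((f0.size, x0.size))
          for j in range(x0.size):
              dx = np.zeros_like(x0)
              dx[j] = eps
              J[:, j] = (np.atleast_1d(f(x0 + dx)) - f0) / eps
          return J @ Sx @ J.T

      # example: distance between two 2-D points with uncertain coordinates
      dist = lambda p: np.hypot(p[0] - p[2], p[1] - p[3])
      Sx = np.diag([0.01, 0.01, 0.04, 0.04])          # coordinate variances (m^2)
      var_d = propagate(dist, [0.0, 0.0, 3.0, 4.0], Sx)[0, 0]
      print(f"propagated distance std ~ {np.sqrt(var_d):.3f} m")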

  14. Error estimation for the linearized auto-localization algorithm.

    PubMed

    Guevara, Jorge; Jiménez, Antonio R; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

  15. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965

  16. A review on the impact of systematic safety processes for the control of error in medicine.

    PubMed

    Damiani, Gianfranco; Pinnarelli, Luigi; Scopelliti, Lucia; Sommella, Lorenzo; Ricciardi, Walter

    2009-07-01

    Among risk management initiatives, systematic safety processes (SSPs), implemented within health care organizations, could be useful in managing patient safety. The purpose of this article is to conduct a systematic literature review assessing the impact of SSPs on different error categories. Articles that investigated the relation between SSPs and clinical and organizational outcomes were selected from the scientific literature. The proportion and impact of proactive and reactive SSPs were calculated for five error categories. Proactive interventions had a more positive impact than reactive ones in reducing medication errors, technical errors and errors due to personnel. Proactive and reactive SSPs had similar effects in reducing errors related to a wrong procedure. A single reactive study had a non-positive influence on communication errors. Overall, the impact of proactive processes prevailed over that of reactive ones. This article can help decision makers identify which SSP is the most appropriate against specific error categories. PMID:19564841

  17. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

    NASA Technical Reports Server (NTRS)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and

  18. First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing Methods and Systematic Error Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  19. Condition and Error Estimates in Numerical Matrix Computations

    SciTech Connect

    Konstantinov, M. M.; Petkov, P. H.

    2008-10-30

    This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.

  20. Error analysis for the Fourier domain offset estimation algorithm

    NASA Astrophysics Data System (ADS)

    Wei, Ling; He, Jieling; He, Yi; Yang, Jinsheng; Li, Xiqi; Shi, Guohua; Zhang, Yudong

    2016-02-01

    The offset estimation algorithm is crucial for the accuracy of the Shack-Hartmann wave-front sensor. Recently, the Fourier Domain Offset (FDO) algorithm has been proposed for offset estimation. Similar to other algorithms, the accuracy of FDO is affected by noise such as background noise, photon noise, and 'fake' spots. However, no adequate quantitative error analysis has been performed for FDO in previous studies, which is of great importance for practical applications of the FDO. In this study, we quantitatively analysed how the estimation error of FDO is affected by noise based on theoretical deduction, numerical simulation, and experiments. The results demonstrate that the standard deviation of the wobbling error is: (1) inversely proportional to the raw signal to noise ratio, and proportional to the square of the sub-aperture size in the presence of background noise; and (2) proportional to the square root of the intensity in the presence of photonic noise. Furthermore, the upper bound of the estimation error is proportional to the intensity of 'fake' spots and the sub-aperture size. The results of the simulation and experiments agreed with the theoretical analysis.

  1. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2008-03-01

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  2. Concise Formulas for the Standard Errors of Component Loading Estimates.

    ERIC Educational Resources Information Center

    Ogasawara, Haruhiko

    2002-01-01

    Derived formulas for the asymptotic standard errors of component loading estimates to cover the cases of principal component analysis for unstandardized and standardized variables with orthogonal and oblique rotations. Used the formulas with a real correlation matrix of 355 subjects who took 12 psychological tests. (SLD)

  3. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  4. Estimating Filtering Errors Using the Peano Kernel Theorem

    SciTech Connect

    Jerome Blair

    2009-02-20

    The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.

  5. Experimental investigation of the systematic error on photomechanic methods induced by camera self-heating.

    PubMed

    Ma, Qinwei; Ma, Shaopeng

    2013-03-25

    The systematic error in photomechanic methods caused by self-heating-induced image expansion when using a digital camera was systematically studied, and a new physical model to explain the mechanism has been proposed and verified. The experimental results showed that the thermal expansion of the camera outer case and lens mount, rather than of mechanical components within the camera, was the main reason for image expansion. The corresponding systematic errors for both image-analysis- and fringe-analysis-based photomechanic methods were analyzed and measured, and error compensation techniques were then proposed and verified.

  6. When should systematic patient positioning errors in radiotherapy be corrected?

    PubMed

    Bortfeld, Thomas; van Herk, Marcel; Jiang, Steve B

    2002-12-01

    One way to reduce patient set-up errors in radiotherapy is to measure the position during the first N treatment fractions, and to do an unconditional correction of the set-up position once at the (N + 1)th fraction. This strategy is known as the 'no action level' protocol. The question is when to do the correction, i.e. what is the optimum value of N? We determine N by minimizing the expectation value of the total quadratic set-up error taken over all fractions. A central assumption that we make is that there is no time trend in the patient set-up. The result is a simple formula for the value of N, which is proportional to the square root of the total number of fractions, and to the ratio of the execution (delivery) error and preparation error. We also provide a formula for cases where the measurement error is not negligible. For typical cases the optimum value is N = 4. Because the optimum is shallow, the exact choice of N is uncritical.
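
    As a rough illustration of the scaling stated above (not the paper's exact derivation), the optimal number of measured fractions N can be sketched as the square root of the total number of fractions times the ratio of the execution and preparation error standard deviations; the unit proportionality constant and the parameter names below are assumptions.

        import math

        def optimal_measurement_fractions(total_fractions, execution_sd, preparation_sd):
            # N ~ sqrt(total fractions) * (execution error / preparation error),
            # following the proportionality quoted in the abstract; the constant
            # of proportionality is taken as 1 here for illustration only.
            n = math.sqrt(total_fractions) * execution_sd / preparation_sd
            return max(1, round(n))

        # e.g. 30 fractions, 2 mm execution SD, 3 mm preparation SD -> N = 4,
        # consistent with the 'typical' value quoted above.
        print(optimal_measurement_fractions(30, 2.0, 3.0))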

  7. Error estimates and specification parameters for functional renormalization

    SciTech Connect

    Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M.; Wetterich, Christof

    2013-07-15

    We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, solutions approximated by means of truncations depend not only on the choice of the retained information but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation, the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.

  8. Analysis of possible systematic errors in the Oslo method

    SciTech Connect

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-03-15

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  9. An Anisotropic A posteriori Error Estimator for CFD

    NASA Astrophysics Data System (ADS)

    Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando

    In this article, a robust anisotropic adaptive algorithm is presented for solving the compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected from several available choices (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina), giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio (stretched) elements. To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly adapted meshes in a few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed number of elements.

  10. On causes of the origin of systematic errors in latitude determination with the Moscow PZT.

    NASA Astrophysics Data System (ADS)

    Volchkov, A. A.; Gutsalo, G. A.

    Peculiarities of eye response during visual measurements of star positions on photographic plates are considered. It is shown that variations of the plate background density can be a source of systematic errors during latitude determinations with a PZT.

  11. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on the simulation results and conclusions. Verification and validation of the reliability of the numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of the numerical simulation by estimating the numerical approximation error, computational model-induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that the reliability of the numerical simulation can be improved.

  12. Motion estimation performance models with application to hardware error tolerance

    NASA Astrophysics Data System (ADS)

    Cheong, Hye-Yeon; Ortega, Antonio

    2007-01-01

    The progress of VLSI technology towards deep sub-micron feature sizes, e.g., sub-100 nanometer technology, has created a growing impact of hardware defects and fabrication process variability, which lead to reductions in yield rate. To address these problems, a new approach, system-level error tolerance (ET), has been recently introduced. Considering that a significant percentage of the entire chip production is discarded due to minor imperfections, this approach is based on accepting imperfect chips that introduce imperceptible/acceptable system-level degradation; this leads to increases in overall effective yield. In this paper, we investigate the impact of hardware faults on the video compression performance, with a focus on the motion estimation (ME) process. More specifically, we provide an analytical formulation of the impact of single and multiple stuck-at-faults within ME computation. We further present a model for estimating the system-level performance degradation due to such faults, which can be used for the error tolerance based decision strategy of accepting a given faulty chip. We also show how different faults and ME search algorithms compare in terms of error tolerance and define the characteristics of search algorithms that lead to increased error tolerance. Finally, we show that different hardware architectures performing the same metric computation have different error tolerance characteristics and we present the optimal ME hardware architecture in terms of error tolerance. While we focus on ME hardware, our work could also be applied to systems (e.g., classifiers, matching pursuits, vector quantization) where a selection is made among several alternatives (e.g., class label, basis function, quantization codeword) based on which choice minimizes an additive metric of interest.

  13. Test models for improving filtering with model errors through stochastic parameter estimation

    SciTech Connect

    Gershgorin, B.; Harlim, J.; Majda, A. J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.

  14. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  15. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    PubMed

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level.

  16. GPS/DR Error Estimation for Autonomous Vehicle Localization

    PubMed Central

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-01-01

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997

  18. Divergent estimation error in portfolio optimization and in linear regression

    NASA Astrophysics Data System (ADS)

    Kondor, I.; Varga-Haszonits, I.

    2008-08-01

    The problem of estimation error in portfolio optimization is discussed, in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges at a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition; it is accompanied by a number of critical phenomena and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of the above observations extends far beyond finance and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
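
    A small Monte Carlo sketch (not taken from the paper) of the divergence described above: for i.i.d. Gaussian returns with identity covariance, the true risk of the sample-estimated minimum-variance portfolio blows up relative to the optimum as N/T approaches 1. All names and parameter values are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def risk_inflation(N, T, trials=200):
            # Ratio between the true risk of the sample-estimated minimum-variance
            # portfolio and the true optimal risk, for i.i.d. standard normal
            # returns (true covariance = identity). Expected to blow up as N/T -> 1.
            ratios = []
            ones = np.ones(N)
            for _ in range(trials):
                R = rng.standard_normal((T, N))        # T observations of N assets
                S = np.cov(R, rowvar=False)            # sample covariance
                w = np.linalg.solve(S, ones)
                w /= w.sum()                           # estimated min-variance weights
                true_risk_est = w @ w                  # w^T C w with C = identity
                true_risk_opt = 1.0 / N                # equal weights are optimal here
                ratios.append(true_risk_est / true_risk_opt)
            return np.mean(ratios)

        for T in (400, 200, 120, 105):
            print(f"N/T = {100 / T:.2f}: risk inflation ~ {risk_inflation(100, T):.2f}")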

  19. Medication errors in paediatric care: a systematic review of epidemiology and an evaluation of evidence supporting reduction strategy recommendations

    PubMed Central

    Miller, Marlene R; Robinson, Karen A; Lubomski, Lisa H; Rinke, Michael L; Pronovost, Peter J

    2007-01-01

    Background Although children are at the greatest risk for medication errors, little is known about the overall epidemiology of these errors, where the gaps are in our knowledge, and to what extent national medication error reduction strategies focus on children. Objective To synthesise peer reviewed knowledge on children's medication errors and on recommendations to improve paediatric medication safety by a systematic literature review. Data sources PubMed, Embase and Cinahl from 1 January 2000 to 30 April 2005, and 11 national entities that have disseminated recommendations to improve medication safety. Study selection Inclusion criteria were peer reviewed original data in English language. Studies that did not separately report paediatric data were excluded. Data extraction Two reviewers screened articles for eligibility and for data extraction, and screened all national medication error reduction strategies for relevance to children. Data synthesis From 358 articles identified, 31 were included for data extraction. The definition of medication error was non‐uniform across the studies. Dispensing and administering errors were the most poorly and non‐uniformly evaluated. Overall, the distributional epidemiological estimates of the relative percentages of paediatric error types were: prescribing 3–37%, dispensing 5–58%, administering 72–75%, and documentation 17–21%. 26 unique recommendations for strategies to reduce medication errors were identified; none were based on paediatric evidence. Conclusions Medication errors occur across the entire spectrum of prescribing, dispensing, and administering, are common, and have a myriad of non‐evidence based potential reduction strategies. Further research in this area needs a firmer standardisation for items such as dose ranges and definitions of medication errors, broader scope beyond inpatient prescribing errors, and prioritisation of implementation of medication error reduction strategies. PMID:17403758

  20. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  1. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    PubMed

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.

  2. On systematic errors in spectral line parameters retrieved with the Voigt line profile

    NASA Astrophysics Data System (ADS)

    Kochanov, V. P.

    2012-08-01

    Systematic errors inherent in the Voigt line profile are analyzed. Molecular spectrum processing with the Voigt profile is shown to underestimate line intensities by 1-4%, with the errors in line positions being 0.0005 cm-1 and the decrease in pressure broadening coefficients varying from 5% to 55%.

  3. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    NASA Astrophysics Data System (ADS)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

    The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of the global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is, if at all, attacked empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component as a function of the harmonic degree, and the impact of using/not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering also acts on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.

  4. Discretization error estimation and exact solution generation using the method of nearby problems.

    SciTech Connect

    Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.

    2011-10-01

    The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
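
    For context, a minimal sketch of the Richardson-extrapolation error estimate that the abstract uses as its point of comparison; the refinement ratio, formal order of accuracy, and sample values below are illustrative assumptions.

        def richardson_error_estimate(f_fine, f_coarse, refinement_ratio=2.0, order=2):
            # Discretization error estimate for the fine-grid solution value by
            # Richardson extrapolation, the comparison method mentioned above.
            # It requires solutions on two systematically refined grids, whereas
            # the MNP/defect correction approach needs only a single grid.
            return (f_fine - f_coarse) / (refinement_ratio ** order - 1.0)

        # Example: a functional (e.g. a drag coefficient) computed on two grids.
        print(richardson_error_estimate(1.0523, 1.0611))   # estimated fine-grid error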

  5. Augmented GNSS differential corrections minimum mean square error estimation sensitivity to spatial correlation modeling errors.

    PubMed

    Kassabian, Nazelie; Lo Presti, Letizia; Rispoli, Francesco

    2014-06-11

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, with a view to lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes the true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio between the correlation distance and the Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.

  6. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    PubMed Central

    Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco

    2014-01-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort is being devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, with a view to lowering railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes the true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough values of the ratio between the correlation distance and the Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
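
    A toy sketch of an LMMSE correction estimate under a Gauss-Markov (exponential) spatial correlation model, in the spirit of the study described above; the one-dimensional geometry, parameter names, and values are illustrative assumptions, not the paper's simulation setup.

        import numpy as np

        def lmmse_dc_estimate(positions, measurements, noise_var, sigma2, corr_distance):
            # LMMSE estimate x_hat = C (C + R)^{-1} y, where C is the assumed
            # Gauss-Markov prior covariance of the true differential corrections,
            # C_ij = sigma2 * exp(-d_ij / corr_distance), and R is white
            # measurement-noise covariance. Illustrative 1-D geometry only.
            p = np.asarray(positions, dtype=float)
            d = np.abs(p[:, None] - p[None, :])              # pairwise distances
            C = sigma2 * np.exp(-d / corr_distance)          # prior covariance
            R = noise_var * np.eye(len(p))                   # measurement noise
            y = np.asarray(measurements, dtype=float)
            return C @ np.linalg.solve(C + R, y)

        print(lmmse_dc_estimate([0.0, 50.0, 120.0], [1.2, 0.9, 1.5],
                                noise_var=0.04, sigma2=0.25, corr_distance=200.0))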

  7. Efficiently estimating salmon escapement uncertainty using systematically sampled data

    USGS Publications Warehouse

    Reynolds, Joel H.; Woody, Carol Ann; Gove, Nancy E.; Fair, Lowell F.

    2007-01-01

    Fish escapement is generally monitored using nonreplicated systematic sampling designs (e.g., via visual counts from towers or hydroacoustic counts). These sampling designs support a variety of methods for estimating the variance of the total escapement. Unfortunately, all the methods give biased results, with the magnitude of the bias being determined by the underlying process patterns. Fish escapement commonly exhibits positive autocorrelation and nonlinear patterns, such as diurnal and seasonal patterns. For these patterns, poor choice of variance estimator can needlessly increase the uncertainty managers have to deal with in sustaining fish populations. We illustrate the effect of sampling design and variance estimator choice on variance estimates of total escapement for anadromous salmonids from systematic samples of fish passage. Using simulated tower counts of sockeye salmon Oncorhynchus nerka escapement on the Kvichak River, Alaska, five variance estimators for nonreplicated systematic samples were compared to determine the least biased. Using the least biased variance estimator, four confidence interval estimators were compared for expected coverage and mean interval width. Finally, five systematic sampling designs were compared to determine the design giving the smallest average variance estimate for total annual escapement. For nonreplicated systematic samples of fish escapement, all variance estimators were positively biased. Compared to the other estimators, the least biased estimator reduced bias by, on average, from 12% to 98%. All confidence intervals gave effectively identical results. Replicated systematic sampling designs consistently provided the smallest average estimated variance among those compared.
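
    As one concrete example of a variance estimator for a single, nonreplicated systematic sample (the abstract does not name the five estimators it compares, so this particular choice is an assumption), the successive-difference estimator can be sketched as follows; the counts and sampling fraction are illustrative.

        import numpy as np

        def successive_difference_variance(counts, sampling_fraction):
            # Expanded-total estimate and successive-difference variance estimate
            # for a single systematic sample of passage counts taken in time order.
            # Illustrative only; not necessarily one of the estimators in the study.
            y = np.asarray(counts, dtype=float)
            n = y.size
            expansion = 1.0 / sampling_fraction
            total_hat = expansion * y.sum()
            # mean squared successive difference approximates the sampling variance
            ssd = np.sum(np.diff(y) ** 2) / (2.0 * (n - 1))
            var_total = (expansion ** 2) * n * (1.0 - sampling_fraction) * ssd
            return total_hat, var_total

        # Example: counting 10 minutes of every hour (sampling fraction 1/6).
        tot, var = successive_difference_variance([120, 135, 150, 160, 170, 155, 140], 1 / 6)
        print(tot, var ** 0.5)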

  8. Local error estimates for discontinuous solutions of nonlinear hyperbolic equations

    NASA Technical Reports Server (NTRS)

    Tadmor, Eitan

    1989-01-01

    Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_ε(x,t) is the solution of an approximate viscosity regularization, where ε > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_ε, pointwise values of u and its derivatives can be recovered with an error as close to ε as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^(1,∞) energy estimate for the discontinuous backward transport equation; this, in turn, leads one to an ε-uniform estimate on moments of the error u_ε - u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.

  9. Close-range radar rainfall estimation and error analysis

    NASA Astrophysics Data System (ADS)

    van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

    2016-08-01

    Quantitative precipitation estimation (QPE) using ground-based weather radar is affected by many sources of error. The most important of these are (1) radar calibration, (2) ground clutter, (3) wet-radome attenuation, (4) rain-induced attenuation, (5) vertical variability in rain drop size distribution (DSD), (6) non-uniform beam filling and (7) variations in DSD. This study presents an attempt to separate and quantify these sources of error in flat terrain very close to the radar (1-2 km), where (4), (5) and (6) only play a minor role. Other important errors, such as beam blockage, WLAN interference and hail contamination, exist; these are briefly mentioned but not considered in the analysis. A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm of precipitation in De Bilt, the Netherlands, is analyzed using radar, rain gauge and disdrometer data. Without any correction, it is found that the radar severely underestimates the total rain amount (by more than 50 %). The calibration of the radar receiver is operationally monitored by analyzing the received power from the sun. This turns out to cause a 1 dB underestimation. The operational clutter filter applied by KNMI is found to incorrectly identify precipitation as clutter, especially at near-zero Doppler velocities. An alternative simple clutter removal scheme using a clear sky clutter map improves the rainfall estimation slightly. To investigate the effect of wet-radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to 4 dB. Finally, a disdrometer is used to derive event and intra-event specific Z-R relations due to variations in the observed DSDs. Such variations may result in errors when applying the operational Marshall-Palmer Z-R relation. Correcting for all of these effects has a large positive impact on the radar-derived precipitation estimates and yields a good match between radar QPE and gauge

  10. DtaRefinery, a Software Tool for Elimination of Systematic Errors from Parent Ion Mass Measurements in Tandem Mass Spectra Data Sets*

    PubMed Central

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2010-01-01

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and high throughput MS/MS fragmentation have become widely available in recent years, allowing for significantly better discrimination between true and false MS/MS peptide identifications by the application of a relatively narrow window for maximum allowable deviations of measured parent ion masses. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. Based on our previous studies of systematic biases in mass measurement errors, here, we have designed an algorithm and software tool that eliminates the systematic errors from the peptide ion masses in MS/MS data. We demonstrate that the elimination of the systematic mass measurement errors allows for the use of tighter criteria on the deviation of measured mass from theoretical monoisotopic peptide mass, resulting in a reduction of both false discovery and false negative rates of peptide identification. A software implementation of this algorithm called DtaRefinery reads a set of fragmentation spectra, searches for MS/MS peptide identifications using a FASTA file containing expected protein sequences, fits a regression model that can estimate systematic errors, and then corrects the parent ion mass entries by removing the estimated systematic error components. The output is a new file with fragmentation spectra with updated parent ion masses. The software is freely available. PMID:20019053
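
    A toy sketch of the general idea (regress the observed parent-ion mass error on an experimental variable and subtract the fitted trend); the polynomial-in-scan-number model and all names below are illustrative assumptions, not the actual DtaRefinery implementation.

        import numpy as np

        def correct_systematic_mass_error(measured_mz, theoretical_mz, scan_numbers):
            # Fit the ppm error of confidently identified peptides against scan
            # number with a low-order polynomial and remove the fitted systematic
            # component from the measured masses; residuals are the random error.
            ppm_err = (measured_mz - theoretical_mz) / theoretical_mz * 1e6
            coeffs = np.polyfit(scan_numbers, ppm_err, deg=2)
            fitted = np.polyval(coeffs, scan_numbers)
            corrected_mz = measured_mz * (1.0 - fitted * 1e-6)
            return corrected_mz, ppm_err - fitted

        # Synthetic example: an 800 Da species with a slow 3-5 ppm drift over a run.
        scans = np.arange(1000.0)
        theo = np.full(1000, 800.0)
        meas = theo * (1 + (3 + 0.002 * scans) * 1e-6)
        corrected, residual_ppm = correct_systematic_mass_error(meas, theo, scans)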

  11. Second-order systematic errors in Mueller matrix dual rotating compensator ellipsometry.

    PubMed

    Broch, Laurent; En Naciri, Aotmane; Johann, Luc

    2010-06-10

    We investigate the systematic errors at the second order for a Mueller matrix ellipsometer in the dual rotating compensator configuration. Starting from a general formalism, we derive explicit second-order errors in the Mueller matrix coefficients of a given sample. We present the errors caused by the azimuthal inaccuracy of the optical components and their influences on the measurements. We demonstrate that methods based on four-zone or two-zone averaging measurements are effective in eliminating the errors due to the compensators. For the other elements, it is shown that the systematic errors at the second order can be canceled only for some coefficients of the Mueller matrix. The calibration step for the analyzer and the polarizer is developed. This important step is necessary to avoid azimuthal inaccuracy in these elements. Numerical simulations and experimental measurements are presented and discussed.

  12. Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model

    NASA Technical Reports Server (NTRS)

    Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long

    2001-01-01

    This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area-factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlation between the observed and predicted fields is positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
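
    One simple weighting choice consistent with the statement above is to combine ensemble members with weights inversely proportional to their estimated mean square errors; this is an illustrative sketch, not necessarily the optimal-weight formula derived in the Memorandum.

        import numpy as np

        def inverse_mse_ensemble(forecasts, mse_estimates):
            # Combine individual forecasts (shape: members x gridpoints) with
            # weights inversely proportional to each member's estimated MSE.
            w = 1.0 / np.asarray(mse_estimates, dtype=float)
            w /= w.sum()
            return np.tensordot(w, np.asarray(forecasts, dtype=float), axes=1)

        # Two toy members forecasting three gridpoints; the lower-MSE member dominates.
        members = np.array([[1.0, 2.0, 0.5], [1.4, 1.6, 0.9]])
        print(inverse_mse_ensemble(members, mse_estimates=[0.2, 0.4]))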

  13. Improved Soundings and Error Estimates using AIRS/AMSU Data

    NASA Technical Reports Server (NTRS)

    Susskind, Joel

    2006-01-01

    AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in the minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.

  14. Effects of measurement error on estimating biological half-life

    SciTech Connect

    Caudill, S.P.; Pirkle, J.L.; Michalek, J.E. )

    1992-10-01

    Direct computation of the observed biological half-life of a toxic compound in a person can lead to an undefined estimate when subsequent concentration measurements are greater than or equal to previous measurements. The likelihood of such an occurrence depends upon the length of time between measurements and the variance (intra-subject biological and inter-sample analytical) associated with the measurements. If the compound is lipophilic the subject's percentage of body fat at the times of measurement can also affect this likelihood. We present formulas for computing a model-predicted half-life estimate and its variance; and we derive expressions for the effect of sample size, measurement error, time between measurements, and any relevant covariates on the variability in model-predicted half-life estimates. We also use statistical modeling to estimate the probability of obtaining an undefined half-life estimate and to compute the expected number of undefined half-life estimates for a sample from a study population. Finally, we illustrate our methods using data from a study of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure among 36 members of Operation Ranch Hand, the Air Force unit responsible for the aerial spraying of Agent Orange in Vietnam.
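
    A minimal sketch of the two-point ("observed") half-life computation and of the undefined case described above, assuming first-order elimination; the concentrations and time interval are illustrative.

        import math

        def observed_half_life(c1, c2, delta_t):
            # Direct two-point half-life estimate under first-order elimination:
            # t_half = ln(2) * delta_t / ln(c1 / c2). Returns None when the second
            # measurement is not lower than the first, which is exactly the
            # 'undefined estimate' situation described in the abstract.
            if c2 >= c1:
                return None
            return math.log(2.0) * delta_t / math.log(c1 / c2)

        print(observed_half_life(50.0, 35.0, 5.0))   # ~9.7 (same units as delta_t)
        print(observed_half_life(50.0, 52.0, 5.0))   # None: measurement error dominates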

  15. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    ERIC Educational Resources Information Center

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  16. Verification of unfold error estimates in the UFO code

    SciTech Connect

    Fehl, D.L.; Biggs, F.

    1996-07-01

    Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-Pinch and ion-beam driven hohlraums.
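
    A generic sketch of the Monte Carlo uncertainty estimate described above: perturb the measured data with Gaussian deviates of the prescribed relative imprecision, re-run the unfold, and take the pointwise spread. The 'unfold' callable and parameter names are placeholders, not the UFO interface.

        import numpy as np

        def monte_carlo_unfold_uncertainty(unfold, data, rel_sigma=0.05, n_trials=100, seed=1):
            # 'unfold' is any callable mapping a data vector to an unfolded spectrum.
            # Each trial perturbs the data multiplicatively with Gaussian deviates of
            # relative standard deviation rel_sigma, then re-runs the unfold; the
            # pointwise standard deviation across trials is the uncertainty estimate.
            rng = np.random.default_rng(seed)
            data = np.asarray(data, dtype=float)
            spectra = [unfold(data * (1.0 + rel_sigma * rng.standard_normal(data.shape)))
                       for _ in range(n_trials)]
            return np.std(spectra, axis=0)

        # Trivial demonstration with an identity 'unfold': spread ~ 5% of each datum.
        print(monte_carlo_unfold_uncertainty(lambda d: d, [1.0, 2.0, 4.0]))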

  17. Bootstrap Standard Error Estimates in Dynamic Factor Analysis.

    PubMed

    Zhang, Guangjian; Browne, Michael W

    2010-05-28

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the interdependence of successive observations. Bootstrap methods can fill this need, however. The standard bootstrap of individual timepoints is not appropriate because it destroys their order in time and consequently gives incorrect standard error estimates. Two bootstrap procedures that are appropriate for dynamic factor analysis are described. The moving block bootstrap breaks down the original time series into blocks and draws samples of blocks instead of individual timepoints. A parametric bootstrap is essentially a Monte Carlo study in which the population parameters are taken to be estimates obtained from the available sample. These bootstrap procedures are demonstrated using 103 days of affective mood self-ratings from a pregnant woman, 90 days of personality self-ratings from a psychology freshman, and a simulation study.
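
    A minimal sketch of the moving block bootstrap resampling step described above; the block length, replicate count, and array layout are illustrative assumptions, and the statistic of interest (e.g. refit factor loadings) would then be computed on each replicate.

        import numpy as np

        def moving_block_bootstrap(series, block_length, n_boot=500, seed=0):
            # Draw bootstrap replicates of a (timepoints x variables) series by
            # resampling overlapping blocks of consecutive timepoints with
            # replacement and concatenating them, preserving short-range time
            # dependence that an ordinary i.i.d. bootstrap would destroy.
            rng = np.random.default_rng(seed)
            x = np.asarray(series)
            t = x.shape[0]
            n_blocks_available = t - block_length + 1
            n_blocks_needed = int(np.ceil(t / block_length))
            replicates = []
            for _ in range(n_boot):
                starts = rng.integers(0, n_blocks_available, size=n_blocks_needed)
                pieces = [x[s:s + block_length] for s in starts]
                replicates.append(np.concatenate(pieces, axis=0)[:t])
            return replicates

        # Standard errors then come from the spread of the statistic of interest
        # computed on each replicate.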

  18. A study for systematic errors of the GLA forecast model in tropical regions

    NASA Technical Reports Server (NTRS)

    Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin

    1988-01-01

    From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.

  19. SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors

    SciTech Connect

    Kathuria, K; Siebers, J

    2014-06-01

    Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (same one leaf offset per segment per beam) and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or that of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf-pairs) being simultaneously offset in many (5) of the control points (total 10–18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2–3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery. These two sources of error are “visually complementary” and uncorrelated

  20. Reducing Systematic Centroid Errors Induced by Fiber Optic Faceplates in Intensified High-Accuracy Star Trackers

    PubMed Central

    Xiong, Kun; Jiang, Jie

    2015-01-01

    Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920

  1. Error estimates for the Skyrme-Hartree-Fock model

    NASA Astrophysics Data System (ADS)

    Erler, J.; Reinhard, P.-G.

    2015-03-01

    There are many complementary strategies to estimate the extrapolation errors of a model calibrated in least-squares fits. We consider the Skyrme-Hartree-Fock model for nuclear structure and dynamics and exemplify the following five strategies: uncertainties from statistical analysis, covariances between observables, trends of residuals, variation of fit data, and dedicated variation of model parameters. This gives useful insight into the impact of the key fit data, which consist of binding energies, charge rms radii, and the charge form factor. Amongst others, we check in particular the predictive value for observables in the stable nucleus 208Pb, the super-heavy element 266Hs, r-process nuclei, and neutron stars.
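
    For the first strategy (uncertainties from statistical analysis), a generic sketch of linear least-squares error propagation to a predicted observable, sigma_A^2 = G C G^T; the Jacobian and covariance values are illustrative, and this is not the specific Skyrme-Hartree-Fock implementation.

        import numpy as np

        def propagated_uncertainty(jacobian, param_covariance):
            # Statistical extrapolation uncertainty of a predicted observable A:
            # sigma_A^2 = G C G^T, with G = dA/dp the sensitivity of the observable
            # to the model parameters and C the parameter covariance from the fit.
            G = np.atleast_2d(jacobian)
            C = np.asarray(param_covariance, dtype=float)
            return ((G @ C @ G.T).item()) ** 0.5

        # Illustrative two-parameter covariance and sensitivity vector.
        C = np.array([[0.04, 0.01], [0.01, 0.09]])
        print(propagated_uncertainty([1.5, -0.3], C))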

  2. SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.

    SciTech Connect

    QIAN, S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.

    2007-08-25

    Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) independent slide pitch test by use of a non-tilted reference beam, (3) non-tilted reference test combined with a tilted sample, (4) penta-prism scanning mode without a reference beam correction, (5) non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.

  3. Real-Time Parameter Estimation Using Output Error

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.

    2014-01-01

    Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.

  4. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation over Oceans

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, Max J.; Bacmeister, Julio T.; Chen, Baode; Takacs, Lawrence L.

    2006-01-01

    This study provides explanations for some of the experimental findings of Chao (2000) and Chao and Chen (2001) concerning the mechanisms responsible for the ITCZ in an aqua-planet model. These explanations are then applied to explain the origin of some of the systematic errors in the GCM simulation of ITCZ precipitation over oceans. The ITCZ systematic errors are highly sensitive to model physics and, by extension, model horizontal resolution. The findings in this study, along with those of Chao (2000) and Chao and Chen (2001, 2004), contribute to building a theoretical foundation for ITCZ study. A few possible methods of alleviating the systematic errors in the GCM simulation of ITCZ are discussed. This study uses a recent version of the Goddard Modeling and Assimilation Office's Goddard Earth Observing System (GEOS-5) GCM.

  5. Sources of systematic error in calibrated BOLD based mapping of baseline oxygen extraction fraction.

    PubMed

    Blockley, Nicholas P; Griffeth, Valerie E M; Stone, Alan J; Hare, Hannah V; Bulte, Daniel P

    2015-11-15

    Recently a new class of calibrated blood oxygen level dependent (BOLD) functional magnetic resonance imaging (MRI) methods was introduced to quantitatively measure the baseline oxygen extraction fraction (OEF). These methods rely on two respiratory challenges and a mathematical model of the resultant changes in the BOLD functional MRI signal to estimate the OEF. However, this mathematical model does not include all of the effects that contribute to the BOLD signal, it relies on several physiological assumptions, and it may be affected by intersubject physiological variability. The aim of this study was to investigate these sources of systematic error and their effect on estimating the OEF. This was achieved through simulation using a detailed model of the BOLD signal. Large ranges for intersubject variability in baseline physiological parameters such as haematocrit and cerebral blood volume were considered. Despite this, the uncertainty in the relationship between the measured BOLD signals and the OEF was relatively low. Investigations of the physiological assumptions that underlie the mathematical model revealed that OEF measurements are likely to be overestimated if oxygen metabolism changes during hypercapnia or cerebral blood flow changes under hyperoxia. Hypoxic hypoxia was predicted to result in an underestimation of the OEF, whilst anaemic hypoxia was found to have only a minimal effect.

  6. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    PubMed

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius. PMID:26931894

  7. Mechanical temporal fluctuation induced distance and force systematic errors in Casimir force experiments

    NASA Astrophysics Data System (ADS)

    Lamoreaux, Steve; Wong, Douglas

    2015-06-01

    The basic theory of temporal mechanical fluctuation induced systematic errors in Casimir force experiments is developed and applications of this theory to several experiments is reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness for which an exact result requires an electromagnetic mode analysis, time dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data presented.

  8. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.

    2006-01-01

    Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors, and why these remedies work, have remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.

  9. Local error estimates for adaptive simulation of the Reaction–Diffusion Master Equation via operator splitting

    PubMed Central

    Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda

    2015-01-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
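
    As a generic illustration of timestep adaptivity driven by a local error estimate (not the specific DFSP estimator derived in the paper), the sketch below uses step doubling: one splitting step of size dt is compared with two half steps, and dt is adjusted to keep the estimated local error below a tolerance. The toy reaction-diffusion step is invented for illustration.

    ```python
    import numpy as np

    def adaptive_split_step(state, dt, step_fn, tol=1e-3, order=1):
        """One adaptive operator-splitting step using step doubling.

        Generic illustration only: the local error is approximated by comparing a
        full step of size dt with two half steps, and dt is adapted to keep that
        error below tol (first-order splitting => exponent 1/(order+1)).
        """
        while True:
            full = step_fn(state, dt)                        # one step of size dt
            half = step_fn(step_fn(state, dt / 2), dt / 2)   # two half steps
            err = np.max(np.abs(full - half))                # local error estimate
            if err <= tol or dt < 1e-12:
                # accept the (more accurate) two-half-step result, adapt dt
                grow = 0.9 * (tol / max(err, 1e-15)) ** (1.0 / (order + 1))
                return half, dt * min(2.0, grow)
            dt *= 0.5                                        # reject, retry with smaller dt

    # Toy split step: linear A <-> B "reactions" followed by a crude "diffusion" mix
    def toy_step(u, dt):
        react = u + dt * np.array([-0.5 * u[0] + 0.3 * u[1], 0.5 * u[0] - 0.3 * u[1]])
        return react + dt * 0.01 * (np.roll(react, 1) - react)

    u, dt = np.array([100.0, 0.0]), 0.1
    u, dt = adaptive_split_step(u, dt, toy_step)
    print(u, dt)
    ```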

  10. Local error estimates for adaptive simulation of the reaction-diffusion master equation via operator splitting

    NASA Astrophysics Data System (ADS)

    Hellander, Andreas; Lawson, Michael J.; Drawert, Brian; Petzold, Linda

    2014-06-01

    The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps were adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the diffusive finite-state projection (DFSP) method, to incorporate temporal adaptivity.

  11. Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2006-01-01

    Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
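
    The equation-error approach summarized above amounts to an ordinary least-squares regression of a measured dependent variable on measured states and controls. The sketch below illustrates this with synthetic data; the regressors (alpha, q, delta_e) and coefficient values are invented for illustration and are not taken from the F-16 simulation used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    alpha = rng.normal(0, 0.05, n)     # angle of attack [rad] (illustrative)
    q = rng.normal(0, 0.10, n)         # pitch rate [rad/s]
    delta_e = rng.normal(0, 0.05, n)   # elevator deflection [rad]

    true_theta = np.array([-0.2, -4.5, -1.3, -0.9])         # bias and stability/control derivatives
    X = np.column_stack([np.ones(n), alpha, q, delta_e])    # regressor matrix
    z = X @ true_theta + rng.normal(0, 0.02, n)             # "measured" dependent variable

    theta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)       # equation-error (least-squares) estimate
    resid = z - X @ theta_hat
    sigma2 = resid @ resid / (n - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)                   # parameter error bounds (white-noise assumption)
    print(theta_hat, np.sqrt(np.diag(cov)))
    ```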

  12. First- and second-order error estimates in Monte Carlo integration

    NASA Astrophysics Data System (ADS)

    Bakx, R.; Kleiss, R. H. P.; Versteegen, F.

    2016-11-01

    In Monte Carlo integration an accurate and reliable determination of the numerical integration error is essential. We point out the need for an independent estimate of the error on this error, for which we present an unbiased estimator. In contrast to the usual (first-order) error estimator, this second-order estimator can be shown to be not necessarily positive in an actual Monte Carlo computation. We propose an alternative and indicate how this can be computed in linear time without risk of large rounding errors. In addition, we comment on the relatively very slow convergence of the second-order error estimate.
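
    A minimal numerical illustration of the first-order error estimate and an "error on the error" follows; the second-order quantity here is the textbook expression based on the fourth central moment, not necessarily the unbiased estimator proposed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x: np.exp(-x * x)          # integrand on [0, 1]
    n = 100_000
    y = f(rng.random(n))

    integral = y.mean()
    var = y.var(ddof=1)
    first_order_err = np.sqrt(var / n)    # usual Monte Carlo error estimate

    m4 = np.mean((y - y.mean()) ** 4)     # fourth central moment
    var_of_var = (m4 - (n - 3) / (n - 1) * var ** 2) / n
    # error on the first-order error, via d(sqrt(var/n))/d(var)
    second_order_err = np.sqrt(max(var_of_var, 0.0)) / (2.0 * np.sqrt(n * var))
    print(integral, first_order_err, second_order_err)
    ```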

  13. Patient disclosure of medical errors in paediatrics: A systematic literature review.

    PubMed

    Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah

    2016-05-01

    Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings are provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified.

  14. Systematic errors analysis for a large dynamic range aberrometer based on aberration theory.

    PubMed

    Wu, Peng; Liu, Sheng; DeHoog, Edward; Schwiegerling, Jim

    2009-11-10

    In Ref. 1, it was demonstrated that the significant systematic errors of a type of large dynamic range aberrometer are strongly related to the power error (defocus) in the input wavefront. In this paper, a generalized theoretical analysis based on vector aberration theory is presented, and local shift errors of the SH spot pattern as a function of the lenslet position and the local wavefront tilt over the corresponding lenslet are derived. Three special cases, a spherical wavefront, a crossed cylindrical wavefront, and a cylindrical wavefront, are analyzed and the possibly affected Zernike terms in the wavefront reconstruction are investigated. The simulation and experimental results are illustrated to verify the theoretical predictions.

  15. Close-range radar rainfall estimation and error analysis

    NASA Astrophysics Data System (ADS)

    van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

    2012-04-01

    It is well-known that quantitative precipitation estimation (QPE) is affected by many sources of error. The most important of these are 1) radar calibration, 2) wet radome attenuation, 3) rain attenuation, 4) vertical profile of reflectivity, 5) variations in drop size distribution, and 6) sampling effects. The study presented here is an attempt to separate and quantify these sources of error. For this purpose, QPE is performed very close to the radar (~1-2 km) so that 3), 4), and 6) will only play a minor role. Error source 5) can be corrected for because of the availability of two disdrometers (instruments that measure the drop size distribution). A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm in De Bilt, The Netherlands is analyzed. Radar, rain gauge, and disdrometer data from De Bilt are used for this. It is clear from the analyses that without any corrections, the radar severely underestimates the total rain amount (only 25 mm). To investigate the effect of wet radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation up to ~4 dB. The calibration of the radar is checked by looking at received power from the sun. This turns out to cause another 1 dB of underestimation. The effect of variability of drop size distributions is shown to cause further underestimation. Correcting for all of these effects yields a good match between radar QPE and gauge measurements.
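
    To make the dB-scale corrections concrete, the sketch below propagates the roughly 4 dB wet-radome and 1 dB calibration offsets quoted above through a Z-R relation. The Marshall-Palmer coefficients (a = 200, b = 1.6) are an assumption for illustration; the study's disdrometer-derived relation would be used in practice.

    ```python
    def rain_rate(dbz, a=200.0, b=1.6):
        """Rain rate [mm/h] from reflectivity [dBZ] via Z = a * R**b (assumed coefficients)."""
        z = 10.0 ** (dbz / 10.0)      # reflectivity factor [mm^6 m^-3]
        return (z / a) ** (1.0 / b)

    dbz_measured = 38.0
    dbz_corrected = dbz_measured + 4.0 + 1.0   # add back wet-radome and calibration losses

    print(rain_rate(dbz_measured), rain_rate(dbz_corrected))
    # The ~5 dB total correction roughly doubles the estimated rain rate, consistent
    # with the severe underestimation reported before correction.
    ```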

  16. Optimizing MRI-targeted fusion prostate biopsy: the effect of systematic error and anisotropy on tumor sampling

    NASA Astrophysics Data System (ADS)

    Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.

    2015-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.

  17. Sampling errors in rainfall estimates by multiple satellites

    NASA Technical Reports Server (NTRS)

    North, Gerald R.; Shen, Samuel S. P.; Upson, Robert

    1993-01-01

    This paper examines the sampling characteristics of combining data collected by several low-orbiting satellites attempting to estimate the space-time average of rain rates. The several satellites can have different orbital and swath-width parameters. The satellite overpasses are allowed to make partial coverage snapshots of the grid box with each overpass. Such partial visits are considered in an approximate way, letting each intersection area fraction of the grid box by a particular satellite swath be a random variable with mean and variance parameters computed from exact orbit calculations. The derivation procedure is based upon the spectral minimum mean-square error formalism introduced by North and Nakamoto. By using a simple parametric form for the spacetime spectral density, simple formulas are derived for a large number of examples, including the combination of the Tropical Rainfall Measuring Mission with an operational sun-synchronous orbiter. The approximations and results are discussed and directions for future research are summarized.

  18. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-Chao; Bao, Wan-Su; Wang, Xiang; Fu, Xiang-Qun

    2015-06-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. Project supported by the National Basic Research Program of China (Grant No. 2013CB338002).

  19. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
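
    A toy numerical example (not the paper's model) of the mechanism described above: under an asymmetric error cost, the estimate that minimizes expected cost is shifted away from the maximum-likelihood value.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    samples = rng.normal(0.0, 1.0, 50_000)   # stand-in "posterior" over the parameter; ML value is 0

    def expected_cost(estimate, k_over=5.0, k_under=1.0):
        err = estimate - samples
        # overestimation (err > 0) is penalized more heavily than underestimation
        return np.mean(np.where(err > 0, k_over * err ** 2, k_under * err ** 2))

    grid = np.linspace(-1.5, 1.5, 301)
    optimal = grid[int(np.argmin([expected_cost(g) for g in grid]))]
    print(optimal)   # clearly negative: the optimal estimate is biased away from 0
    ```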

  20. Model Error Estimation for the CPTEC Eta Model

    NASA Technical Reports Server (NTRS)

    Tippett, Michael K.; daSilva, Arlindo

    1999-01-01

    Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here with a number of simplifying assumption, a framework is developed for isolating the model error given the forecast error at two lead-times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.

  1. Detecting Positioning Errors and Estimating Correct Positions by Moving Window

    PubMed Central

    Song, Ha Yoon; Lee, Jun Seok

    2015-01-01

    In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research. PMID:26624282
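
    A minimal sketch of the moving-window idea described above: speeds are derived from consecutive positions, and a new speed falling outside the window mean plus or minus k standard deviations is flagged as erroneous. The window length and threshold k are illustrative choices, not the paper's tuned parameters.

    ```python
    import numpy as np

    def flag_speed_errors(times, positions, window=10, k=3.0):
        """Flag positioning samples whose derived speed leaves the moving significant interval."""
        speeds = np.linalg.norm(np.diff(positions, axis=0), axis=1) / np.diff(times)
        flags = np.zeros(len(speeds), dtype=bool)
        for i in range(window, len(speeds)):
            win = speeds[i - window:i][~flags[i - window:i]]   # exclude already-flagged samples
            if win.size < 3:
                continue
            mu, sd = win.mean(), win.std(ddof=1)
            flags[i] = abs(speeds[i] - mu) > k * sd            # outside the significant interval
        return speeds, flags
    ```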

  2. Detecting Positioning Errors and Estimating Correct Positions by Moving Window.

    PubMed

    Song, Ha Yoon; Lee, Jun Seok

    2015-01-01

    In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research.

  3. Adaptive error covariances estimation methods for ensemble Kalman filters

    SciTech Connect

    Zhen, Yicun; Harlim, John

    2015-08-01

    This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics, which can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method, designed to avoid the expensive cost of inverting error covariance matrices of products of innovation processes of different lags when the number of observations becomes large. When only products of innovation processes up to one lag are used, the computational cost is comparable to the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows information from products of innovation processes of more than one lag to be used. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry–Sauer scheme are shown for various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on the low-dimensional problems and has a wider range of accurate estimates than the Berry–Sauer method on the L-96 example.

  4. Detecting Positioning Errors and Estimating Correct Positions by Moving Window.

    PubMed

    Song, Ha Yoon; Lee, Jun Seok

    2015-01-01

    In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research. PMID:26624282

  5. Variance estimation for systematic designs in spatial surveys.

    PubMed

    Fewster, R M

    2011-12-01

    In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940

  6. Variance estimation for systematic designs in spatial surveys.

    PubMed

    Fewster, R M

    2011-12-01

    In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation.

  7. Systematic errors in conductimetric instrumentation due to bubble adhesions on the electrodes: An experimental assessment

    NASA Astrophysics Data System (ADS)

    Neelakantaswamy, P. S.; Rajaratnam, A.; Kisdnasamy, S.; Das, N. P.

    1985-02-01

    Systematic errors in conductimetric measurements are often encountered due to partial screening of interelectrode current paths resulting from the adhesion of bubbles on the electrode surfaces of the cell. A method of assessing this error quantitatively by a simulated electrolytic tank technique is proposed here. The experimental setup simulates the bubble-curtain effect in the electrolytic tank by means of a pair of electrodes partially covered by a monolayer of small polystyrene-foam spheres representing the bubble adhesions. By varying the number of spheres stuck on the electrode surface, the fractional area covered by the bubbles is controlled; and by measuring the interelectrode impedance, the systematic error is determined as a function of the fractional area covered by the simulated bubbles. A theoretical model that predicts the interelectrode resistance, and hence the systematic error caused by bubble adhesions, is developed by considering the random dispersal of bubbles on the electrodes. Relevant computed results are compared with the measured impedance data obtained from the electrolytic tank experiment. Results due to other models are also presented and discussed. A time-domain measurement on the simulated cell to study the capacitive effects of the bubble curtain is also explained.

  8. A Systematic Approach for Model-Based Aircraft Engine Performance Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2010-01-01

    A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter

  9. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    NASA Astrophysics Data System (ADS)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of model-based control for toroidal plasmas have shown better performance than conventional feedback controllers. One prerequisite of model-based control is the availability of a control-oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in the EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper discusses an additional use of the empirical model, namely to estimate the error field in EXTRAP T2R. Two potential methods that can estimate the error field are discussed. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.

  10. SU-F-BRD-03: Determination of Plan Robustness for Systematic Setup Errors Using Trilinear Interpolation

    SciTech Connect

    Fix, MK; Volken, W; Frei, D; Terribilini, D; Dal Pra, A; Schmuecking, M; Manser, P

    2014-06-15

    Purpose: Treatment plan evaluations in radiotherapy currently ignore the dosimetric impact of setup uncertainties. The determination of the robustness for systematic errors is rather computationally intensive. This work investigates interpolation schemes to quantify the robustness of treatment plans for systematic errors in terms of efficiency and accuracy. Methods: The impact of systematic errors on dose distributions for patient treatment plans is determined by using the Swiss Monte Carlo Plan (SMCP). Errors in all translational directions are considered, ranging from −3 to +3 mm in 1 mm steps. For each systematic error a full MC dose calculation is performed, leading to 343 dose calculations, which are used as benchmarks. The interpolation uses only a subset of the 343 calculations, namely 9, 15 or 27, and determines all dose distributions by trilinear interpolation. This procedure is applied to a prostate and a head and neck case using Volumetric Modulated Arc Therapy with 2 arcs. The relative differences of the dose volume histograms (DVHs) of the target and the organs at risk are compared. Finally, the interpolation schemes are used to compare the robustness of 4- versus 2-arc head and neck treatment plans. Results: Relative local differences of the DVHs increase for a decreasing number of dose calculations used in the interpolation. The mean deviations are <1%, 3.5% and 6.5% for subsets of 27, 15 and 9 dose calculations, respectively. Thereby the dose computation times are reduced by factors of 13, 25 and 43, respectively. The comparison of the 4- versus 2-arc plans shows a decrease in robustness; however, this is outweighed by the dosimetric improvements. Conclusion: The results of this study suggest that the use of trilinear interpolation to determine the robustness of treatment plans can remarkably reduce the number of dose calculations. This work was supported by Varian Medical Systems.
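
    A minimal sketch of the interpolation idea, assuming a regular 3-point-per-axis subset (27 calculations) and a hypothetical dose_metric() standing in for the expensive Monte Carlo dose calculation; trilinear interpolation then supplies the remaining shifts of the fine -3 to +3 mm grid.

    ```python
    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def dose_metric(shift):
        """Hypothetical stand-in for a full MC dose metric at a given setup shift [mm]."""
        return 60.0 - 0.4 * np.sum(np.abs(shift)) - 0.05 * np.sum(shift ** 2)

    coarse = np.array([-3.0, 0.0, 3.0])                      # mm, per axis: 27 "expensive" runs
    values = np.array([[[dose_metric(np.array([x, y, z]))
                         for z in coarse] for y in coarse] for x in coarse])
    interp = RegularGridInterpolator((coarse, coarse, coarse), values)  # trilinear by default

    fine = np.arange(-3, 4)                                  # all 343 systematic shifts
    pts = np.array([[x, y, z] for x in fine for y in fine for z in fine], dtype=float)
    approx = interp(pts)                                     # interpolated metrics
    exact = np.array([dose_metric(p) for p in pts])          # benchmark values
    print(np.max(np.abs(approx - exact) / exact) * 100, "% max relative error")
    ```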

  11. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  12. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.

  13. Voigt profile introduces optical depth dependent systematic errors - Detected in high resolution laboratory spectra of water

    NASA Astrophysics Data System (ADS)

    Birk, Manfred; Wagner, Georg

    2016-02-01

    The Voigt profile commonly used in radiative transfer modeling of Earth's and planets' atmospheres for remote sensing/climate modeling produces systematic errors so far not accounted for. Saturated lines are systematically too narrow when calculated from pressure broadening parameters based on the analysis of laboratory data with the Voigt profile. This is caused by line narrowing effects leading to systematically too small fitted broadening parameters when applying the Voigt profile. These effective values are still valid to model non-saturated lines with sufficient accuracy. Saturated lines dominated by the wings of the line profile are sufficiently accurately modeled with a Voigt profile with the correct broadening parameters and are thus systematically too narrow when calculated with the effective values. The systematic error was quantified by mid infrared laboratory spectroscopy of the water ν2 fundamental. Correct Voigt profile based pressure broadening parameters for saturated lines were 3-4% larger than the effective ones in the spectroscopic database. Impacts on remote sensing and climate modeling are expected. Combination of saturated and non-saturated lines in the spectroscopic analysis will quantify line narrowing with unprecedented precision.
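
    For reference, the Voigt profile discussed above can be evaluated through the Faddeeva function; the sketch below compares a line computed with an effective (too small) Lorentzian width against one whose width is about 4% larger, as reported for saturated lines. All numerical values are illustrative.

    ```python
    import numpy as np
    from scipy.special import wofz

    def voigt(nu, nu0, gamma_l, gamma_d):
        """Area-normalized Voigt profile; gamma_l is the Lorentzian HWHM,
        gamma_d the Gaussian 1/e half-width (= Doppler HWHM / sqrt(ln 2))."""
        z = ((nu - nu0) + 1j * gamma_l) / gamma_d
        return np.real(wofz(z)) / (gamma_d * np.sqrt(np.pi))

    nu = np.linspace(-1.0, 1.0, 2001)          # wavenumber offset from line center [cm^-1]
    effective = voigt(nu, 0.0, 0.080, 0.02)    # effective (too-small) pressure broadening
    corrected = voigt(nu, 0.0, 0.083, 0.02)    # ~4% larger width, as found for saturated lines
    ```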

  14. DtaRefinery: a software tool for elimination of systematic errors from parent ion mass measurements in tandem mass spectra datasets

    SciTech Connect

    Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.

    2009-12-16

    Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
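
    A minimal sketch of the general recalibration idea: ppm mass errors of confident identifications are fit against a covariate (here m/z only, with a low-order polynomial) and the fitted systematic component is removed. DtaRefinery itself supports richer, multidimensional error models; this is only the simplest case.

    ```python
    import numpy as np

    def recalibrate(mz, observed_mass, theoretical_mass, degree=2):
        """Fit and remove a smooth systematic ppm mass error as a function of m/z."""
        ppm_err = (observed_mass - theoretical_mass) / theoretical_mass * 1e6
        coeffs = np.polyfit(mz, ppm_err, degree)        # systematic error model
        systematic_ppm = np.polyval(coeffs, mz)
        corrected = observed_mass / (1.0 + systematic_ppm * 1e-6)
        return corrected, coeffs
    ```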

  15. Evaluating concentration estimation errors in ELISA microarray experiments

    SciTech Connect

    Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.

    2005-01-26

    Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
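
    A simplified propagation-of-error sketch follows. A log-linear standard curve stands in for the four-parameter logistic typically used in ELISA; the point is only to show how response variance and calibration-parameter covariance combine, via the delta method, into a concentration prediction error.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    conc_std = np.array([1, 3, 10, 30, 100, 300], float)               # known standard concentrations
    resp_std = 0.2 + 0.45 * np.log(conc_std) + rng.normal(0, 0.03, 6)  # simulated responses

    X = np.column_stack([np.ones_like(conc_std), np.log(conc_std)])
    beta, *_ = np.linalg.lstsq(X, resp_std, rcond=None)                # fit y = a + b*log(c)
    sigma2 = np.sum((resp_std - X @ beta) ** 2) / (len(conc_std) - 2)
    cov_beta = sigma2 * np.linalg.inv(X.T @ X)                         # calibration parameter covariance

    a, b = beta
    y_new, var_y_new = 1.5, 0.03 ** 2                                  # new sample response and its variance
    c_hat = np.exp((y_new - a) / b)                                    # predicted concentration

    # delta-method gradient of c with respect to (y, a, b)
    g = np.array([c_hat / b, -c_hat / b, -c_hat * (y_new - a) / b ** 2])
    V = np.zeros((3, 3))
    V[0, 0] = var_y_new
    V[1:, 1:] = cov_beta
    var_c = g @ V @ g                                                  # propagated concentration variance
    print(c_hat, np.sqrt(var_c))
    ```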

  16. On the Gas Optimization and Systematic Error for the Gas Pixel Detector

    NASA Astrophysics Data System (ADS)

    Feng, Hua; Costa, Enrico; Muleri, Fabio; Bellazzini, Ronaldo; Soffitta, Paolo; Zhang, Heng; Li, Hong

    2016-07-01

    The gas pixel detector (GPD) is selected as the focal plane polarimeter for the X-ray Imaging Polarimetry Explorer (XIPE). We calculated the detection efficiency of different gas mixtures and simulated the electron tracks and degree of modulation at different X-ray energies using packages such as Geant4, Maxwell, and Garfield. The simulated results are consistent with measurements. We will demonstrate how the choice of gas mixture influences the polarization sensitivity. We will also show test results for the systematic error, i.e., the response of the detector to unpolarized signals, which determines the limiting sensitivity. Our measurements indicate that the systematic error is well below 1% in degree of polarization for the GPD.

  17. Treatment of systematic errors in the processing of wide angle sonar sensor data for robotic navigation

    SciTech Connect

    Beckerman, M.; Oblow, E.M.

    1988-04-01

    A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction from wide-angle sonar sensor data of navigation maps for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.

  18. A constant altitude flight survey method for mapping atmospheric ambient pressures and systematic radar errors

    NASA Technical Reports Server (NTRS)

    Larson, T. J.; Ehernberger, L. J.

    1985-01-01

    The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.

  19. Effects of systematic errors on the mixing ratios of trace gases obtained from occultation spectra

    NASA Technical Reports Server (NTRS)

    Shaffer, W. A.; Shaw, J. H.; Farmer, C. B.

    1983-01-01

    The influence of systematic errors in the parameters of the models describing the geometry and the atmosphere on the profiles of trace gases retrieved from simulated solar occultation spectra, collected at satellite altitudes, is investigated. Because of smearing effects and other uncertainties, it may be preferable to calibrate the spectra internally by measuring absorption lines of an atmospheric gas such as CO2, whose vertical distribution is assumed, rather than to rely on externally supplied information.

  20. Estimating Equating Error in Observed-Score Equating. Research Report.

    ERIC Educational Resources Information Center

    van der Linden, Wim J.

    Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and in the population of examinees. This definition underlies, for example, the well-known approximation to the standard error of equating by Lord (1982).…

  1. Derivation and Application of a Global Albedo yielding an Optical Brightness To Physical Size Transformation Free of Systematic Errors

    NASA Technical Reports Server (NTRS)

    Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.

    2007-01-01

    Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross

  2. Parameter Estimation In Ensemble Data Assimilation To Characterize Model Errors In Surface-Layer Schemes Over Complex Terrain

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua; Lee, Jared; Lei, Lili

    2014-05-01

    fixed default values, suggesting that the parameters account for some systematic errors. Because the parameters can account for multiple sources of errors, the importance of terrain in determining surface-layer errors can be deduced from parameter estimates in complex terrain; parameter estimates with spatial scales similar to the terrain indicate that terrain is responsible for surface-layer model errors. We will also comment on whether residual errors in the state estimates and predictions appear to suggest further parametric model error, or some other source of error that may arise from incorrect similarity functions in the surface-layer schemes.

  3. Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models

    NASA Astrophysics Data System (ADS)

    Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin

    2014-05-01

    Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentations of physical processes, which can be amplified by feedbacks among climate components especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention on the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and location of strong SST gradients. Next the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. The results suggest that improving the simulation of regional processes may not suffice for a more

  4. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
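
    A small sketch contrasting the theoretical weighted-least-squares covariance with a residual-scaled, empirical one. This is the common variance-of-unit-weight inflation, shown only to illustrate the idea; it is not the specific reinterpretation derived in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    H = rng.normal(size=(40, 3))                  # measurement partials
    R = np.diag(np.full(40, 0.05 ** 2))           # assumed measurement noise covariance
    x_true = np.array([1.0, -2.0, 0.5])
    y = H @ x_true + rng.normal(0, 0.08, 40)      # actual noise larger than assumed

    W = np.linalg.inv(R)
    P_theory = np.linalg.inv(H.T @ W @ H)         # theoretical state error covariance
    x_hat = P_theory @ H.T @ W @ y                # weighted least squares estimate

    resid = y - H @ x_hat
    scale = (resid @ W @ resid) / (len(y) - len(x_hat))
    P_empirical = scale * P_theory                # residual-based covariance reflects actual errors
    print(np.sqrt(np.diag(P_theory)), np.sqrt(np.diag(P_empirical)))
    ```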

  5. Impact of the Born approximation on the estimation error in 2D inverse scattering

    NASA Astrophysics Data System (ADS)

    Diong, M. L.; Roueff, A.; Lasaygues, P.; Litman, A.

    2016-06-01

    The aim is to quantify the impact of the Born approximation on the estimation error for a simple inverse scattering problem, while taking into account the noise measurement features. The proposed method consists of comparing two estimation errors: the error obtained with the Born approximation and the error obtained without it. The first error is characterized by the mean and variance of the maximum likelihood estimator, which are straightforward to compute with the Born approximation because the corresponding estimator is linear. The second error is evaluated with the Cramer–Rao bound (CRB). The CRB is a lower bound on the variance of unbiased estimators and thus does not depend on the choice of the estimation method. Beyond the conclusions that will be established under the Born approximation, this study lays out a general methodology that can be generalized to any other approximation.

  6. mBEEF: An accurate semi-local Bayesian error estimation density functional

    NASA Astrophysics Data System (ADS)

    Wellendorff, Jess; Lundgaard, Keld T.; Jacobsen, Karsten W.; Bligaard, Thomas

    2014-04-01

    We present a general-purpose meta-generalized gradient approximation (MGGA) exchange-correlation functional generated within the Bayesian error estimation functional framework [J. Wellendorff, K. T. Lundgaard, A. Møgelhøj, V. Petzold, D. D. Landis, J. K. Nørskov, T. Bligaard, and K. W. Jacobsen, Phys. Rev. B 85, 235149 (2012)]. The functional is designed to give reasonably accurate density functional theory (DFT) predictions of a broad range of properties in materials physics and chemistry, while exhibiting a high degree of transferability. Particularly, it improves upon solid cohesive energies and lattice constants over the BEEF-vdW functional without compromising high performance on adsorption and reaction energies. We thus expect it to be particularly well-suited for studies in surface science and catalysis. An ensemble of functionals for error estimation in DFT is an intrinsic feature of exchange-correlation models designed this way, and we show how the Bayesian ensemble may provide a systematic analysis of the reliability of DFT based simulations.

  7. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄_error = -9 m, s.d._error = 47 m) and when estimating distances to real birds during field trials (x̄_error = 39 m, s.d._error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point
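
    A minimal sketch of the field-trial analysis: each surveyor's estimated distances are compared with measured distances to give a bias (mean error) and precision (standard deviation of error), in the same spirit as the values reported above. The numbers are invented for illustration.

    ```python
    import numpy as np

    measured = np.array([25.0, 40.0, 60.0, 85.0, 120.0, 150.0])    # laser-measured distances [m]
    estimated = np.array([30.0, 35.0, 70.0, 80.0, 100.0, 160.0])   # surveyor's aural estimates [m]

    error = estimated - measured
    bias = error.mean()              # analogue of the reported mean error
    precision = error.std(ddof=1)    # analogue of the reported s.d. of error
    print(bias, precision)
    ```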

  8. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    SciTech Connect

    Ju, Lili; Tian, Li; Wang, Desheng

    2009-01-01

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  9. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199 Hg atom

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Graner, Brent; Heckel, Blayne; Lindahl, Eric

    2016-05-01

    This talk provides a discussion of the systematic errors that were encountered in the 199Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E2 and v × E / c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  10. Systematic errors in the measurement of the permanent electric dipole moment (EDM) of the 199Hg atom

    NASA Astrophysics Data System (ADS)

    Chen, Yi; Graner, Brent; Lindahl, Eric; Heckel, Blayne

    2016-03-01

    This talk provides a discussion of the systematic errors that were encountered in the 199Hg experiment described earlier in this session. The dominant systematic error, unseen in previous 199Hg EDM experiments, arose from small motions of the Hg vapor cells due to forces exerted by the applied electric field. Methods used to understand this effect, as well as the anticipated sources of systematic errors such as leakage currents, parameter correlations, and E2 and v × E / c effects, will be presented. The total systematic error was found to be 72% as large as the statistical error of the EDM measurement. This work was supported by NSF Grant 1306743 and by DOE Grant DE-FG02-97ER41020.

  11. Areal measurement error with a dot planimeter: Some experimental estimates

    NASA Technical Reports Server (NTRS)

    Yuill, R. S.

    1971-01-01

    A shape analysis is presented which utilizes a computer to simulate a multiplicity of dot grids mathematically. Results indicate that the number of dots placed over an area to be measured provides the entire correlation with accuracy of measurement, the indices of shape being of little significance. Equations and graphs are provided from which the average expected error, and the maximum range of error, for various numbers of dot points can be read.
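
    The report's simulation of dot grids is not reproduced here, but the following hedged Python sketch illustrates the underlying technique: counting randomly offset regular dot grids over a known shape and recording how the measurement error shrinks as the number of dots grows. The circular test shape, spacings and trial counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def dot_grid_area(inside, spacing, offset):
    """Estimate the area of a region by counting regular grid dots that
    fall inside it; each dot represents spacing**2 units of area."""
    xs = np.arange(offset[0], 1.0, spacing)
    ys = np.arange(offset[1], 1.0, spacing)
    gx, gy = np.meshgrid(xs, ys)
    return inside(gx, gy).sum() * spacing**2

# Example region: a circle of radius 0.3 centred in the unit square.
circle = lambda x, y: (x - 0.5)**2 + (y - 0.5)**2 <= 0.3**2
true_area = np.pi * 0.3**2

# Simulate many randomly offset grids for each dot density and record errors.
for spacing in (0.1, 0.05, 0.02):
    est = [dot_grid_area(circle, spacing, rng.uniform(0, spacing, 2))
           for _ in range(500)]
    err = np.array(est) - true_area
    print(f"spacing={spacing:5.2f}  mean |error|={np.abs(err).mean():.4f}  "
          f"max |error|={np.abs(err).max():.4f}")
```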

  12. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  13. Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska

    NASA Astrophysics Data System (ADS)

    Bonin, J. A.; Chambers, D. P.

    2012-12-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.

  14. Random and systematic measurement errors in acoustic impedance as determined by the transmission line method

    NASA Technical Reports Server (NTRS)

    Parrott, T. L.; Smith, C. D.

    1977-01-01

    The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.

  15. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-10

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
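
    To make the distinction between the two error classes concrete, the short Python simulation below contrasts additive with multiplicative (value-proportional) noise on a toy elevation surface and its volume; it is an illustrative sketch only, not the paper's least-squares adjustment, and all dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "landslide" surface on a 100 x 100 m grid (1 m cells), heights in metres.
x, y = np.meshgrid(np.linspace(-50, 50, 101), np.linspace(-50, 50, 101))
true_height = 20.0 * np.exp(-(x**2 + y**2) / (2 * 20.0**2))
cell_area = 1.0  # m^2
true_volume = true_height.sum() * cell_area

def simulated_volume(error_model, n_trials=200, sigma=0.02):
    """Volume estimates when each height is observed with either additive
    noise (sigma in metres) or multiplicative noise (sigma as a fraction
    of the true value), as in a multiplicative error model."""
    vols = []
    for _ in range(n_trials):
        if error_model == "additive":
            obs = true_height + rng.normal(0.0, sigma, true_height.shape)
        else:  # multiplicative: error proportional to the true value
            obs = true_height * (1.0 + rng.normal(0.0, sigma, true_height.shape))
        vols.append(obs.sum() * cell_area)
    vols = np.array(vols)
    return vols.mean() - true_volume, vols.std(ddof=1)

for model in ("additive", "multiplicative"):
    bias, sd = simulated_volume(model)
    print(f"{model:>14}: volume bias = {bias:8.2f} m^3, sd = {sd:7.2f} m^3")
```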

  16. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  17. Adjustment of measurements with multiplicative errors: error analysis, estimates of the variance of unit weight, and effect on volume estimation from LiDAR-type digital elevation models.

    PubMed

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2013-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements such as GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper is to extend the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association of the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We will simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880

  18. Sensitivity of LIDAR Canopy Height Estimate to Geolocation Error

    NASA Astrophysics Data System (ADS)

    Tang, H.; Dubayah, R.

    2010-12-01

    Many factors affect the quality of canopy height structure data derived from space-based lidar such as DESDynI. Among these is geolocation accuracy. Inadequate geolocation information hinders subsequent analyses because a different portion of the canopy is observed relative to what is assumed. This is especially true in mountainous terrain where the effects of slope magnify geolocation errors. Mission engineering design must trade the expense of providing more accurate geolocation with the potential improvement in measurement accuracy. The objective of our work is to assess the effects of small errors in geolocation on subsequent retrievals of maximum canopy height for a varying set of canopy structures and terrains. Dense discrete lidar data from different forest sites (from La Selva Biological Station, Costa Rica, Sierra National Forest, California, and Hubbard Brook and Bartlett Experimental Forests in New Hampshire) are used to simulate DESDynI height retrievals using various geolocation accuracies. Results show that canopy height measurement errors generally increase as the geolocation error increases. Interestingly, most of the height errors are caused by variation of canopy height rather than topography (slope and aspect).

  19. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    NASA Astrophysics Data System (ADS)

    Gordon, J. J.; Siebers, J. V.

    2007-04-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: σ[1 - γN/25] < 0.2, where γ = Σ/σ. They were found to be inaccurate for σ[1 - γN/25] > 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σP, where σP = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σP takes values other than 0.32 cm.) When σ < σP, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σP, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of
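
    For readers who want to apply the quoted criterion, the small Python helper below evaluates it together with the widely cited van Herk margin recipe M = 2.5Σ + 0.7σ; the numerical example is hypothetical and the paper's alternative margin algorithm is not reproduced.

```python
def vhmf_margin_cm(Sigma, sigma):
    """van Herk margin formula (CTV-to-PTV), all quantities in cm:
    M = 2.5*Sigma + 0.7*sigma, with Sigma the systematic and sigma the
    random setup-error standard deviation."""
    return 2.5 * Sigma + 0.7 * sigma

def cm_criterion_satisfied(Sigma, sigma, N, threshold=0.2):
    """Approximate validity criterion quoted in the abstract:
    sigma * (1 - gamma * N / 25) < threshold, where gamma = Sigma / sigma.
    When violated, the convolution method / VHMF may underestimate margins."""
    gamma = Sigma / sigma
    return sigma * (1.0 - gamma * N / 25.0) < threshold

# Example: moderate random error, small systematic error, few fractions.
Sigma, sigma, N = 0.1, 0.5, 5   # cm, cm, fractions
print("margin (cm):", vhmf_margin_cm(Sigma, sigma))
print("CM/VHMF expected accurate:", cm_criterion_satisfied(Sigma, sigma, N))
```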

  20. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors.

    PubMed

    Gordon, J J; Siebers, J V

    2007-04-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Sigma and sigma. For clinically relevant combinations of sigma, Sigma and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion: sigma[1 - gammaN/25] < 0.2, where gamma = Sigma/sigma. They were found to be inaccurate for sigma[1 - gammaN/25] > 0.2, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when sigma greater than or approximately equal to sigma(P), where sigma(P) = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if sigma(P) takes values other than 0.32 cm.) When sigma < sigma(P), dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Sigma and N. When sigma greater than or approximately equal to sigma(P), consistent with the above criteria, it was found that the VHMF can underestimate margins for large sigma, small Sigma and small N. A

  1. An examination of the southern California field test for the systematic accumulation of the optical refraction error in geodetic leveling.

    USGS Publications Warehouse

    Castle, R.O.; Brown, B.W., Jr.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.

    1983-01-01

    Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicate any analysis of the test results. If the fewer than one-third of the sections that met less than second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction contaminated long-sight-length survey and the less contaminated short-sight-length survey. -Authors

  2. Fragment-based error estimation in biomolecular modeling

    PubMed Central

    Faver, John C.; Merz, Kenneth M.

    2013-01-01

    Computer simulations are becoming an increasingly more important component of drug discovery. Computational models are now often able to reproduce and sometimes even predict outcomes of experiments. Still, potential energy models such as force fields contain significant amounts of bias and imprecision. We have shown how even small uncertainties in potential energy models can propagate to yield large errors, and have devised some general error-handling protocols for biomolecular modeling with imprecise energy functions. Herein we discuss those protocols within the contexts of protein–ligand binding and protein folding. PMID:23993915

  3. Pedigree error due to extra-pair reproduction substantially biases estimates of inbreeding depression.

    PubMed

    Reid, Jane M; Keller, Lukas F; Marr, Amy B; Nietlisbach, Pirmin; Sardell, Rebecca J; Arcese, Peter

    2014-03-01

    Understanding the evolutionary dynamics of inbreeding and inbreeding depression requires unbiased estimation of inbreeding depression across diverse mating systems. However, studies estimating inbreeding depression often measure inbreeding with error, for example, based on pedigree data derived from observed parental behavior that ignore paternity error stemming from multiple mating. Such paternity error causes error in estimated coefficients of inbreeding (f) and reproductive success and could bias estimates of inbreeding depression. We used complete "apparent" pedigree data compiled from observed parental behavior and analogous "actual" pedigree data comprising genetic parentage to quantify effects of paternity error stemming from extra-pair reproduction on estimates of f, reproductive success, and inbreeding depression in free-living song sparrows (Melospiza melodia). Paternity error caused widespread error in estimates of f and male reproductive success, causing inbreeding depression in male and female annual and lifetime reproductive success and juvenile male survival to be substantially underestimated. Conversely, inbreeding depression in adult male survival tended to be overestimated when paternity error was ignored. Pedigree error stemming from extra-pair reproduction therefore caused substantial and divergent bias in estimates of inbreeding depression that could bias tests of evolutionary theories regarding inbreeding and inbreeding depression and their links to variation in mating system.

  4. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    NASA Astrophysics Data System (ADS)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  5. Multivariate Error Covariance Estimates by Monte-Carlo Simulation for Assimilation Studies in the Pacific Ocean

    NASA Technical Reports Server (NTRS)

    Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.

    2004-01-01

    One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. While the performance of the
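
    A minimal sketch of the ensemble step described above: with an ensemble of stacked model-state vectors, the multivariate error covariance, including the temperature-salinity cross-covariance block, can be sampled directly. The Python example below uses random placeholder states and hypothetical dimensions purely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ensemble: n_ens model states, each a stacked vector of
# temperature, salinity, zonal and meridional velocity at n_pts grid points.
n_ens, n_pts = 50, 200
state_dim = 4 * n_pts
ensemble = rng.normal(size=(n_ens, state_dim))          # placeholder states

# Deviations from the ensemble mean define the sampled error covariance.
mean_state = ensemble.mean(axis=0)
anomalies = ensemble - mean_state
B = np.cov(anomalies, rowvar=False)                     # (state_dim, state_dim)

# Cross-covariance block between temperature (first n_pts entries) and
# salinity (next n_pts entries), used to spread temperature updates to salinity.
B_TS = B[:n_pts, n_pts:2 * n_pts]
print("multivariate covariance:", B.shape, " T-S block:", B_TS.shape)
```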

  6. Systematic errors in the measurement of emissivity caused by directional effects.

    PubMed

    Kribus, Abraham; Vishnevetsky, Irna; Rotenberg, Eyal; Yakir, Dan

    2003-04-01

    Accurate knowledge of surface emissivity is essential for applications in remote sensing (remote temperature measurement), radiative transport, and modeling of environmental energy balances. Direct measurements of surface emissivity are difficult when there is considerable background radiation at the same wavelength as the emitted radiation. This occurs, for example, when objects at temperatures near room temperature are measured in a terrestrial environment by use of the infrared 8-14-μm band. This problem is usually treated by assumption of a perfectly diffuse surface or of diffuse background radiation. However, real surfaces and actual background radiation are not diffuse; therefore there will be a systematic measurement error. It is demonstrated that, in some cases, the deviations from a diffuse behavior lead to large errors in the measured emissivity. Past measurements made with simplifying assumptions should therefore be reevaluated and corrected. Recommendations are presented for improving experimental procedures in emissivity measurement.

  7. Pressure Measurements Using an Airborne Differential Absorption Lidar. Part 1; Analysis of the Systematic Error Sources

    NASA Technical Reports Server (NTRS)

    Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.

    1999-01-01

    Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.

  8. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    ERIC Educational Resources Information Center

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  9. Triple collocation: beyond three estimates and separation of structural/non-structural errors

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the “multiple” collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is s...

  10. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  11. Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers

    NASA Technical Reports Server (NTRS)

    Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.

    2012-01-01

    Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.

  12. Observing transiting exoplanets: Removing systematic errors to constrain atmospheric chemistry and dynamics

    NASA Astrophysics Data System (ADS)

    Zellem, Robert Thomas

    2015-03-01

    The > 1500 confirmed exoplanets span a wide range of planetary masses (1 MEarth-20 MJupiter), radii (0.3 REarth-2 RJupiter), semi-major axes (0.005-100 AU), orbital periods (0.3-1 x 10^5 days), and host star spectral types. The effects of a widely-varying parameter space on a planetary atmosphere's chemistry and dynamics can be determined through transiting exoplanet observations. An exoplanet's atmospheric signal, either in absorption or emission, is on the order of 0.1%, which is dwarfed by telescope-specific systematic error sources up to 60%. This thesis explores some of the major sources of error and their removal from space- and ground-based observations, specifically Spitzer/IRAC single-object photometry, IRTF/SpeX and Palomar/TripleSpec low-resolution single-slit near-infrared spectroscopy, and Kuiper/Mont4k multi-object photometry. The errors include pointing-induced uncertainties, airmass variations, seeing-induced signal loss, telescope jitter, and system variability. They are treated with detector efficiency pixel-mapping, normalization routines, a principal component analysis, binning with the geometric mean in Fourier-space, characterization by a comparison star, repeatability, and stellar monitoring to get to within a few times the photon noise limit. As a result, these observations provide strong measurements of an exoplanet's dynamical day-to-night heat transport, constrain its CH4 abundance, investigate emission mechanisms, and develop an observing strategy with smaller telescopes. The reduction methods presented here can also be applied to other existing and future platforms to identify and remove systematic errors. Until such sources of uncertainty are characterized with bright systems with large planetary signals for platforms such as the James Webb Space Telescope, for example, one cannot resolve smaller objects with more subtle spectral features, as expected of exo-Earths.

  13. Mean-square-error bounds for reduced-order linear state estimators

    NASA Technical Reports Server (NTRS)

    Baram, Y.; Kalit, G.

    1987-01-01

    The mean-square error of reduced-order linear state estimators for continuous-time linear systems is investigated. Lower and upper bounds on the minimal mean-square error are presented. The bounds are readily computable at each time-point and at steady state from the solutions to the Riccati and the Liapunov equations. The usefulness of the error bounds for the analysis and design of reduced-order estimators is illustrated by a practical numerical example.
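
    The paper's specific bounds are not reproduced here, but the Python sketch below shows the two matrix equations mentioned above in action: the filter algebraic Riccati equation gives the steady-state full-order Kalman error covariance (a lower bound on any linear estimator's mean-square error), while a Lyapunov equation gives the open-loop state covariance as a simple upper reference point. The system matrices are invented for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Hypothetical stable linear system  dx = A x dt + w,  y = C x + v,
# with process noise covariance Q and measurement noise covariance R.
A = np.array([[-1.0, 1.0],
              [ 0.0, -2.0]])
C = np.array([[1.0, 0.0]])
Q = np.diag([0.5, 0.1])
R = np.array([[0.05]])

# Steady-state error covariance of the full-order Kalman filter
# (filter Riccati equation via duality): a lower bound on the MSE
# achievable by any linear estimator, including reduced-order ones.
P_kf = solve_continuous_are(A.T, C.T, Q, R)

# Open-loop state covariance (Lyapunov equation): the error covariance of
# a trivial "estimator" that outputs zero, i.e. an upper reference point.
P_ol = solve_continuous_lyapunov(A, -Q)

print("lower bound on MSE :", np.trace(P_kf))
print("open-loop reference:", np.trace(P_ol))
```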

  14. The estimation of parameters in nonlinear, implicit measurement error models with experiment-wide measurements

    SciTech Connect

    Anderson, K.K.

    1994-05-01

    Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.

  15. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  16. A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Garg, Sanjay

    2009-01-01

    A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
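
    As a toy version of this selection problem (not the NASA methodology itself), the Python sketch below scores every candidate two-sensor suite by the steady-state Kalman filter estimation error it yields and keeps the best one; the engine-health model matrices are hypothetical.

```python
import itertools
import numpy as np
from scipy.linalg import solve_discrete_are

def steady_state_mse(A, Q, C_full, R_full, sensors):
    """Steady-state Kalman filter mean-squared estimation error (trace of the
    posterior error covariance) when only the selected sensor rows are used."""
    idx = list(sensors)
    C = C_full[idx, :]
    R = R_full[np.ix_(idx, idx)]
    P = solve_discrete_are(A.T, C.T, Q, R)          # a priori covariance
    S = C @ P @ C.T + R
    P_post = P - P @ C.T @ np.linalg.solve(S, C @ P)
    return np.trace(P_post)

# Hypothetical 3-state engine-health model with 4 candidate sensors.
A = np.diag([0.98, 0.95, 0.90])
Q = 0.01 * np.eye(3)
C_full = np.array([[1.0, 0.2, 0.0],
                   [0.0, 1.0, 0.3],
                   [0.5, 0.0, 1.0],
                   [0.2, 0.2, 0.2]])
R_full = np.diag([0.05, 0.02, 0.10, 0.01])

# Exhaustively score every 2-sensor suite and keep the best one.
best = min(itertools.combinations(range(4), 2),
           key=lambda s: steady_state_mse(A, Q, C_full, R_full, s))
print("best 2-sensor suite:", best,
      "MSE:", steady_state_mse(A, Q, C_full, R_full, best))
```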

  17. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.

    PubMed

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-08-06

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5-50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%.

  18. Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances

    PubMed Central

    Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip

    2015-01-01

    In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777

  19. Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction

    NASA Technical Reports Server (NTRS)

    Kolodziejczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.

    2013-01-01

    The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The clever methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We will illustrate several approaches where applying systematic error correction algorithms to the pixel time series, rather than the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity drops (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing the stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series which is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties in these quantities.
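
    A minimal, hedged illustration of the central step, fitting individual pixel time series (rather than the co-added flux) to co-trending basis vectors by linear least squares, is sketched below; the array shapes and synthetic data are placeholders, not Kepler products.

```python
import numpy as np

def fit_pixels_to_cbvs(pixel_series, cbvs):
    """Least-squares fit of each pixel time series to a set of co-trending
    basis vectors.

    pixel_series : (n_cadences, n_pixels) array of raw pixel fluxes
    cbvs         : (n_cadences, n_cbv) array of co-trending basis vectors
    Returns per-pixel CBV coefficients and the residual (corrected) series.
    """
    # Include a constant column so each pixel keeps its own mean level.
    design = np.column_stack([np.ones(cbvs.shape[0]), cbvs])
    coeffs, *_ = np.linalg.lstsq(design, pixel_series, rcond=None)
    residuals = pixel_series - design @ coeffs
    return coeffs, residuals

# Hypothetical data: 1000 cadences, a 3 x 3 pixel aperture, 4 CBVs.
rng = np.random.default_rng(3)
cbvs = rng.normal(size=(1000, 4))
pixel_series = 100.0 + cbvs @ rng.normal(size=(4, 9)) + rng.normal(0, 0.5, (1000, 9))

coeffs, corrected = fit_pixels_to_cbvs(pixel_series, cbvs)
print("coefficients per pixel:", coeffs.shape, " corrected series:", corrected.shape)
```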

  20. Allowing for random errors in radiation dose estimates for the atomic bomb survivor data.

    PubMed

    Pierce, D A; Stram, D O; Vaeth, M

    1990-09-01

    The presence of random errors in the individual radiation dose estimates for the A-bomb survivors causes underestimation of radiation effects in dose-response analyses, and also distorts the shape of dose-response curves. Statistical methods are presented which will adjust for these biases, provided that a valid statistical model for the dose estimation errors is used. Emphasis is on clarifying some rather subtle statistical issues. For most of this development the distinction between radiation dose and exposure is not critical. The proposed methods involve downward adjustment of dose estimates, but this does not imply that the dosimetry system is faulty. Rather, this is a part of the dose-response analysis required to remove biases in the risk estimates. The primary focus of this report is on linear dose-response models, but methods for linear-quadratic models are also considered briefly. Some plausible models for the dose estimation errors are considered, which have typical errors in a range of 30-40% of the true values, and sensitivity analysis of the resulting bias corrections is provided. It is found that for these error models the resulting estimates of excess cancer risk based on linear models are about 6-17% greater than estimates that make no allowance for dose estimation errors. This increase in risk estimates is reduced to about 4-11% if, as has often been done recently, survivors with dose estimates above 4 Gy are eliminated from the analysis.
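
    The sketch below illustrates the basic idea of replacing each dose estimate with the expected true dose given that estimate, assuming lognormal true doses and multiplicative lognormal estimation errors so the conditioning can be done with normal theory on the log scale; the parameter values are invented and this is not the report's dosimetry-specific procedure.

```python
import numpy as np

def adjusted_dose(z, mu_log, tau, sigma):
    """Expected true dose given an estimated dose z, assuming on the log
    scale that true doses are N(mu_log, tau^2) and the estimation error is
    multiplicative with log-error N(0, sigma^2) (roughly 30-40% errors for
    sigma ~ 0.3-0.4).  Standard normal-normal conditioning on log z."""
    k = tau**2 / (tau**2 + sigma**2)            # shrinkage factor
    m = mu_log + k * (np.log(z) - mu_log)       # posterior mean of log true dose
    v = (1.0 - k) * tau**2                      # posterior variance of log true dose
    return np.exp(m + 0.5 * v)                  # E[true dose | estimate]

# Hypothetical values: true-dose distribution centred near 0.2 Gy with wide
# spread, 35% multiplicative dose-estimation error.
for z in (0.5, 1.0, 2.0, 4.0):
    print(f"estimated {z:4.1f} Gy -> adjusted {adjusted_dose(z, np.log(0.2), 1.0, 0.35):5.2f} Gy")
```

    With these illustrative numbers, large estimated doses are adjusted downward more strongly than small ones, which is the qualitative behaviour described in the abstract.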

  1. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  2. A Fortran IV Program for Estimating Parameters through Multiple Matrix Sampling with Standard Errors of Estimate Approximated by the Jackknife.

    ERIC Educational Resources Information Center

    Shoemaker, David M.

    Described and listed herein with concomitant sample input and output is the Fortran IV program which estimates parameters and standard errors of estimate per parameters for parameters estimated through multiple matrix sampling. The specific program is an improved and expanded version of an earlier version. (Author/BJG)
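
    The Fortran IV listing itself is not reproduced here; as a hedged illustration of the jackknife standard errors it computes, the short Python sketch below applies the delete-one jackknife to an arbitrary estimator over a hypothetical sample.

```python
import numpy as np

def jackknife_se(data, estimator):
    """Delete-one jackknife standard error of `estimator` over `data`."""
    data = np.asarray(data, dtype=float)
    n = data.size
    leave_one_out = np.array([
        estimator(np.delete(data, i)) for i in range(n)
    ])
    return np.sqrt((n - 1) / n * np.sum((leave_one_out - leave_one_out.mean())**2))

rng = np.random.default_rng(4)
sample = rng.normal(50.0, 10.0, size=30)        # hypothetical item scores
print("mean :", sample.mean(), "+/-", jackknife_se(sample, np.mean))
print("var  :", sample.var(ddof=1), "+/-", jackknife_se(sample, lambda x: x.var(ddof=1)))
```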

  3. Multiclass Bayes error estimation by a feature space sampling technique

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.

    1979-01-01

    A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with those obtained by conventional techniques applied to a 2-class 4-feature discrimination problem with results previously reported and 4-class 4-feature multispectral scanner Landsat data classified by training and testing of the available data.
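
    The paper's combined analytical-numerical integration is not reproduced here; the Python sketch below instead gives a plain Monte Carlo reference for the same quantity, the minimum (Bayes) probability of error of an M-class Gaussian problem specified only by its class statistics. Class means, covariances and priors are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal

def bayes_error_mc(means, covs, priors, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the Bayes (minimum) error for Gaussian classes."""
    rng = np.random.default_rng(seed)
    M = len(means)
    errors = 0
    counts = rng.multinomial(n_samples, priors)
    for k, n_k in enumerate(counts):
        x = rng.multivariate_normal(means[k], covs[k], size=n_k)
        # Posterior-proportional scores: log prior + class-conditional log density.
        scores = np.column_stack([
            np.log(priors[j]) + multivariate_normal.logpdf(x, means[j], covs[j])
            for j in range(M)
        ])
        errors += np.sum(scores.argmax(axis=1) != k)
    return errors / n_samples

# Example: 3 classes, 2 features.
means = [np.zeros(2), np.array([2.0, 0.0]), np.array([0.0, 2.5])]
covs = [np.eye(2), np.diag([1.0, 2.0]), np.array([[1.0, 0.3], [0.3, 1.0]])]
priors = [0.4, 0.3, 0.3]
print("estimated Bayes error:", bayes_error_mc(means, covs, priors))
```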

  4. Estimation of finite population parameters with auxiliary information and response error.

    PubMed

    González, L M; Singer, J M; Stanek, E J

    2014-10-01

    We use a finite population mixed model that accommodates response error in the survey variable of interest and auxiliary information to obtain optimal estimators of population parameters from data collected via simple random sampling. We illustrate the method with the estimation of a regression coefficient and conduct a simulation study to compare the performance of the empirical version of the proposed estimator (obtained by replacing variance components with estimates) with that of the least squares estimator usually employed in such settings. The results suggest that when the auxiliary variable distribution is skewed, the proposed estimator has a smaller mean squared error.

  5. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    SciTech Connect

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
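
    A miniature of the F-error idea, hedged and with arbitrary constants: fit the same noisy record with least-squares cubic splines at two mesh sizes and use their difference as a time-resolved proxy for the signal-dependent error. The knot counts and noise level below are illustrative, and scipy's LSQUnivariateSpline is assumed as the fitting routine.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(5)

# Synthetic measurement: a smooth signal plus noise on a uniform time grid.
t = np.linspace(0.0, 1.0, 1001)
signal = np.sin(2 * np.pi * 3 * t) * np.exp(-2 * t)
data = signal + rng.normal(0.0, 0.05, t.size)

def spline_fit(n_intervals):
    """Least-squares cubic spline with n_intervals equal mesh intervals."""
    knots = np.linspace(0.0, 1.0, n_intervals + 1)[1:-1]   # interior knots only
    return LSQUnivariateSpline(t, data, knots, k=3)(t)

fit_fine = spline_fit(40)      # small mesh size: small F-error, larger R-error
fit_coarse = spline_fit(20)    # mesh size doubled: larger F-error

# The difference between the two fits tracks the signal-dependent (F) error;
# here it is used only as a rough, time-resolved proxy for that error.
f_error_proxy = np.abs(fit_fine - fit_coarse)
reference_error = np.abs(fit_fine - signal)   # total error of the fine fit, for comparison
print("max proxy F-error:", f_error_proxy.max(), " max reference error:", reference_error.max())
```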

  6. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    PubMed

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems that suffer from disturbances, an original output feedback sliding mode control method is presented based on a novel tracking error observer with disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of disturbances are unknown. By constructing a differential sliding surface and employing a reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a novel tracking error observer is produced. By using the observation of tracking error and the estimation of disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and tracking observation error are bounded, the sliding surface is reachable and the closed-loop system is robustly stable. The simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effect of the proposed method.

  7. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  8. An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors

    NASA Technical Reports Server (NTRS)

    Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg

    2011-01-01

    The industrial period and modern age is characterized by combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles are present in the atmosphere, such as aerosols and water, which make such measurements more complicated [1]. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. Comparing these satellite readings with ground-truth data (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be scaled for in the retrieval algorithms to create a set of data which is closer to the TCCON measurements [1]. Using this process, the algorithms are being developed to reduce bias to within 0.1% worldwide of the true value. At this stage, the accuracy is within 1%, but through correcting small errors contained in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.

  9. DETECTABILITY AND ERROR ESTIMATION IN ORBITAL FITS OF RESONANT EXTRASOLAR PLANETS

    SciTech Connect

    Giuppone, C. A.; Beauge, C.; Tadeu dos Santos, M.; Ferraz-Mello, S.; Michtchenko, T. A.

    2009-07-10

    We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than ~4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist, but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and number of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance with results found in real resonant extrasolar systems.

  10. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis.

    PubMed

    Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-08-05

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.

  11. Investigating the epidemiology of medication errors and error-related adverse drug events (ADEs) in primary care, ambulatory care and home settings: a systematic review protocol

    PubMed Central

    Assiri, Ghadah Asaad; Grant, Liz; Aljadhey, Hisham; Sheikh, Aziz

    2016-01-01

    Introduction There is a need to better understand the epidemiology of medication errors and error-related adverse events in community care contexts. Methods and analysis We will systematically search the following databases: Cumulative Index to Nursing and Allied Health Literature (CINAHL), EMBASE, Eastern Mediterranean Regional Office of the WHO (EMRO), MEDLINE, PsycINFO and Web of Science. In addition, we will search Google Scholar and contact an international panel of experts to search for unpublished and in progress work. The searches will cover the time period January 1990–December 2015 and will yield data on the incidence or prevalence of and risk factors for medication errors and error-related adverse drug events in adults living in community settings (ie, primary care, ambulatory and home). Study quality will be assessed using the Critical Appraisal Skills Program quality assessment tool for cohort and case–control studies, and cross-sectional studies will be assessed using the Joanna Briggs Institute Critical Appraisal Checklist for Descriptive Studies. Meta-analyses will be undertaken using random-effects modelling using STATA (V.14) statistical software. Ethics and dissemination This protocol will be registered with PROSPERO, an international prospective register of systematic reviews, and the systematic review will be reported in the peer-reviewed literature using Preferred Reporting Items for Systematic Reviews and Meta-Analyses. PMID:27580826

  12. The effect of errors-in-variables on variance component estimation

    NASA Astrophysics Data System (ADS)

    Xu, Peiliang

    2016-08-01

    Although total least squares (TLS) has been widely applied, variance components in an errors-in-variables (EIV) model can be inestimable under certain conditions and unstable in the sense that small random errors can result in very large errors in the estimated variance components. We investigate the effect of the random design matrix on variance component (VC) estimation of MINQUE type by treating the design matrix as if it were errors-free, derive the first-order bias of the VC estimate, and construct bias-corrected VC estimators. As a special case, we obtain a bias-corrected estimate for the variance of unit weight. Although TLS methods are statistically rigorous, they can be computationally too expensive. We directly Taylor-expand the nonlinear weighted LS estimate of parameters up to the second-order approximation in terms of the random errors of the design matrix, derive the bias of the estimate, and use it to construct a bias-corrected weighted LS estimate. Bearing in mind that the random errors of the design matrix will create a bias in the normal matrix of the weighted LS estimate, we propose to calibrate the normal matrix by computing and then removing the bias from the normal matrix. As a result, we can obtain a new parameter estimate, which is called the N-calibrated weighted LS estimate. The simulations have shown that (i) errors-in-variables have a significant effect on VC estimation, if they are large/significant but treated as non-random. The variance components can be incorrectly estimated by more than one order of magnitude, depending on the nature of problems and the sizes of EIV; (ii) the bias-corrected VC estimate can effectively remove the bias of the VC estimate. If the signal-to-noise is small, higher order terms may be necessary. Nevertheless, since we construct the bias-corrected VC estimate by directly removing the estimated bias from the estimate itself, the simulation results have clearly indicated that there is a great risk to obtain

  13. Improved estimates of coordinate error for molecular replacement

    SciTech Connect

    Oeffner, Robert D.; Bunkóczi, Gábor; McCoy, Airlie J.; Read, Randy J.

    2013-11-01

    A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.

  14. Improved estimates of the range of errors on photomasks using measured values of skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Hamaker, Henry Chris

    1995-12-01

    Statistical process control (SPC) techniques often use six times the standard deviation sigma to estimate the range of errors within a process. Two assumptions are inherent in this choice of metric for the range: (1) the normal distribution adequately describes the errors, and (2) the fraction of errors falling within plus or minus 3 sigma, about 99.73%, is sufficiently large that we may consider the fraction occurring outside this range to be negligible. In state-of-the-art photomasks, however, the assumption of normality frequently breaks down, and consequently plus or minus 3 sigma is not a good estimate of the range of errors. In this study, we show that improved estimates for the effective maximum error Em, which is defined as the value for which 99.73% of all errors fall within plus or minus Em of the mean mu, may be obtained by quantifying the deviation from normality of the error distributions using the skewness and kurtosis of the error sampling. Data are presented indicating that in laser reticle- writing tools, Em less than or equal to 3 sigma. We also extend this technique for estimating the range of errors to specifications that are usually described by mu plus 3 sigma. The implications for SPC are examined.
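
    The paper's correction formula is not reproduced here, so the following Python sketch only illustrates the underlying point with a deliberately skewed error sample: it quantifies the departure from normality via skewness and kurtosis and compares 3*sigma with the empirically determined effective maximum error Em (the half-width containing 99.73% of errors). The distribution and sample size are hypothetical.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        errors = stats.skewnorm.rvs(a=4, size=20000, random_state=rng)  # deliberately non-normal errors

        mu, sigma = errors.mean(), errors.std(ddof=1)
        skew, kurt = stats.skew(errors), stats.kurtosis(errors)         # kurtosis here is excess kurtosis
        E_m = np.quantile(np.abs(errors - mu), 0.9973)                   # empirical effective maximum error

        print(f"skewness={skew:.2f}, excess kurtosis={kurt:.2f}")
        print(f"3*sigma={3*sigma:.2f}, empirical E_m={E_m:.2f}")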

  15. Comparison of the sensitivity to systematic errors between nonadiabatic non-Abelian geometric gates and their dynamical counterparts

    NASA Astrophysics Data System (ADS)

    Zheng, Shi-Biao; Yang, Chui-Ping; Nori, Franco

    2016-03-01

    We investigate the effects of systematic errors of the control parameters on single-qubit gates based on nonadiabatic non-Abelian geometric holonomies and those relying on purely dynamical evolution. It is explicitly shown that the systematic error in the Rabi frequency of the control fields affects these two kinds of gates in different ways. In the presence of this systematic error, the transformation produced by the nonadiabatic non-Abelian geometric gate is not unitary in the computational space, and the resulting gate infidelity is larger than that with the dynamical method. Our results provide a theoretical basis for choosing a suitable method for implementing elementary quantum gates in physical systems, where the systematic noises are the dominant noise source.
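
    A minimal Python sketch of the kind of sensitivity being compared, assuming only a purely dynamical single-qubit X rotation whose Rabi frequency carries a fractional systematic error (so the rotation angle is over- or under-shot by the same fraction); it does not reproduce the geometric-gate construction analyzed in the paper.

        import numpy as np

        def rx(theta):
            """Single-qubit rotation about X by angle theta."""
            return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                             [-1j * np.sin(theta / 2), np.cos(theta / 2)]])

        def overlap_fidelity(U, V):
            """Normalized trace overlap |Tr(U^dag V)| / 2 for 2x2 unitaries."""
            return abs(np.trace(U.conj().T @ V)) / 2

        U_ideal = rx(np.pi / 2)
        for eps in (0.0, 0.01, 0.05):        # fractional systematic error in the Rabi frequency
            U_err = rx(np.pi / 2 * (1 + eps))
            print(f"eps={eps:.2f}  fidelity={overlap_fidelity(U_ideal, U_err):.6f}")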

  16. Error estimations and their biases in Monte Carlo eigenvalue calculations

    SciTech Connect

    Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki

    1997-01-01

    In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, >70% of the standard deviation estimates fall within 40% of the true value.
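
    A hedged illustration of the real versus apparent variance issue, using an AR(1) surrogate for the cycle k_eff values rather than an actual neutron transport calculation: intercycle correlation makes the naive single-run variance of the mean an underestimate. The correlation strength and cycle counts are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)
        n_cycles, rho = 500, 0.6                  # active cycles and intercycle correlation

        def one_run():
            k = np.empty(n_cycles)
            k[0] = rng.normal()
            for i in range(1, n_cycles):          # AR(1) surrogate for cycle k_eff values
                k[i] = rho * k[i - 1] + np.sqrt(1 - rho**2) * rng.normal()
            return 1.0 + 0.001 * k                # centre the surrogate around k_eff ~ 1

        run_means = np.array([one_run().mean() for _ in range(2000)])
        real_var = run_means.var(ddof=1)          # "real": variance of the mean over independent runs

        k = one_run()
        apparent_var = k.var(ddof=1) / n_cycles   # "apparent": single run, correlation ignored
        print(f"real/apparent variance ratio: {real_var / apparent_var:.2f}")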

  17. Gap filling strategies and error in estimating annual soil respiration

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...

  18. EIA Corrects Errors in Its Drilling Activity Estimates Series

    EIA Publications

    1998-01-01

    The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.

  19. A posteriori error estimates for the Johnson–Nédélec FEM–BEM coupling

    PubMed Central

    Aurada, M.; Feischl, M.; Karkulik, M.; Praetorius, D.

    2012-01-01

    Only very recently, Sayas [The validity of Johnson–Nédélec's BEM-FEM coupling on polygonal interfaces. SIAM J Numer Anal 2009;47:3451–63] proved that the Johnson–Nédélec one-equation approach from [On the coupling of boundary integral and finite element methods. Math Comput 1980;35:1063–79] provides a stable coupling of finite element method (FEM) and boundary element method (BEM). In our work, we now adapt the analytical results for different a posteriori error estimates developed for the symmetric FEM–BEM coupling to the Johnson–Nédélec coupling. More precisely, we analyze the weighted-residual error estimator, the two-level error estimator, and different versions of (h−h/2)-based error estimators. In numerical experiments, we use these estimators to steer h-adaptive algorithms, and compare the effectivity of the different approaches. PMID:22347772

  20. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    PubMed

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
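
    A small Python sketch of the optimism of the resubstitution estimator for plug-in LDA with a known identity covariance, in a setting where the dimension is comparable to the sample size; the smoothed estimator and the Kolmogorov-asymptotic results of the paper are not reproduced, and the class means and sample sizes are hypothetical.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        p, n = 10, 25                                 # dimension comparable to the sample size
        mu0, mu1 = np.zeros(p), np.full(p, 0.3)       # true class means; common identity covariance

        X0 = rng.normal(mu0, 1.0, size=(n, p))
        X1 = rng.normal(mu1, 1.0, size=(n, p))

        # plug-in LDA discriminant w'x + b with the covariance known to be the identity
        w = X1.mean(0) - X0.mean(0)
        b = -0.5 * (X1.mean(0) + X0.mean(0)) @ w

        # resubstitution error: the training data are reused to evaluate the rule
        resub = 0.5 * (np.mean(X0 @ w + b >= 0) + np.mean(X1 @ w + b < 0))

        # actual error of this fixed rule under the true Gaussian model
        s = np.linalg.norm(w)
        actual = 0.5 * (norm.sf(-(mu0 @ w + b) / s) + norm.cdf(-(mu1 @ w + b) / s))
        print(f"resubstitution={resub:.3f}  actual={actual:.3f}")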

  1. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Technical Reports Server (NTRS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness; they are used to drive adaptive mesh refinement, which we demonstrate for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  2. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    NASA Astrophysics Data System (ADS)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-11-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness; they are used to drive adaptive mesh refinement, which we demonstrate for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  3. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

  4. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely to be the case that both model error and observation error strongly depend on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters. These parameters are estimated under a variety of conditions, including
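
    A scalar, single-batch sketch of the maximum-likelihood idea (not the OI/SKF implementation described above): given a batch of innovations whose variance is the sum of a known background-error variance and an unknown observation-error variance, the latter is estimated by maximizing the Gaussian likelihood. All numbers are hypothetical.

        import numpy as np
        from scipy.optimize import minimize_scalar

        rng = np.random.default_rng(4)
        sigma_b2_known, sigma_o2_true, n_obs = 1.0, 0.5, 2000
        innovations = rng.normal(0.0, np.sqrt(sigma_b2_known + sigma_o2_true), size=n_obs)

        def neg_log_likelihood(sigma_o2):
            s2 = sigma_b2_known + sigma_o2    # innovation variance for uncorrelated errors
            return 0.5 * (n_obs * np.log(2 * np.pi * s2) + np.sum(innovations**2) / s2)

        res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")
        print(f"estimated observation-error variance: {res.x:.3f} (true {sigma_o2_true})")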

  5. A Posteriori Error Estimation for Finite Volume and Finite Element Approximations Using Broken Space Approximation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Larson, Mats G.

    2000-01-01

    We consider a posteriori error estimates for finite volume and finite element methods on arbitrary meshes subject to prescribed error functionals. Error estimates of this type are useful in a number of computational settings: (1) quantitative prediction of the numerical solution error, (2) adaptive meshing, and (3) load balancing of work on parallel computing architectures. Our analysis recasts the class of Godunov finite volumes schemes as a particular form of discontinuous Galerkin method utilizing broken space approximation obtained via reconstruction of cell-averaged data. In this general framework, weighted residual error bounds are readily obtained using duality arguments and Galerkin orthogonality. Additional consideration is given to issues such as nonlinearity, efficiency, and the relationship to other existing methods. Numerical examples are given throughout the talk to demonstrate the sharpness of the estimates and efficiency of the techniques. Additional information is contained in the original.

  6. The estimation error covariance matrix for the ideal state reconstructor with measurement noise

    NASA Technical Reports Server (NTRS)

    Polites, Michael E.

    1988-01-01

    A general expression is derived for the state estimation error covariance matrix for the Ideal State Reconstructor when the input measurements are corrupted by measurement noise. An example is presented which shows that the more measurements used in estimating the state at a given time, the better the estimator.

  7. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  8. The Cosine Error: A Bayesian Procedure for Treating a Non-repetitive Systematic Effect

    NASA Astrophysics Data System (ADS)

    Lira, Ignacio; Grientschnig, Dieter

    2016-08-01

    An inconsistency with respect to variable transformations in our previous treatment of the cosine error example with repositioning (Metrologia, vol. 47, pp. R1-R14) is pointed out. The problem refers to the measurement of the vertical height of a column of liquid in a manometer. A systematic effect arises because of the possible deviation of the measurement axis from the vertical, which may be different each time the measurement is taken. A revised procedure for treating this problem is proposed; it consists in straightforward application of Bayesian statistics using a conditional reference prior with partial information. In most practical applications, the numerical differences between the two procedures will be negligible, so the interest of the revised one is mainly of conceptual nature. Nevertheless, similar measurement models may appear in other contexts, for example, in intercomparisons, so the present investigation may serve as a warning to analysts against applying the same methodology we used in our original approach to the present problem.
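
    A simple Monte Carlo sketch of the measurement model behind the cosine error (h = l cos(theta), with the tilt theta possibly different at each repositioning); it does not reproduce the conditional-reference-prior analysis proposed in the paper, and the numerical values are hypothetical.

        import numpy as np

        rng = np.random.default_rng(5)
        n = 100000
        l_indicated = rng.normal(100.00, 0.02, size=n)   # repeated indicated lengths, in mm
        theta = np.abs(rng.normal(0.0, 0.01, size=n))    # tilt from the vertical, in rad

        h = l_indicated * np.cos(theta)                   # vertical height of the liquid column
        print(f"mean height: {h.mean():.4f} mm, standard uncertainty: {h.std(ddof=1):.4f} mm")
        print(f"cosine-error shift: {l_indicated.mean() - h.mean():.5f} mm")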

  9. On the systematic errors of cosmological-scale gravity tests using redshift-space distortion: non-linear effects and the halo bias

    NASA Astrophysics Data System (ADS)

    Ishikawa, Takashi; Totani, Tomonori; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari

    2014-10-01

    Redshift-space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, fσ8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realizations of 3.4 × 10^8 comoving h^-3 Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z = 0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 × 10^11-2.0 × 10^13 h^-1 M⊙. We find that the systematic error of fσ8 is greatly reduced to ~5 per cent level, when a recently proposed analytical formula of RSD that takes into account the higher order coupling between the density and velocity fields is adopted, with a scale-dependent parametric bias model. Dependence of the systematic error on the halo mass, the redshift and the maximum wavenumber used in the analysis is discussed. We also find that the Wilson-Hilferty transformation is useful to improve the accuracy of likelihood analysis when only a small number of modes are available in power spectrum measurements.

  10. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    EPA Science Inventory

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...

  11. Errors and parameter estimation in precipitation-runoff modeling 2. Case study.

    USGS Publications Warehouse

    Troutman, B.M.

    1985-01-01

    A case study is presented which illustrates some of the error analysis, sensitivity analysis, and parameter estimation procedures reviewed in the first part of this paper. It is shown that those procedures, most of which come from statistical nonlinear regression theory, are invaluable in interpreting errors in precipitation-runoff modeling and in identifying appropriate calibration strategies. -Author

  12. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    ERIC Educational Resources Information Center

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  13. MODIS Cloud Optical Property Retrieval Uncertainties Derived from Pixel-Level Radiometric Error Estimates

    NASA Technical Reports Server (NTRS)

    Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong

    2011-01-01

    MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the uncertainty in simultaneous retrievals of these two parameters due to pixel-level (scene-dependent) radiometric error estimates as well as other tractable error sources.

  14. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    ERIC Educational Resources Information Center

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegal-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  15. Logarithmic diagrams in acid-base titrations and estimation of titration errors.

    PubMed

    Wänninen, E

    1980-01-01

    The use of a logarithmic diagram for the estimation of the pH-value at the equivalence point and the titration error when a solution containing one or two acids is titrated with standard alkali is described.

  16. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters requiring adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  17. The use of neural networks in identifying error sources in satellite-derived tropical SST estimates.

    PubMed

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is reduced from 0.66 K to 0.44 K and the MAPE is 1.3%.

  18. Error covariance calculation for forecast bias estimation in hydrologic data assimilation

    NASA Astrophysics Data System (ADS)

    Pauwels, Valentijn R. N.; De Lannoy, Gabriëlle J. M.

    2015-12-01

    To date, an outstanding issue in hydrologic data assimilation is a proper way of dealing with forecast bias. A frequently used method to bypass this problem is to rescale the observations to the model climatology. While this approach improves the variability in the modeled soil wetness and discharge, it is not designed to correct the results for any bias. Alternatively, attempts have been made towards incorporating dynamic bias estimates into the assimilation algorithm. Persistent bias models are most often used to propagate the bias estimate, where the a priori forecast bias error covariance is calculated as a constant fraction of the unbiased a priori state error covariance. The latter approach is a simplification to the explicit propagation of the bias error covariance. The objective of this paper is to examine to which extent the choice for the propagation of the bias estimate and its error covariance influence the filter performance. An Observation System Simulation Experiment (OSSE) has been performed, in which ground water storage observations are assimilated into a biased conceptual hydrologic model. The magnitudes of the forecast bias and state error covariances are calibrated by optimizing the innovation statistics of groundwater storage. The obtained bias propagation models are found to be identical to persistent bias models. After calibration, both approaches for the estimation of the forecast bias error covariance lead to similar results, with a realistic attribution of error variances to the bias and state estimate, and significant reductions of the bias in both the estimates of groundwater storage and discharge. Overall, the results in this paper justify the use of the traditional approach for online bias estimation with a persistent bias model and a simplified forecast bias error covariance estimation.
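
    A scalar sketch of two-stage bias-aware updating with a persistent bias model, in which the forecast bias error covariance is taken as a constant fraction gamma of the unbiased forecast error covariance, as described above. The dynamics, noise levels and gain expressions are simplified and hypothetical, not the OSSE configuration of the paper.

        import numpy as np

        rng = np.random.default_rng(6)
        a, q, r = 0.9, 0.1**2, 0.2**2        # dynamics, model-error and observation-error variances
        true_bias, gamma = 0.5, 0.3          # persistent forecast bias; P_bias = gamma * P_forecast

        x_true, x_a, b_a, P = 0.0, 0.0, 0.0, 1.0
        for t in range(200):
            x_true = a * x_true + rng.normal(0, np.sqrt(q))
            y = x_true + rng.normal(0, np.sqrt(r))

            x_f = a * x_a + true_bias        # the model forecast is systematically biased
            P_f = a**2 * P + q               # unbiased a priori state error covariance

            # stage 1: update the bias estimate (persistent bias model, b_f = b_a)
            d = y - (x_f - b_a)              # innovation w.r.t. the bias-corrected forecast
            L = gamma * P_f / (gamma * P_f + P_f + r)
            b_a = b_a - L * d

            # stage 2: update the state using the freshly corrected forecast
            K = P_f / (P_f + r)
            x_a = (x_f - b_a) + K * (y - (x_f - b_a))
            P = (1 - K) * P_f

        print(f"estimated forecast bias: {b_a:.3f} (true {true_bias})")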

  19. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve.

    PubMed

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments.

  20. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    SciTech Connect

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  1. Strategies for Assessing Diffusion Anisotropy on the Basis of Magnetic Resonance Images: Comparison of Systematic Errors

    PubMed Central

    Boujraf, Saïd

    2014-01-01

    Diffusion weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: on the one hand, the presence of signal noise can lead to artificial divergence of the diffusivities; on the other hand, the use of a simplified model for the interaction of the protons with the diffusion weighting and imaging field gradients (b-matrix calculation), common in the clinical setting, also leads to deviations in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state-of-the-art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging requires combining a correct calculation of the b-matrix and a sufficiently large signal-to-noise ratio. PMID:24761372

  2. X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors

    SciTech Connect

    Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.

    2010-07-09

    Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super high resolution. One of the major impediments for development of x-ray optics with the required beamline performance comes from the inadequate present level of optical and at-wavelength metrology and insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 µrad have become routine. We present recent results from the ALS on temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.

  3. Refractive Errors and Concomitant Strabismus: A Systematic Review and Meta-analysis

    PubMed Central

    Tang, Shu Min; Chan, Rachel Y. T.; Bin Lin, Shi; Rong, Shi Song; Lau, Henry H. W.; Lau, Winnie W. Y.; Yip, Wilson W. K.; Chen, Li Jia; Ko, Simon T. C.; Yam, Jason C. S.

    2016-01-01

    This systematic review and meta-analysis evaluates the risk of development of concomitant strabismus due to refractive errors. Eligible studies published from 1946 to April 1, 2016 were identified from MEDLINE and EMBASE that evaluated any kinds of refractive errors (myopia, hyperopia, astigmatism and anisometropia) as an independent factor for concomitant exotropia and concomitant esotropia. In total, 5065 published records were retrieved for screening, 157 of which were eligible for detailed evaluation. Finally, 7 population-based studies involving 23,541 study subjects met our criteria for meta-analysis. The combined OR showed that myopia was a risk factor for exotropia (OR: 5.23, P = 0.0001). We found hyperopia had a dose-related effect for esotropia (OR for a spherical equivalent [SE] of 2–3 diopters [D]: 10.16, P = 0.01; OR for an SE of 3–4 D: 17.83, P < 0.0001; OR for an SE of 4–5 D: 41.01, P < 0.0001; OR for an SE of ≥5 D: 162.68, P < 0.0001). Sensitivity analysis indicated our results were robust. Results of this study confirmed myopia as a risk factor for concomitant exotropia and identified a dose-related effect for hyperopia as a risk factor for concomitant esotropia. PMID:27731389

  4. Systematic and Statistical Errors Associated with Nuclear Decay Constant Measurements Using the Counting Technique

    NASA Astrophysics Data System (ADS)

    Koltick, David; Wang, Haoyu; Liu, Shih-Chieh; Heim, Jordan; Nistor, Jonathan

    2016-03-01

    Typical nuclear decay constants are measured at the accuracy level of 10^-2. Numerous applications, including tests of unconventional theories, dating of materials, and long-term inventory evolution, require decay constants accurate at the level of 10^-4 to 10^-5. The statistical and systematic errors associated with precision measurements of decays using the counting technique are presented. Precision requires high count rates, which introduce time-dependent dead-time and pile-up corrections. An approach to overcome these issues by continuously recording the detector current is presented. Other systematic corrections include the time-dependent dead time due to background radiation, control of target motion and radiation flight-path variation due to environmental conditions, and time-dependent effects caused by scattered events. The incorporation of blind experimental techniques can help make measurements independent of past results. A spectrometer design and data analysis that can accomplish these goals are reviewed. The author would like to thank TechSource, Inc. and Advanced Physics Technologies, LLC for their support in this work.

  5. Impact of instrumental systematic errors on fine-structure constant measurements with quasar spectra

    NASA Astrophysics Data System (ADS)

    Whitmore, Jonathan B.; Murphy, Michael T.

    2015-02-01

    We present a new 'supercalibration' technique for measuring systematic distortions in the wavelength scales of high-resolution spectrographs. By comparing spectra of 'solar twin' stars or asteroids with a reference laboratory solar spectrum, distortions in the standard thorium-argon calibration can be tracked with ~10 m s^-1 precision over the entire optical wavelength range on scales of both echelle orders (~50-100 Å) and entire spectrograph arms (~1000-3000 Å). Using archival spectra from the past 20 yr, we have probed the supercalibration history of the Very Large Telescope-Ultraviolet and Visible Echelle Spectrograph (VLT-UVES) and Keck-High Resolution Echelle Spectrograph (HIRES) spectrographs. We find that systematic errors in their wavelength scales are ubiquitous and substantial, with long-range distortions varying between typically ±200 m s^-1 per 1000 Å. We apply a simple model of these distortions to simulated spectra that characterize the large UVES and HIRES quasar samples which previously indicated possible evidence for cosmological variations in the fine-structure constant, α. The spurious deviations in α produced by the model closely match important aspects of the VLT-UVES quasar results at all redshifts and partially explain the HIRES results, though not self-consistently at all redshifts. That is, the apparent ubiquity, size and general characteristics of the distortions are capable of significantly weakening the evidence for variations in α from quasar absorption lines.

  6. Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo

    SciTech Connect

    Aver, Erik; Olive, Keith A.; Skillman, Evan D. E-mail: olive@umn.edu

    2011-03-01

    Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several "high quality" systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra.
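
    A generic random-walk Metropolis sketch of the MCMC machinery referred to above, using a toy two-parameter posterior as a stand-in for the actual helium-abundance likelihood; the parameter names, target density and step size are all hypothetical.

        import numpy as np

        rng = np.random.default_rng(7)

        def log_posterior(theta):
            """Toy correlated-Gaussian log-posterior in two parameters."""
            a, b = theta
            return -0.5 * (a**2 + (b - 0.5 * a)**2 / 0.25)

        n_steps, step = 20000, 0.5
        chain = np.empty((n_steps, 2))
        theta, lp, accepted = np.zeros(2), log_posterior(np.zeros(2)), 0
        for i in range(n_steps):
            prop = theta + step * rng.normal(size=2)     # random-walk proposal
            lp_prop = log_posterior(prop)
            if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis acceptance rule
                theta, lp = prop, lp_prop
                accepted += 1
            chain[i] = theta

        burn = chain[n_steps // 4:]                       # discard burn-in
        print("acceptance rate:", accepted / n_steps)
        print("posterior means:", burn.mean(axis=0), "stds:", burn.std(axis=0))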

  7. Evaluating IMRT and VMAT dose accuracy: Practical examples of failure to detect systematic errors when applying a commonly used metric and action levels

    SciTech Connect

    Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell

    2013-11-15

    Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices. Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons). Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS. Most of the errors were

  8. Reward prediction error signals associated with a modified time estimation task.

    PubMed

    Holroyd, Clay B; Krigolson, Olave E

    2007-11-01

    The feedback error-related negativity (fERN) is a component of the human event-related brain potential (ERP) elicited by feedback stimuli. A recent theory holds that the fERN indexes a reward prediction error signal associated with the adaptive modification of behavior. Here we present behavioral and ERP data recorded from participants engaged in a modified time estimation task. As predicted by the theory, our results indicate that fERN amplitude reflects a reward prediction error signal and that the size of this error signal is correlated across participants with changes in task performance.

  9. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^-1 and the gradient of error is proportional to N_h^-1/2, which are optimal asymptotics. The methodology is verified with numerical experiments.

  10. The Shane-Wirtanen counts - Systematics and two-point correlation function. [for astronomical map error analysis

    NASA Technical Reports Server (NTRS)

    De Lapparent, V.; Kurtz, M. J.; Geller, M. J.

    1986-01-01

    Residual errors in the Seldner et al. (SSGP) map, which caused a break in both the correlation function (CF) and the filamentary appearance of the Shane-Wirtanen map, are examined. These errors, causing a residual rms fluctuation of 11 percent in the SSGP-corrected counts and a systematic rms offset of 8 percent in the mean count per plate, can be attributed to counting pattern and plate vignetting. Techniques for CF reconstruction in catalogs affected by plate-related systematic biases are examined, and it is concluded that accurate restoration may not be possible. Surveys designed to measure the CF at the depth of the SW counts on a scale of 2.5 deg must have systematic errors of less than or about 0.04 mag.

  11. Solving large tomographic linear systems: size reduction and error estimation

    NASA Astrophysics Data System (ADS)

    Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust

    2014-10-01

    We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
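
    A dense-matrix Python sketch of the reduction idea described above: cluster the rows, take the SVD of each cluster, project the data onto the cluster's singular vectors, and reject projected data below an SNR threshold. The k-means clustering on raw rows is only a stand-in for grouping geographically close ray paths, and the matrix size, noise level and threshold are hypothetical.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(8)
        A = rng.normal(size=(600, 50))                  # stand-in for a (normally sparse) kernel matrix
        d = A @ rng.normal(size=50) + 0.1 * rng.normal(size=600)
        sigma, snr_min, n_clusters = 0.1, 1.0, 6

        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(A)

        A_red, d_red = [], []
        for c in range(n_clusters):
            Ac, dc = A[labels == c], d[labels == c]
            U, s, Vt = np.linalg.svd(Ac, full_matrices=False)
            proj = U.T @ dc                              # data projected onto the subset's singular vectors
            keep = np.abs(proj) / sigma > snr_min        # reject projected data below the SNR threshold
            A_red.append(np.diag(s[keep]) @ Vt[keep])    # corresponding rows of the reduced system
            d_red.append(proj[keep])

        A_red, d_red = np.vstack(A_red), np.concatenate(d_red)
        print(f"reduced system keeps {A_red.shape[0]} of {A.shape[0]} rows")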

  12. Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope's Random Error.

    PubMed

    Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia

    2015-01-01

    Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key process for analyzing the properties of alpha stable noise. Three widely used estimation methods, namely the quantile, empirical characteristic function (ECF) and logarithmic moment methods, are analyzed against Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest-precision method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data fitted by an alpha stable distribution is better than that by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error.
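
    A minimal empirical-characteristic-function sketch for the symmetric, centred special case (beta = 0, delta = 0), where |phi(t)| = exp(-(gamma*|t|)^alpha) makes alpha and gamma recoverable from the ECF at two frequencies; the full ECF regression method, the quantile method and the logarithmic moment method compared in the paper are not reproduced, and the parameter values are hypothetical.

        import numpy as np
        from scipy.stats import levy_stable

        rng = np.random.default_rng(9)
        alpha_true, gamma_true = 1.6, 0.8
        x = levy_stable.rvs(alpha_true, 0.0, loc=0.0, scale=gamma_true, size=50000, random_state=rng)

        def ecf(t, x):
            """Empirical characteristic function at frequency t."""
            return np.mean(np.exp(1j * t * x))

        # log(-log|phi(t)|) = alpha*log(gamma) + alpha*log(t), i.e. linear in log(t) with slope alpha
        t1, t2 = 0.2, 1.0
        y1 = np.log(-np.log(np.abs(ecf(t1, x))))
        y2 = np.log(-np.log(np.abs(ecf(t2, x))))
        alpha_hat = (y2 - y1) / (np.log(t2) - np.log(t1))
        gamma_hat = np.exp(y1 / alpha_hat - np.log(t1))
        print(f"alpha ~ {alpha_hat:.3f} (true {alpha_true}), gamma ~ {gamma_hat:.3f} (true {gamma_true})")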

  13. Error analysis of empirical ocean tide models estimated from TOPEX/POSEIDON altimetry

    NASA Astrophysics Data System (ADS)

    Desai, Shailen D.; Wahr, John M.; Chao, Yi

    1997-11-01

    An error budget is proposed for the TOPEX/POSEIDON (T/P) empirical ocean tide models estimated during the primary mission. The error budget evaluates the individual contribution of errors in each of the altimetric range corrections, orbit errors caused by errors in the background ocean tide potential, and errors caused by the general circulation of the oceans, to errors in the ocean tide models of the eight principal diurnal and semidiurnal tidal components, and the two principal long-period tidal components. The effect of continually updating the T/P empirical ocean tide models during the primary T/P mission is illustrated through tide gauge comparisons and then used to predict the impact of further updates during the extended mission. Both the tide gauge comparisons and the error analysis predict errors in the tide models for the eight principal diurnal and semidiurnal constituents to be of the order of 2-3 cm root-sum-square. The dominant source of errors in the T/P ocean tide models appears to be caused by the general circulation of the oceans observed by the T/P altimeter. Further updates of the T/P empirical ocean tide models during the extended mission should not provide significant improvements in the diurnal and semidiurnal ocean tide models but should provide significant improvements in the long-period ocean tide models, particularly in the monthly (Mm) tidal component.

  14. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.

  15. Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28

    ERIC Educational Resources Information Center

    Guo, Hongwen; Sinharay, Sandip

    2011-01-01

    Nonparametric, or kernel, estimation of item response curve (IRC) is a concern theoretically and operationally. Accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. In this study, we investigate…

  16. Improved Margin of Error Estimates for Proportions in Business: An Educational Example

    ERIC Educational Resources Information Center

    Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael

    2015-01-01

    This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
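
    The Agresti & Coull adjusted Wald interval is simple enough to state in a few lines; the sketch below follows the standard formula (add z^2/2 successes and z^2/2 failures, then apply the usual Wald formula to the adjusted proportion), with a hypothetical small-sample example.

        import math

        def adjusted_wald_ci(successes, n, z=1.96):
            """Agresti-Coull 'adjusted Wald' confidence interval for a proportion."""
            n_adj = n + z**2
            p_adj = (successes + z**2 / 2) / n_adj
            margin = z * math.sqrt(p_adj * (1 - p_adj) / n_adj)
            return p_adj - margin, p_adj + margin, margin

        # hypothetical extreme sample: 2 of 25 respondents report a defect
        lo, hi, moe = adjusted_wald_ci(2, 25)
        print(f"95% CI: ({lo:.3f}, {hi:.3f}), margin of error +/-{moe:.3f}")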

  17. A novel data-driven approach to model error estimation in Data Assimilation

    NASA Astrophysics Data System (ADS)

    Pathiraja, Sahani; Moradkhani, Hamid; Marshall, Lucy; Sharma, Ashish

    2016-04-01

    Error characterisation is a fundamental component of Data Assimilation (DA) studies. Effectively describing model error statistics has been a challenging area, with many traditional methods requiring some level of subjectivity (for instance in defining the error covariance structure). Recent advances have focused on removing the need for tuning of error parameters, although there are still some outstanding issues. Many methods focus only on the first and second moments, and rely on assuming multivariate Gaussian statistics. We propose a non-parametric, data-driven framework to estimate the full distributional form of model error, ie. the transition density p(xt|xt-1). All sources of uncertainty associated with the model simulations are considered, without needing to assign error characteristics/devise stochastic perturbations for individual components of model uncertainty (eg. input, parameter and structural). A training period is used to derive the error distribution of observed variables, conditioned on (potentially hidden) states. Errors in hidden states are estimated from the conditional distribution of observed variables using non-linear optimization. The framework is discussed in detail, and an application to a hydrologic case study with hidden states for one-day ahead streamflow prediction is presented. Results demonstrate improved predictions and more realistic uncertainty bounds compared to a standard tuning approach.

  18. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    NASA Technical Reports Server (NTRS)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

    Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random-effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.

  19. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    SciTech Connect

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.

  20. Interventions to reduce wrong blood in tube errors in transfusion: a systematic review.

    PubMed

    Cottrell, Susan; Watson, Douglas; Eyre, Toby A; Brunskill, Susan J; Dorée, Carolyn; Murphy, Michael F

    2013-10-01

    This systematic review addresses the issue of wrong blood in tube (WBIT). The objective was to identify interventions that have been implemented and the effectiveness of these interventions to reduce WBIT incidence in red blood cell transfusion. Eligible articles were identified through a comprehensive search of The Cochrane Library, MEDLINE, EMBASE, Cinahl, BNID, and the Transfusion Evidence Library to April 2013. Initial search criteria were wide including primary intervention or observational studies, case reports, expert opinion, and guidelines. There was no restriction by study type, language, or status. Publications before 1995, reviews or reports of a secondary nature, studies of sampling errors outwith transfusion, and articles involving animals were excluded. The primary outcome was a reduction in errors. Study characteristics, outcomes measured, and methodological quality were extracted by 2 authors independently. The principal method of analysis was descriptive. A total of 12,703 references were initially identified. Preliminary secondary screening by 2 reviewers reduced articles for detailed screening to 128 articles. Eleven articles were eventually identified as eligible, resulting in 9 independent studies being included in the review. The overall finding was that all the identified interventions reduced WBIT incidence. Five studies measured the effect of a single intervention, for example, changes to blood sample labeling, weekly feedback, handwritten transfusion requests, and an electronic transfusion system. Four studies reported multiple interventions including education, second check of ID at sampling, and confirmatory sampling. It was not clear which intervention was the most effective. Sustainability of the effectiveness of interventions was also unclear. Targeted interventions, either single or multiple, can lead to a reduction in WBIT; but the sustainability of effectiveness is uncertain. Data on the pre- and postimplementation of

  1. An estimate of asthma prevalence in Africa: a systematic analysis

    PubMed Central

    Adeloye, Davies; Chan, Kit Yee; Rudan, Igor; Campbell, Harry

    2013-01-01

    Aim: To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to provide information that will help inform the planning of the public health response to the disease. Methods: We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population-based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year. Results: Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers. Conclusion: Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be under-estimated. There is a need for national governments in Africa

  2. Population size estimation in Yellowstone wolves with error-prone noninvasive microsatellite genotypes.

    PubMed

    Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas

    2003-07-01

    Determining population sizes can be difficult, but is essential for conservation. Counting distinct microsatellite genotypes in DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias. PMID:12803649

  3. Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy

    NASA Astrophysics Data System (ADS)

    Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid

    2015-07-01

    Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.

  4. The shrinking Sun: A systematic error in local correlation tracking of solar granulation

    NASA Astrophysics Data System (ADS)

    Löptien, B.; Birch, A. C.; Duvall, T. L.; Gizon, L.; Schou, J.

    2016-05-01

    Context. Local correlation tracking of granulation (LCT) is an important method for measuring horizontal flows in the photosphere. This method exhibits a systematic error that looks like a flow converging toward disk center, which is also known as the shrinking-Sun effect. Aims: We aim to study the nature of the shrinking-Sun effect for continuum intensity data and to derive a simple model that can explain its origin. Methods: We derived LCT flow maps by running the LCT code Fourier Local Correlation Tracking (FLCT) on tracked and remapped continuum intensity maps provided by the Helioseismic and Magnetic Imager (HMI) onboard the Solar Dynamics Observatory (SDO). We also computed flow maps from synthetic continuum images generated from STAGGER code simulations of solar surface convection. We investigated the origin of the shrinking-Sun effect by generating an average granule from synthetic data from the simulations. Results: The LCT flow maps derived from the HMI data and the simulations exhibit a shrinking-Sun effect of comparable magnitude. The origin of this effect is related to the apparent asymmetry of granulation originating from radiative transfer effects when observing with a viewing angle inclined from vertical. This causes, in combination with the expansion of the granules, an apparent motion toward disk center.

  5. Analysis of systematic errors of the ASM/RXTE monitor and GT-48 γ-ray telescope

    NASA Astrophysics Data System (ADS)

    Fidelis, V. V.

    2011-06-01

    Observational data concerning variations in the light curves of supernova remnants—the Crab Nebula, Cassiopeia A, and Tycho Brahe—and the Vela pulsar on a 14-day timescale that may be attributed to systematic errors of the ASM/RXTE monitor are presented. The experimental systematic errors of the GT-48 γ-ray telescope in the mono mode of operation were also determined. For this, the observational data of TeV J2032 + 4130 (Cyg γ-2, according to the Crimean version) were used, and the stationary nature of its γ-ray emission was confirmed by long-term observations performed with HEGRA and MAGIC. The results of this research allow us to draw the following conclusions: (1) light curves of supernova remnants averaged over long observing periods show false statistically significant flux variations, (2) the level of systematic errors is proportional to the registered flux and decreases with increasing temporal scale of averaging, (3) the light curves of sources may be modulated with an annual period, and (4) the systematic errors of the GT-48 γ-ray telescope arising from observations in the mono mode and data processing with the stereo algorithm amount to 0.12 min^-1.

  6. UNDERSTANDING SYSTEMATIC MEASUREMENT ERROR IN THERMAL-OPTICAL ANALYSIS FOR PM BLACK CARBON USING RESPONSE SURFACES AND SURFACE CONFIDENCE INTERVALS

    EPA Science Inventory

    Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...

  7. Estimating error cross-correlations in soil moisture data sets using extended collocation analysis

    NASA Astrophysics Data System (ADS)

    Gruber, A.; Su, C.-H.; Crow, W. T.; Zwieback, S.; Dorigo, W. A.; Wagner, W.

    2016-02-01

    Global soil moisture records are essential for studying the role of hydrologic processes within the larger Earth system. Various studies have shown the benefit of assimilating satellite-based soil moisture data into water balance models or merging multisource soil moisture retrievals into a unified data set. However, this requires an appropriate parameterization of the error structures of the underlying data sets. While triple collocation (TC) analysis has been widely recognized as a powerful tool for estimating random error variances of coarse-resolution soil moisture data sets, the estimation of error cross covariances remains an unresolved challenge. Here we propose a method—referred to as extended collocation (EC) analysis—for estimating error cross-correlations by generalizing the TC method to an arbitrary number of data sets and relaxing the assumption made therein of zero error cross-correlation for certain data set combinations. A synthetic experiment shows that EC analysis is able to reliably recover true error cross-correlation levels. Applied to real soil moisture retrievals from Advanced Microwave Scanning Radiometer-EOS (AMSR-E) C-band and X-band observations together with advanced scatterometer (ASCAT) retrievals, modeled data from Global Land Data Assimilation System (GLDAS)-Noah and in situ measurements drawn from the International Soil Moisture Network, EC yields reasonable and strong nonzero error cross-correlations between the two AMSR-E products. Against expectation, nonzero error cross-correlations are also found between ASCAT and AMSR-E. We conclude that the proposed EC method represents an important step toward a fully parameterized error covariance matrix for coarse-resolution soil moisture data sets, which is vital for any rigorous data assimilation framework or data merging scheme.
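
    For context, the sketch below implements the standard covariance-notation triple collocation estimator on synthetic data with assumed error levels; EC generalises this to more data sets and relaxes the zero error cross-correlation assumption, which this baseline still makes.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 10000
      truth = rng.standard_normal(n)

      # Three hypothetical soil moisture products with independent random errors.
      x = truth + rng.normal(0.0, 0.30, n)
      y = truth + rng.normal(0.0, 0.20, n)
      z = truth + rng.normal(0.0, 0.40, n)

      C = np.cov(np.vstack([x, y, z]))

      # Covariance-based triple collocation (assumes zero error cross-correlation).
      err_var_x = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
      err_var_y = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
      err_var_z = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]

      print(np.sqrt([err_var_x, err_var_y, err_var_z]))   # approx [0.30, 0.20, 0.40]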

  8. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates.

    PubMed

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended.
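
    A small utility of the kind such comparisons rely on is the great-circle distance between a geocoded point and a GPS fix; the coordinates below are hypothetical, not study data.

      import math

      def haversine_m(lat1, lon1, lat2, lon2):
          """Great-circle distance in metres between two WGS84 coordinates."""
          r = 6371000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dphi = math.radians(lat2 - lat1)
          dlam = math.radians(lon2 - lon1)
          a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
          return 2.0 * r * math.asin(math.sqrt(a))

      # Hypothetical residence: automated geocoder result vs handheld GPS fix.
      print(round(haversine_m(42.3601, -83.0600, 42.3604, -83.0596), 1), "m")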

  9. Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates

    PubMed Central

    Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma

    2015-01-01

    Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended. PMID:25670023

  10. Estimation of 3D reconstruction errors in a stereo-vision system

    NASA Astrophysics Data System (ADS)

    Belhaoua, A.; Kohler, S.; Hirsch, E.

    2009-06-01

    The paper presents an approach for error estimation for the various steps of an automated 3D vision-based reconstruction procedure for manufactured workpieces. The process is based on a priori planning of the task and built around a cognitive intelligent sensory system using so-called Situation Graph Trees (SGT) as a planning tool. Such an automated quality control system requires the coordination of a set of complex processes that sequentially perform data acquisition, its quantitative evaluation and the comparison with a reference model (e.g., CAD object model) in order to evaluate the object quantitatively. To ensure efficient quality control, the aim is to be able to state whether reconstruction results fulfill tolerance rules or not. Thus, the goal is to evaluate independently the error for each step of the stereo-vision based 3D reconstruction (e.g., for calibration, contour segmentation, matching and reconstruction) and then to estimate the error for the whole system. In this contribution, we analyze particularly the segmentation error due to localization errors for extracted edge points supposed to belong to lines and curves composing the outline of the workpiece under evaluation. The fitting parameters describing these geometric features are used as a quality measure to determine confidence intervals and finally to estimate the segmentation errors. These errors are then propagated through the whole reconstruction procedure, enabling evaluation of their effect on the final 3D reconstruction result, specifically on position uncertainties. Lastly, analysis of these error estimates enables evaluation of the quality of the 3D reconstruction, as illustrated by the experimental results shown.

  11. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is off-axis from the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis gave a significant (p = 0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice.
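
    The area part of the correction is simple geometry. The sketch below uses assumed velocity and axis values and omits the paper's beam-offset correction; it only contrasts volume flow computed with an elliptical cross-section against the circular assumption built from the major axis alone.

      import math

      def volume_flow_ml_min(mean_velocity_cm_s, major_mm, minor_mm, elliptical=True):
          """Volume flow from a mean velocity and a vessel cross-section.
          Uses an elliptical area (pi*a*b); if elliptical is False, a circle is
          built from the major axis alone, which overestimates the area."""
          a_cm = major_mm / 2.0 / 10.0
          b_cm = (minor_mm if elliptical else major_mm) / 2.0 / 10.0
          area_cm2 = math.pi * a_cm * b_cm
          return mean_velocity_cm_s * area_cm2 * 60.0   # cm^3/min == mL/min

      # Hypothetical fistula: 10.2 mm major axis, minor axis ~8.6% smaller.
      print(volume_flow_ml_min(60.0, 10.2, 9.4, elliptical=True))
      print(volume_flow_ml_min(60.0, 10.2, 9.4, elliptical=False))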

  12. An online model correction method based on an inverse problem: Part II—systematic model error correction

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-11-01

    An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on the dataset of model errors (MEs) in past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be iteratively obtained by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and evolution, a systematic model error correction is given based on the least-squares approach by first using the past MEs. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets associated with the initial condition and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated equator-to-pole geopotential gradient and westerly wind of GRAPES-GFS in the Northern Hemisphere were largely enhanced, and the biases of temperature and wind in the tropics were strongly reduced. Therefore, the correction results in a more skillful forecast with lower mean bias and root-mean-square error and a higher anomaly correlation coefficient.

  13. An a posteriori error estimator for shape optimization: application to EIT

    NASA Astrophysics Data System (ADS)

    Giacomini, M.; Pantz, O.; Trabelsi, K.

    2015-11-01

    In this paper we account for the numerical error introduced by the Finite Element approximation of the shape gradient to construct a guaranteed shape optimization method. We present a goal-oriented strategy inspired by the complementary energy principle to construct a constant-free, fully-computable a posteriori error estimator and to derive a certified upper bound of the error in the shape gradient. The resulting Adaptive Boundary Variation Algorithm (ABVA) is able to identify a genuine descent direction at each iteration and features a reliable stopping criterion for the optimization loop. Some preliminary numerical results for the inverse identification problem of Electrical Impedance Tomography are presented.

  14. Uncertainty quantification for radiation measurements: Bottom-up error variance estimation using calibration information.

    PubMed

    Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S

    2016-02-01

    One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. Previous papers that ignore error in predictors
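
    As a toy version of case (1) above (inverse regression with negligible error in predictors, assumed data rather than the gamma-spectroscopy example), the sketch below propagates calibration-fit and measurement uncertainty into the inverted value by first-order error propagation.

      import numpy as np

      rng = np.random.default_rng(3)

      # Hypothetical calibration data: known standards x, measured responses y.
      x_std = np.linspace(1.0, 10.0, 12)
      y_std = 2.0 * x_std + 1.0 + rng.normal(0.0, 0.3, x_std.size)

      # Straight-line fit y = m*x + b with parameter covariance.
      (m, b), cov = np.polyfit(x_std, y_std, 1, cov=True)
      var_m, var_b, cov_mb = cov[0, 0], cov[1, 1], cov[0, 1]

      # New measurement to be inverted, with its own repeatability variance.
      y_new, var_y_new = 11.3, 0.3 ** 2
      x_hat = (y_new - b) / m

      # First-order (bottom-up) propagation of calibration + measurement uncertainty.
      var_x_hat = (var_y_new + var_b + x_hat ** 2 * var_m + 2.0 * x_hat * cov_mb) / m ** 2
      print(f"x_hat = {x_hat:.3f} +/- {np.sqrt(var_x_hat):.3f}")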

  15. Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries

    NASA Technical Reports Server (NTRS)

    Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)

    2002-01-01

    This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures that grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module are demonstrated using both simple model problems and complex three-dimensional examples using meshes with 10^6 to 10^7 cells.
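
    The tau-extrapolation idea can be shown on a one-dimensional model problem (a sketch with an assumed discretisation, not the Cartesian solver itself): restrict a fine-grid solution to a coarse grid, apply the coarse operator, and read the residual against the coarse right-hand side as a truncation-error estimate.

      import numpy as np

      def solve_poisson(n):
          """Solve -u'' = f on (0,1) with u(0)=u(1)=0, f = pi^2 sin(pi x)."""
          h = 1.0 / n
          x = np.linspace(0.0, 1.0, n + 1)
          f = np.pi ** 2 * np.sin(np.pi * x)
          A = (np.diag(2.0 * np.ones(n - 1)) - np.diag(np.ones(n - 2), 1)
               - np.diag(np.ones(n - 2), -1)) / h ** 2
          u = np.zeros(n + 1)
          u[1:-1] = np.linalg.solve(A, f[1:-1])
          return x, u, f, h

      x_f, u_f, f_f, h_f = solve_poisson(64)   # fine grid
      u_c, f_c = u_f[::2], f_f[::2]            # restrict solution/RHS to the coarse grid
      h_c = 2.0 * h_f

      # Coarse operator applied to the restricted fine solution; the residual
      # against the coarse RHS is the tau-based local truncation error estimate.
      Lu = (-u_c[:-2] + 2.0 * u_c[1:-1] - u_c[2:]) / h_c ** 2
      tau = Lu - f_c[1:-1]
      print("max |tau| estimate:", np.abs(tau).max())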

  16. Impacts of Characteristics of Errors in Radar Rainfall Estimates for Rainfall-Runoff Simulation

    NASA Astrophysics Data System (ADS)

    KO, D.; PARK, T.; Lee, T. S.; Shin, J. Y.; Lee, D.

    2015-12-01

    For flood prediction, weather radar has been commonly employed to measure the amount of precipitation and its spatial distribution. However, rainfall estimated from radar contains uncertainty caused by errors such as beam blockage and ground clutter. Although previous studies have focused on removing errors from radar data, it is crucial to evaluate the runoff volumes that are influenced primarily by these radar errors. Furthermore, the rainfall resolutions used in previous studies of rainfall uncertainty analysis or distributed hydrological simulation are too coarse for practical application. Therefore, in the current study, we tested the effects of radar rainfall errors on rainfall-runoff with a high-resolution approach, called the spatial error model (SEM), in which random and cross-correlated radar errors are generated synthetically. A number of events for the Nam River dam region were tested to investigate the peak discharge from a basin according to error variance. The results indicate that dependent (cross-correlated) errors produce much higher variations in peak discharge than independent random errors. To further investigate the effect of the magnitude of cross-correlation between radar errors, different magnitudes of spatial cross-correlation were employed in the rainfall-runoff simulation. The results demonstrate that stronger correlation leads to higher variation of peak discharge and vice versa. We conclude that the error structure in radar rainfall estimates significantly affects prediction of the runoff peak. Therefore, efforts must be made not only to remove the radar rainfall error itself but also to weaken the cross-correlation structure of radar errors in order to forecast flood events more accurately. Acknowledgements: This research was supported by a grant from a Strategic Research Project (Development of Flood Warning and Snowfall Estimation Platform Using Hydrological Radars), which
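
    A hedged sketch of the error-generation step (assumed grid size and correlation length, not the Nam River configuration): spatially cross-correlated multiplicative radar errors can be drawn by applying the Cholesky factor of an exponential covariance model to white noise.

      import numpy as np

      rng = np.random.default_rng(4)

      # Hypothetical radar grid and correlation length (in grid cells).
      nx = ny = 20
      corr_len = 5.0
      xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
      pts = np.column_stack([xs.ravel(), ys.ravel()])
      dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      cov = np.exp(-dist / corr_len)                 # exponential correlation model

      # The Cholesky factor turns independent noise into a spatially correlated field.
      chol = np.linalg.cholesky(cov + 1e-10 * np.eye(nx * ny))
      correlated = (chol @ rng.standard_normal(nx * ny)).reshape(ny, nx)
      independent = rng.standard_normal((ny, nx))

      lag1 = lambda e: np.corrcoef(e[:, :-1].ravel(), e[:, 1:].ravel())[0, 1]
      print("lag-1 correlation:", lag1(correlated), "vs", lag1(independent))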

  17. The Thirty Gigahertz Instrument Receiver for the QUIJOTE Experiment: Preliminary Polarization Measurements and Systematic-Error Analysis

    PubMed Central

    Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo

    2015-01-01

    This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906

  18. Computation of the factorized error covariance of the difference between correlated estimators

    NASA Technical Reports Server (NTRS)

    Wolff, Peter J.; Mohan, Srinivas N.; Stienon, Francis M.; Bierman, Gerald J.

    1990-01-01

    A state estimation problem where some of the measurements may be common to two or more data sets is considered. Two approaches for computing the error covariance of the difference between filtered estimates (for each data set) are discussed. The first algorithm is based on postprocessing of the Kalman gain profiles of two correlated estimators. It uses UD factors of the covariance of the relative error. The second algorithm uses a square root information filter applied to relative error analysis. In the absence of process noise, the square root information filter is computationally more efficient and more flexible than the Kalman gain (covariance update) method. Both the algorithms (covariance and information matrix based) are applied to a Venus orbiter simulation, and their performances are compared.
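
    The quantity both algorithms target can be written compactly: for two estimates with error covariances P1 and P2 and error cross-covariance P12, the covariance of their difference is P1 + P2 - P12 - P12^T. The sketch below checks this identity by Monte Carlo on assumed covariances (not the Venus orbiter case, and without the UD or square root information factorisations).

      import numpy as np

      rng = np.random.default_rng(5)

      # Hypothetical error covariances of two correlated 2-state estimators.
      P1 = np.array([[1.0, 0.2], [0.2, 0.8]])
      P2 = np.array([[0.9, 0.1], [0.1, 1.1]])
      P12 = np.array([[0.5, 0.1], [0.0, 0.4]])     # cross-covariance of the errors

      P_diff = P1 + P2 - P12 - P12.T               # covariance of (estimate1 - estimate2)

      # Monte Carlo check: draw correlated error pairs from the joint covariance.
      joint = np.block([[P1, P12], [P12.T, P2]])
      e = rng.multivariate_normal(np.zeros(4), joint, size=200000)
      e1, e2 = e[:, :2], e[:, 2:]
      print(np.allclose(np.cov((e1 - e2).T), P_diff, atol=0.02))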

  19. Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'

    NASA Technical Reports Server (NTRS)

    Errico, Ronald M.; Prive, Nikki C.; Gu, Wei

    2014-01-01

    The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC-method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting the availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that the rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
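
    A bare-bones sketch of the NMC method itself (synthetic forecasts and an assumed rescaling factor, not the OSSE of the paper): the background-error covariance is approximated from differences between forecasts of different lead times valid at the same time.

      import numpy as np

      rng = np.random.default_rng(6)

      # Hypothetical archive: 24 h and 48 h forecasts of an n-variable state,
      # valid at the same times, for many past dates.
      n_var, n_dates = 10, 500
      truth = rng.standard_normal((n_dates, n_var))
      fcst_24 = truth + 0.5 * rng.standard_normal((n_dates, n_var))
      fcst_48 = truth + 0.8 * rng.standard_normal((n_dates, n_var))

      # NMC-method estimate: covariance of the forecast differences, rescaled by
      # a tuned factor (0.5 assumed here) before use as the background-error B.
      diff = fcst_48 - fcst_24
      B_nmc = 0.5 * np.cov(diff, rowvar=False)

      print("mean background-error variance:", np.diag(B_nmc).mean())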

  1. SU-E-T-550: Range Effects in Proton Therapy Caused by Systematic Errors in the Stoichiometric Calibration

    SciTech Connect

    Doolan, P; Dias, M; Collins Fekete, C; Seco, J

    2014-06-01

    Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, with a mean/max error of -1.5%/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose and cortical bone were -3.0%, -2.1% and -0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium)

  2. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-area Sky Surveys

    NASA Astrophysics Data System (ADS)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Tucker, D.; Kessler, R.; Annis, J.; Bernstein, G. M.; Boada, S.; Burke, D. L.; Finley, D. A.; James, D. J.; Kent, S.; Lin, H.; Marriner, J.; Mondrik, N.; Nagasawa, D.; Rykoff, E. S.; Scolnic, D.; Walker, A. R.; Wester, W.; Abbott, T. M. C.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Capozzi, D.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D’Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; Melchior, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Vikram, V.; The DES Collaboration

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for
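
    The mechanism can be illustrated with toy synthetic photometry (assumed band, spectra and throughput tilt, not DES calibration data): the same throughput perturbation shifts the magnitudes of a blue and a red source by different amounts, and that difference is the chromatic error.

      import numpy as np

      wl = np.linspace(400.0, 550.0, 500)               # wavelength grid, nm

      def synth_mag(flux, throughput):
          """Synthetic broadband magnitude with an arbitrary zeropoint."""
          return -2.5 * np.log10(np.trapz(flux * throughput, wl) / np.trapz(throughput, wl))

      # Hypothetical blue and red power-law spectra.
      blue_src = (wl / 475.0) ** -2.0
      red_src = (wl / 475.0) ** 2.0

      # Nominal throughput vs a chromatically tilted one (e.g. extra aerosol).
      nominal = np.exp(-0.5 * ((wl - 475.0) / 40.0) ** 2)
      tilted = nominal * (1.0 - 0.002 * (wl - 475.0))   # 0.2 %/nm tilt

      for name, src in (("blue", blue_src), ("red", red_src)):
          dmag = synth_mag(src, tilted) - synth_mag(src, nominal)
          print(f"{name} source shift: {1000.0 * dmag:+.1f} mmag")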

  3. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-area Sky Surveys

    NASA Astrophysics Data System (ADS)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Tucker, D.; Kessler, R.; Annis, J.; Bernstein, G. M.; Boada, S.; Burke, D. L.; Finley, D. A.; James, D. J.; Kent, S.; Lin, H.; Marriner, J.; Mondrik, N.; Nagasawa, D.; Rykoff, E. S.; Scolnic, D.; Walker, A. R.; Wester, W.; Abbott, T. M. C.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Capozzi, D.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; Melchior, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Vikram, V.; DES Collaboration

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%-2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for

  4. Knowledge of results for motor learning: relationship between error estimation and knowledge of results frequency.

    PubMed

    Guadagnoli, M A; Kohl, R M

    2001-06-01

    The authors of the present study investigated the apparent contradiction between early and more recent views of knowledge of results (KR), the idea that how one is engaged before receiving KR may not be independent of how one uses that KR. In a 2 × 2 factorial design, participants (N = 64) practiced a simple force-production task and (a) were required, or not required, to estimate error about their previous response and (b) were provided KR either after every response (100%) or after every 5th response (20%) during acquisition. A no-KR retention test revealed an interaction between acquisition error estimation and KR frequencies. The group that received 100% KR and was required to error estimate during acquisition performed the best during retention. The 2 groups that received 20% KR performed less well. Finally, the group that received 100% KR and was not required to error estimate during acquisition performed the poorest during retention. One general interpretation of that pattern of results is that motor learning is an increasing function of the degree to which participants use KR to test response hypotheses (J. A. Adams, 1971; R. A. Schmidt, 1975). Practicing simple responses coupled with error estimation may embody response hypotheses that can be tested with KR, thus benefiting motor learning most under a 100% KR condition. Practicing simple responses without error estimation is less likely to embody response hypotheses, however, which may increase the probability that participants will use KR to guide upcoming responses, thus attenuating motor learning under a 100% KR condition. The authors conclude, therefore, that how one is engaged before receiving KR may not be independent of how one uses KR. PMID:11404216

  5. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error.

    PubMed

    Stenroos, Matti; Hauk, Olaf

    2013-11-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity of the most important compartment, the skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG+EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on the amplitudes of lead fields and spatial filter vectors, but the effect on the corresponding morphologies was small. The localization performance of the EEG or combined MEG+EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum-norm estimation to EEG or combined MEG+EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only.
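
    For reference, a minimal minimum-norm estimator looks as follows (random lead field and an assumed regularisation level, not the realistic BEM head models of the study); the point is that the estimate is a linear spatial filter applied to the data.

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical forward model: 60 sensors, 500 candidate sources.
      n_sens, n_src = 60, 500
      L = rng.standard_normal((n_sens, n_src))          # lead-field (gain) matrix

      # Minimum-norm spatial filter with Tikhonov regularisation.
      lam2 = 0.1 * np.trace(L @ L.T) / n_sens
      W = L.T @ np.linalg.inv(L @ L.T + lam2 * np.eye(n_sens))

      # Simulate data from one active source plus sensor noise, then estimate.
      j_true = np.zeros(n_src)
      j_true[123] = 1.0
      y = L @ j_true + 0.05 * rng.standard_normal(n_sens)
      j_hat = W @ y
      print("peak of |j_hat| at source index:", int(np.argmax(np.abs(j_hat))))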

  6. Minimizing systematic errors from atmospheric multiple scattering and satellite viewing geometry in coastal zone color scanner level IIA imagery

    NASA Technical Reports Server (NTRS)

    Martin, D. L.; Perry, M. J.

    1994-01-01

    Water-leaving radiances and phytoplankton pigment concentrations are calculated from coastal zone color scanner (CZCS) radiance measurements by removing atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. The single greatest source of error in CZCS atmospheric correction algorithms is the assumption that these Rayleigh and aerosol radiances are separable. Multiple-scattering interactions between Rayleigh and aerosol components cause systematic errors in calculated aerosol radiances, and the magnitude of these errors is dependent on aerosol type and optical depth and on satellite viewing geometry. A technique was developed which extends the results of previous radiative transfer modeling by Gordon and Castano to predict the magnitude of these systematic errors for simulated CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere. The simulated image mathematically duplicates the exact satellite, Sun, and pixel locations of an actual CZCS image. Errors in the aerosol radiance at 443 nm are calculated for a range of aerosol optical depths. When pixels in the simulated image exceed an error threshold, the corresponding pixels in the actual CZCS image are flagged and excluded from further analysis or from use in image compositing or compilation of pigment concentration databases. Studies based on time series analyses or compositing of CZCS imagery which do not address Rayleigh-aerosol multiple scattering should be interpreted cautiously, since the fundamental assumption used in their atmospheric correction algorithm is flawed.

  7. ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve

    PubMed Central

    Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk

    2014-01-01

    In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725

  8. Least squares support vector machines for direction of arrival estimation with error control and validation.

    SciTech Connect

    Christodoulou, Christos George (University of New Mexico, Albuquerque, NM); Abdallah, Chaouki T. (University of New Mexico, Albuquerque, NM); Rohwer, Judd Andrew

    2003-02-01

    The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.

  9. Estimation of Separation Buffers for Wind-Prediction Error in an Airborne Separation Assistance System

    NASA Technical Reports Server (NTRS)

    Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette

    2009-01-01

    Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account for and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities in an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors of up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times current-day density with no more than a 60% increase in separation distance buffer.

  10. Toward a Framework for Systematic Error Modeling of NASA Spaceborne Radar with NOAA/NSSL Ground Radar-Based National Mosaic QPE

    NASA Technical Reports Server (NTRS)

    Kirstettier, Pierre-Emmanual; Honh, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.

    2011-01-01

    Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at the ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-Earth-orbiting satellites, such as microwave imagers and dual-wavelength radars of the Global Precipitation Measurement (GPM) mission.

  11. Modeling the Error of Estimated River Discharge in the Eurasian Pan-Arctic

    NASA Astrophysics Data System (ADS)

    Shiklomanov, A.; Lammers, R.; Yakovleva, T.; Vorosmarty, C.

    2004-05-01

    Recent work by Peterson et al. (2002) has shown increases in the river discharge to the Arctic Ocean of the six largest Eurasian rivers to be 7% (2.0 ± 0.7 km³/year) from 1936 to 1999. As with most measures of the natural environment, this increase represents the trend of a highly variable time series containing annual, seasonal, and daily cycles. Accurate estimates of the error associated with these time- and space-aggregated annual series have not been well characterized. We seek here to define the expected error surrounding the time series of annual river discharge in the pan-Arctic region. Using standard hydrometric data along with information on the i) frequency and precision of measurements, ii) characteristics of river channel capacity, and iii) method of discharge computation, we develop a model of error for river discharge. We focus on the random error in the daily river discharge of the major downstream gauges of the pan-Arctic drainage system. A simplified method to define possible errors in the average discharge data has been developed to estimate the reliability of monthly and yearly data over the long-term period of observation (up to 64 years). Results to date have shown that the accuracy of daily discharge estimates for the large Eurasian rivers is highly variable throughout any given year. Maximum errors, exceeding 40% for some rivers, take place during ice and backwater conditions when the stage-discharge rating curves cannot be applied. The annual discharge over the long term is more accurate, with 3-10% error. This error range for annual data holds even for more recent periods, such as the last 15 years, when the actual number of discharge measurements declined significantly. With this error model we have found that the estimated error for river discharge in the six largest Eurasian watersheds can be reduced by up to 50%, to 2.0 ± 0.4 km³/year, over the previous estimates

  12. Upper bounds on position error of a single location estimate in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Gholami, Mohammad Reza; Ström, Erik G.; Wymeersch, Henk; Gezici, Sinan

    2014-12-01

    This paper studies upper bounds on the position error for a single estimate of an unknown target node position based on distance estimates in wireless sensor networks. In this study, we investigate a number of approaches to confine the target node position to bounded sets for different scenarios. Firstly, if at least one distance estimate error is positive, we derive a simple, but potentially loose upper bound, which is always valid. In addition, assuming that the probability density of measurement noise is nonzero for positive values and a sufficiently large number of distance estimates are available, we propose an upper bound, which is valid with high probability. Secondly, if a reasonable lower bound on negative measurement errors is known a priori, we manipulate the distance estimates to obtain a new set with positive measurement errors. In general, we formulate bounds as nonconvex optimization problems. To solve the problems, we employ a relaxation technique and obtain semidefinite programs. We also propose a simple approach to find the bounds in closed form. Simulation results show reasonable tightness for different bounds in various situations.

  13. Estimation of the minimum mRNA splicing error rate in vertebrates.

    PubMed

    Skandalis, A

    2016-01-01

    The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. PMID:26811995

  14. Multiplicative errors in the galaxy power spectrum: self-calibration of unknown photometric systematics for precision cosmology

    NASA Astrophysics Data System (ADS)

    Shafer, Daniel L.; Huterer, Dragan

    2015-03-01

    We develop a general method to `self-calibrate' observations of galaxy clustering with respect to systematics associated with photometric calibration errors. We first point out the danger posed by the multiplicative effect of calibration errors, where large-angle error propagates to small scales and may be significant even if the large-scale information is cleaned or not used in the cosmological analysis. We then propose a method to measure the arbitrary large-scale calibration errors and use these measurements to correct the small-scale (high-multipole) power which is most useful for constraining the majority of cosmological parameters. We demonstrate the effectiveness of our approach on synthetic examples and briefly discuss how it may be applied to real data.

  15. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.

    2014-10-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; thus, efforts to reduce errors in fossil fuel emissions are necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g. ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the
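
    The effect of temporally correlated random error described above can be illustrated with a minimal sketch: under an assumed AR(1) correlation between successive annual errors, the 2σ uncertainty of a decadal mean no longer shrinks as 1/sqrt(n). The single-year error level and the correlation coefficients below are hypothetical, not the study's values.

```python
import numpy as np

def var_of_mean_ar1(sigma, rho, n):
    """Variance of the mean of n equally spaced errors with standard deviation
    sigma and AR(1) lag-1 autocorrelation rho (rho = 0 recovers sigma**2 / n)."""
    k = np.arange(1, n)
    return (sigma**2 / n**2) * (n + 2.0 * np.sum((n - k) * rho**k))

sigma = 0.5   # hypothetical 1-sigma error of a single annual estimate, Pg C / yr
n = 10        # decade of annual values

for rho in (0.0, 0.5, 0.95):
    two_sigma = 2.0 * np.sqrt(var_of_mean_ar1(sigma, rho, n))
    print(f"rho = {rho:4.2f}: 2-sigma uncertainty of the decadal mean = {two_sigma:.2f} Pg C/yr")
```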

  16. Estimation of Smoothing Error in SBUV Profile and Total Ozone Retrieval

    NASA Technical Reports Server (NTRS)

    Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.

    2011-01-01

    Data from the Nimbus-4, Nimbus-7 Solar Backscatter Ultraviolet (SBUV) and seven of the NOAA series of SBUV/2 instruments spanning 41 years are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profiles and total ozone retrievals. We discuss here the smoothing errors that describe the components of the profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8 to 10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of the ensemble of measured ozone profiles with high vertical resolution would be a formal representation of the actual ozone variability. We merged the MLS (version 3) and sonde ozone profiles to calculate the covariance matrix, which, in the general case of a single-profile retrieval, might be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors were detected in the troposphere and might be as large as 15-20%, decreasing rapidly with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5% and between 10 and 1 hPa the smoothing errors are on the order of 1%. We
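
    A minimal sketch of the smoothing-error calculation described here, using the standard relation S_s = (A - I) S_a (A - I)^T between the averaging-kernel matrix A and the natural-variability covariance S_a. The layer grid, kernel width, and variability level below are synthetic placeholders, not the SBUV(/2) values.

```python
import numpy as np

nlev = 21
z = np.linspace(0.0, 60.0, nlev)                 # km, synthetic layer grid

# Synthetic natural-variability covariance S_a: 10% (1-sigma) with 5 km correlation length.
sigma = 0.10 * np.ones(nlev)
corr = np.exp(-np.abs(z[:, None] - z[None, :]) / 5.0)
S_a = np.outer(sigma, sigma) * corr

# Synthetic averaging-kernel matrix A: rows are ~8 km wide Gaussian kernels,
# standing in for the coarse vertical resolution of the observing system.
A = np.exp(-0.5 * ((z[:, None] - z[None, :]) / 8.0) ** 2)
A /= A.sum(axis=1, keepdims=True)

# Smoothing-error covariance: S_s = (A - I) S_a (A - I)^T
S_s = (A - np.eye(nlev)) @ S_a @ (A - np.eye(nlev)).T
print("smoothing error (1-sigma, %):", np.round(100.0 * np.sqrt(np.diag(S_s)), 1))
```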

  17. Estimating genotype error rates from high-coverage next-generation sequence data.

    PubMed

    Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil

    2014-11-01

    Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods.

  18. Error estimates of triangular finite elements under a weak angle condition

    NASA Astrophysics Data System (ADS)

    Mao, Shipeng; Shi, Zhongci

    2009-08-01

    In this note, by analyzing the interpolation operator of Girault and Raviart given in [V. Girault, P.A. Raviart, Finite element methods for Navier-Stokes equations, Theory and algorithms, in: Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 1986] over triangular meshes, we prove optimal interpolation error estimates for Lagrange triangular finite elements of arbitrary order under the maximal angle condition in a unified and simple way. The key estimate is only an application of the Bramble-Hilbert lemma.

  19. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    PubMed Central

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-01-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
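
    For context, a minimal sketch of the TAMSD baseline against which FIMA is compared: the exponent is read from the log-log slope of the time-averaged MSD, and added measurement noise visibly biases the estimate at short lags. The trajectory model and noise level are hypothetical; the FIMA estimator itself is not reproduced here.

```python
import numpy as np

def tamsd(x, lags):
    """Time-averaged mean square displacement of a 1-D trajectory x."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Toy trajectory: ordinary Brownian motion (true exponent alpha = 1)
# corrupted by Gaussian measurement noise.
rng = np.random.default_rng(1)
n = 10_000
x = np.cumsum(rng.normal(0.0, 1.0, n)) + rng.normal(0.0, 2.0, n)

lags = np.arange(1, 101)
msd = tamsd(x, lags)

# Anomalous exponent from the slope of log(TAMSD) vs log(lag).
alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
print(f"estimated alpha = {alpha:.2f} (true value 1.0; noise biases the estimate at short lags)")
```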

  20. A Refined Algorithm On The Estimation Of Residual Motion Errors In Airborne SAR Images

    NASA Astrophysics Data System (ADS)

    Zhong, Xuelian; Xiang, Maosheng; Yue, Huanyin; Guo, Huadong

    2010-10-01

    Due to the lack of accuracy in the navigation system, residual motion errors (RMEs) frequently appear in the airborne SAR image. For very high resolution SAR imaging and repeat-pass SAR interferometry, the residual motion errors must be estimated and compensated. We previously proposed an algorithm to estimate the residual motion errors for an individual SAR image. It exploits point-like targets distributed along the azimuth direction, and not only corrects the phase, but also improves the azimuth focusing. But the required point targets are selected by hand, which is time- and labor-consuming. In addition, the algorithm is sensitive to noise. In this paper, a refined algorithm is proposed to address these two shortcomings. With real X-band airborne SAR data, the feasibility and accuracy of the refined algorithm are demonstrated.

  1. Estimating the anomalous diffusion exponent for single particle tracking data with measurement errors - An alternative approach

    NASA Astrophysics Data System (ADS)

    Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander

    2015-06-01

    Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors raise difficulty in the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise) is discussed. Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors.

  2. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, Elena

    2016-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often 2 years, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises the overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically trained network, in a rigorous cross-validation experiment referred to a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accuracy is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
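
    A minimal sketch of an asymmetric squared-error objective of the kind described, penalising overpredictions more heavily than underpredictions. The weight factor and example discharges are hypothetical; in practice the loss would serve as the training objective of the regional regression model (e.g. the feedforward network) in place of the ordinary MSE.

```python
import numpy as np

def asymmetric_mse(y_true, y_pred, w_over=4.0):
    """Squared-error loss that penalises overpredictions w_over times more
    than underpredictions (w_over = 1 recovers the ordinary MSE)."""
    err = y_pred - y_true
    weights = np.where(err > 0.0, w_over, 1.0)
    return np.mean(weights * err ** 2)

y_true = np.array([100.0, 150.0, 80.0])   # "true" threshold discharges, m^3/s (hypothetical)
y_pred = np.array([120.0, 140.0, 95.0])   # regional-model estimates (hypothetical)
print("symmetric MSE :", asymmetric_mse(y_true, y_pred, w_over=1.0))
print("asymmetric MSE:", asymmetric_mse(y_true, y_pred, w_over=4.0))
```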

  3. Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions

    NASA Astrophysics Data System (ADS)

    Toth, E.

    2015-06-01

    In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to the flow peaks with a given return period (often the 2-year one, which may be associated with the bankfull discharge). At locations where the historical streamflow records are absent or very limited, the threshold can be estimated with regionally-derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the function form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction or underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missing alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function, which penalises the overpredictions more. The estimates by models (feedforward neural networks) with increasing degree of asymmetry are compared with those of a traditional, symmetrically-trained network, in a rigorous cross-validation experiment referred to a database of catchments covering the country of Italy. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors, if compared to the use of the traditional square errors. Of course such reduction is at the expense of increasing underestimation errors, but the overall accuracy is still acceptable and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.

  4. Estimate error of frequency-dependent Q introduced by linear regression and its nonlinear implementation

    NASA Astrophysics Data System (ADS)

    Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing

    2016-02-01

    The spectral ratio method (SRM) is widely used to estimate quality factor Q via the linear regression of seismic attenuation under the assumption of a constant Q. However, an estimation error is introduced when this assumption is violated. For the frequency-dependent Q described by a power-law function, we derived the analytical expression of the estimate error as a function of the power-law exponent γ and the ratio of the bandwidth to the central frequency σ. Based on the theoretical analysis, we found that the estimate errors are mainly dominated by the exponent γ and less affected by the ratio σ. This phenomenon implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we proposed a two-parameter regression method to estimate the frequency-dependent Q from the nonlinear seismic attenuation. The proposed method was tested using the direct waves acquired by a near-surface cross-hole survey, and its reliability was evaluated in comparison with the result of SRM.
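
    A minimal sketch of the constant-Q spectral ratio method that the paper analyses, assuming the usual relation ln[A2(f)/A1(f)] = -π f Δt / Q + const so that Q follows from the slope of a linear fit. The synthetic spectra and travel-time difference below are hypothetical.

```python
import numpy as np

def q_from_spectral_ratio(freqs, amp1, amp2, dt):
    """Spectral ratio method: fit ln(A2/A1) vs frequency and convert the slope
    to a constant quality factor Q, assuming ln(A2/A1) = -pi * f * dt / Q + const."""
    slope, _ = np.polyfit(freqs, np.log(amp2 / amp1), 1)
    return -np.pi * dt / slope

# Synthetic example: true Q = 50, travel-time difference dt = 0.1 s.
freqs = np.linspace(10.0, 80.0, 50)          # Hz
dt, Q_true = 0.1, 50.0
amp1 = np.ones_like(freqs)
amp2 = amp1 * np.exp(-np.pi * freqs * dt / Q_true)
print(f"recovered Q = {q_from_spectral_ratio(freqs, amp1, amp2, dt):.1f}")
```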

  5. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    NASA Astrophysics Data System (ADS)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semiconductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969); doi:10.1103/PhysRev.182.280] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
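
    A minimal sketch of the basic Green-Kubo estimate: integrate the autocorrelation function of a toy flux signal and use the scatter across parallel replicas as an error estimate. It does not reproduce the paper's on-the-fly stationarity test or extended error bounds, and all parameters are hypothetical.

```python
import numpy as np

def autocorr(x, max_lag):
    """Biased sample autocorrelation function of a signal."""
    x = x - x.mean()
    return np.array([np.mean(x[:x.size - k] * x[k:]) for k in range(max_lag)])

rng = np.random.default_rng(2)
dt, tau, n_steps, n_replicas = 0.01, 0.5, 20_000, 8

estimates = []
for _ in range(n_replicas):
    # Toy "flux" signal: an Ornstein-Uhlenbeck-like process with correlation time tau.
    J = np.zeros(n_steps)
    for i in range(1, n_steps):
        J[i] = J[i - 1] * (1.0 - dt / tau) + rng.normal(0.0, np.sqrt(dt))
    acf = autocorr(J, max_lag=int(5 * tau / dt))
    estimates.append(np.sum(acf) * dt)            # Green-Kubo: integral of the ACF

estimates = np.array(estimates)
print(f"transport coefficient ~ {estimates.mean():.3f} +/- "
      f"{estimates.std(ddof=1) / np.sqrt(n_replicas):.3f} (replica standard error)")
```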

  6. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis.

    PubMed

    Jones, Reese E; Mandadapu, Kranthi K

    2012-04-21

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.

  7. Mapping the Origins of Time: Scalar Errors in Infant Time Estimation

    ERIC Educational Resources Information Center

    Addyman, Caspar; Rocha, Sinead; Mareschal, Denis

    2014-01-01

    Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…

  8. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  9. Application of a posteriori error estimates for the steady Stokes-Brinkman equation in 2D

    NASA Astrophysics Data System (ADS)

    Hasal, Martin; Burda, Pavel

    2016-06-01

    The paper deals with the Stokes-Brinkman equation. We investigate a posteriori error estimates for the Stokes-Brinkman equation on two-dimensional polygonal domains. Special attention is paid to the value of the hydraulic conductivity coefficients. We present numerical results for an incompressible flow problem in a domain with corners.

  10. A Derivation of the Unbiased Standard Error of Estimate: The General Case.

    ERIC Educational Resources Information Center

    O'Brien, Francis J., Jr.

    This paper is part of a series of applied statistics monographs intended to provide supplementary reading for applied statistics students. In the present paper, derivations of the unbiased standard error of estimate for both the raw score and standard score linear models are presented. The derivations for raw score linear models are presented in…

  11. Approximation and error estimation in high dimensional space for stochastic collocation methods on arbitrary sparse samples

    SciTech Connect

    Archibald, Richard K; Deiterding, Ralf; Hauck, Cory D; Jakeman, John D; Xiu, Dongbin

    2012-01-01

    We have developed a fast method that can capture piecewise smooth functions in high dimensions with high order and low computational cost. This method can be used for both approximation and error estimation of stochastic simulations where the computations can either be guided or come from a legacy database.

  12. Error Estimation Techniques to Refine Overlapping Aerial Image Mosaic Processes via Detected Parameters

    ERIC Educational Resources Information Center

    Bond, William Glenn

    2012-01-01

    In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…

  13. Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method

    ERIC Educational Resources Information Center

    Liu, Yuming; Schulz, E. Matthew; Yu, Lei

    2008-01-01

    A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…

  14. A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings

    ERIC Educational Resources Information Center

    Lee, Guemin; Lewis, Daniel M.

    2008-01-01

    The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error…

  15. Estimation of chromatic errors from broadband images for high contrast imaging

    NASA Astrophysics Data System (ADS)

    Sirbu, Dan; Belikov, Ruslan

    2015-09-01

    Usage of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.

  16. Interval Estimation for True Raw and Scale Scores under the Binomial Error Model

    ERIC Educational Resources Information Center

    Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.

    2006-01-01

    Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…

  17. Comparison of Parametric and Nonparametric Bootstrap Methods for Estimating Random Error in Equipercentile Equating

    ERIC Educational Resources Information Center

    Cui, Zhongmin; Kolen, Michael J.

    2008-01-01

    This article considers two methods of estimating standard errors of equipercentile equating: the parametric bootstrap method and the nonparametric bootstrap method. Using a simulation study, these two methods are compared under three sample sizes (300, 1,000, and 3,000), for two test content areas (the Iowa Tests of Basic Skills Maps and Diagrams…

  18. Reduction of systematic errors in regional climate simulations of the summer monsoon over East Asia and the western North Pacific by applying the spectral nudging technique

    NASA Astrophysics Data System (ADS)

    Cha, Dong-Hyun; Lee, Dong-Kyou

    2009-07-01

    In this study, the systematic errors in regional climate simulation of the 28-year summer monsoon over East Asia and the western North Pacific (WNP) and the impact of the spectral nudging technique (SNT) on the reduction of the systematic errors are investigated. The experiment in which the SNT is not applied (the CTL run) has large systematic errors in seasonal mean climatology such as overestimated precipitation, weakened subtropical high, and enhanced low-level southwesterly over the subtropical WNP, while in the experiment using the SNT (the SP run) considerably smaller systematic errors result. In the CTL run, the systematic error of simulated precipitation over the ocean increases significantly after mid-June, since the CTL run cannot reproduce the principal intraseasonal variation of summer monsoon precipitation. The SP run can appropriately capture the spatial distribution as well as temporal variation of the principal empirical orthogonal function mode, and therefore, the systematic error over the ocean does not increase after mid-June. The systematic error of simulated precipitation over the subtropical WNP in the CTL run results from the unreasonable positive feedback between precipitation and surface latent heat flux induced by the warm sea surface temperature anomaly. Since the SNT plays a role in decreasing the positive feedback by improving monsoon circulations, the SP run can considerably reduce the systematic errors of simulated precipitation as well as atmospheric fields over the subtropical WNP region.
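
    A minimal one-dimensional sketch of the spectral nudging idea, assuming that only the lowest wavenumbers of the model state are relaxed toward the driving field while small scales are left free. The fields, wavenumber cutoff, and relaxation coefficient below are hypothetical and stand in for the regional-model implementation.

```python
import numpy as np

def spectral_nudge(model, reference, n_waves=3, alpha=0.5):
    """Relax only the largest-scale Fourier components (wavenumbers <= n_waves)
    of a periodic 1-D model field toward the driving reference field."""
    fm, fr = np.fft.rfft(model), np.fft.rfft(reference)
    low = np.arange(fm.size) <= n_waves
    fm[low] += alpha * (fr[low] - fm[low])
    return np.fft.irfft(fm, n=model.size)

# Toy fields: the "model" drifts at the largest scale but adds its own small-scale detail.
x = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
reference = np.sin(x) + 0.5 * np.cos(2.0 * x)
model = 0.6 * np.sin(x + 0.4) + 0.5 * np.cos(2.0 * x) + 0.2 * np.sin(15.0 * x)

nudged = spectral_nudge(model, reference)
rms = lambda f: np.sqrt(np.mean((f - reference) ** 2))
print(f"RMS difference from the driving field: before {rms(model):.3f}, after {rms(nudged):.3f}")
```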

  19. Procedures for dealing with certain types of noise and systematic errors common to many Hadamard transform optical systems

    NASA Technical Reports Server (NTRS)

    Harwit, M.

    1977-01-01

    Sources of noise and error-correcting procedures characteristic of Hadamard transform optical systems were investigated. Reduction of spectral noise due to noise spikes in the data, the effect of random errors, the relative performance of Fourier and Hadamard transform spectrometers operated under identical detector-noise-limited conditions, and systematic means for dealing with mask defects are among the topics discussed. The distortion in Hadamard transform optical instruments caused by moving masks, incorrect mask alignment, missing measurements, and diffraction is analyzed, and techniques for reducing or eliminating this distortion are described.

  20. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    DOE PAGES

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; et al

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the

  1. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    SciTech Connect

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-30

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half

  2. Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty

    NASA Astrophysics Data System (ADS)

    Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.

    2015-04-01

    Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere

  3. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2016-06-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear at the station-sparse area of northern and western China, with the maximum value exceeding 2.0 K2, while small sampling error variances are found at the station-dense area of southern and eastern China, with most grid values being less than 0.05 K2. In general, negative temperature anomalies existed in each month prior to the 1980s, and a warming began thereafter, accelerating in the early and mid-1990s. The increasing trend in the SAT series was observed for each month of the year with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of the persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.

  4. Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel

    2010-09-01

    A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.

  5. Estimated Cost Savings from Reducing Errors in the Preparation of Sterile Doses of Medications

    PubMed Central

    Schneider, Philip J.

    2014-01-01

    Background: Preventing intravenous (IV) preparation errors will improve patient safety and reduce costs by an unknown amount. Objective: To estimate the financial benefit of robotic preparation of sterile medication doses compared to traditional manual preparation techniques. Methods: A probability pathway model based on published rates of errors in the preparation of sterile doses of medications was developed. Literature reports of adverse events were used to project the array of medical outcomes that might result from these errors. These parameters were used as inputs to a customized simulation model that generated a distribution of possible outcomes, their probability, and associated costs. Results: By varying the important parameters across ranges found in published studies, the simulation model produced a range of outcomes for all likely possibilities. Thus it provided a reliable projection of the errors avoided and the cost savings of an automated sterile preparation technology. The average of 1,000 simulations resulted in the prevention of 5,420 medication errors and associated savings of $288,350 per year. The simulation results can be narrowed to specific scenarios by fixing model parameters that are known and allowing the unknown parameters to range across values found in previously published studies. Conclusions: The use of a robotic device can reduce health care costs by preventing errors that can cause adverse drug events. PMID:25477598

  6. Estimates of Mode-S EHS aircraft-derived wind observation errors using triple collocation

    NASA Astrophysics Data System (ADS)

    de Haan, Siebren

    2016-08-01

    Information on the accuracy of meteorological observations is essential to assess the applicability of the measurements. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is by comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from 2 m temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple-collocation method to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated observation errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained utilizing information from air traffic control surveillance radar with Selective Mode Enhanced Surveillance capabilities (Mode-S EHS). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind (zonal and meridional) observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
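
    A minimal sketch of the classical triple-collocation estimate described above, assuming the three systems observe the same quantity with mutually uncorrelated, unbiased errors. The synthetic wind values and error levels below are hypothetical.

```python
import numpy as np

def triple_collocation(x1, x2, x3):
    """Classical triple-collocation error-variance estimates, assuming the three
    systems observe the same truth with mutually uncorrelated, unbiased errors."""
    d12, d13, d23 = x1 - x2, x1 - x3, x2 - x3
    v1 = np.mean(d12 * d13)     # = <(x1 - x2)(x1 - x3)>
    v2 = np.mean(-d12 * d23)    # = <(x2 - x1)(x2 - x3)>
    v3 = np.mean(d13 * d23)     # = <(x3 - x1)(x3 - x2)>
    return np.sqrt([v1, v2, v3])

# Synthetic wind component observed by three systems with different error levels.
rng = np.random.default_rng(3)
truth = rng.normal(0.0, 5.0, 100_000)
obs = [truth + rng.normal(0.0, s, truth.size) for s in (1.2, 0.8, 1.5)]
print("estimated error standard deviations:", np.round(triple_collocation(*obs), 2))
```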

  7. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726

  8. Simple Monte Carlo methods to estimate the spectra evaluation error in differential-optical-absorption spectroscopy.

    PubMed

    Hausmann, M; Brandenburger, U; Brauers, T; Dorn, H P

    1999-01-20

    Differential-optical-absorption spectroscopy (DOAS) permits the sensitive measurement of concentrations of trace gases in the atmosphere. DOAS is a technique of well-defined accuracy; however, the calculation of a statistically sound measurement precision is still an unsolved problem. Usually one evaluates DOAS spectra by performing least-squares fits of reference absorption spectra to the measured atmospheric absorption spectra. Inasmuch as the absorbance from atmospheric trace gases is usually very weak, with optical densities in the range from 10^-5 to 10^-3, interference caused by the occurrence of nonreproducible spectral artifacts often determines the detection limit and the measurement precision. These spectral artifacts bias the least-squares fitting result in two respects. First, spectral artifacts to some extent are falsely interpreted as real absorption, and second, spectral artifacts add nonstatistical noise to spectral residuals, which results in a significant misestimation of the least-squares fitting error. We introduce two new approaches to investigate the evaluation errors of DOAS spectra accurately. The first method, residual inspection by cyclic displacement, estimates the effect of false interpretation of the artifact structures. The second method applies a statistical bootstrap algorithm to estimate properly the error of fitting, even in cases when the condition of random and independent scatter of the residual signal is not fulfilled. Evaluation of simulated atmospheric measurement spectra shows that a combination of the results of both methods yields a good estimate of the spectra evaluation error to within an uncertainty of ~10%.
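
    A minimal sketch of a residual-bootstrap error estimate for a linear least-squares spectral fit. Note that simple independent resampling of residuals assumes they scatter randomly, which is precisely the condition the paper's bootstrap variant is designed to relax; the spectra, design matrix, and coefficients below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "measured spectrum": a reference absorption structure scaled by a
# fit coefficient, plus a linear baseline and noise (all values hypothetical).
wl = np.linspace(0.0, 1.0, 200)
reference = np.exp(-((wl - 0.5) / 0.05) ** 2)
design = np.column_stack([reference, np.ones_like(wl), wl])
true_coeffs = np.array([3.0e-3, 0.1, -0.05])
measured = design @ true_coeffs + rng.normal(0.0, 5.0e-4, wl.size)

# Ordinary least-squares fit and its residuals.
coeffs, *_ = np.linalg.lstsq(design, measured, rcond=None)
residuals = measured - design @ coeffs

# Bootstrap: refit on synthetic spectra built from the fit plus resampled residuals.
boot = []
for _ in range(1000):
    resampled = rng.choice(residuals, size=residuals.size, replace=True)
    c, *_ = np.linalg.lstsq(design, design @ coeffs + resampled, rcond=None)
    boot.append(c[0])

print(f"absorption coefficient: {coeffs[0]:.2e} +/- {np.std(boot, ddof=1):.2e} (bootstrap)")
```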

  9. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems.

    PubMed

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches.

  10. Wrinkles in the rare biosphere: Pyrosequencing errors can lead to artificial inflation of diversity estimates

    SciTech Connect

    Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip

    2009-08-01

    Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.

  11. Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata

    USGS Publications Warehouse

    Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.

    2012-01-01

    Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).

  12. A variational method for finite element stress recovery and error estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Macy, S. C.

    1993-01-01

    A variational method for obtaining smoothed stresses from a finite element derived nonsmooth stress field is presented. The method is based on minimizing a functional involving discrete least-squares error plus a penalty constraint that ensures smoothness of the stress field. An equivalent accuracy criterion is developed for the smoothing analysis which results in a C^1-continuous smoothed stress field possessing the same order of accuracy as that found at the superconvergent optimal stress points of the original finite element analysis. Application of the smoothing analysis to residual error estimation is also demonstrated.

  13. First order error propagation of the procrustes method for 3D attitude estimation.

    PubMed

    Dorst, Leo

    2005-02-01

    The well-known Procrustes method determines the optimal rigid body motion that registers two point clouds by minimizing the square distances of the residuals. In this paper, we perform the first order error analysis of this method for the 3D case, fully specifying how directional noise in the point clouds affects the estimated parameters of the rigid body motion. These results are much more specific than the error bounds which have been established in numerical analysis. We provide an intuitive understanding of the outcome to facilitate direct use in applications.
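
    A minimal sketch of the Procrustes (Kabsch/SVD) rigid registration whose error propagation the paper analyses; the point clouds, rotation, translation, and noise level below are synthetic.

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Optimal rotation R and translation t minimizing sum ||R @ P_i + t - Q_i||^2
    (Kabsch/SVD solution of the orthogonal Procrustes problem)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic registration problem with a small amount of noise on the target cloud.
rng = np.random.default_rng(5)
P = rng.normal(size=(50, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5]) + rng.normal(0.0, 0.01, P.shape)

R_est, t_est = procrustes_rigid(P, Q)
cos_angle = np.clip((np.trace(R_est @ R_true.T) - 1.0) / 2.0, -1.0, 1.0)
print("rotation error (deg):", np.degrees(np.arccos(cos_angle)))
```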

  14. Systematic Errors that are Due to the Monochromatic-Equivalent Radiative Transfer Approximation in Thermal Emission Problems.

    PubMed

    Turner, D S

    2000-11-01

    An underlying assumption of data assimilation models is that the radiative transfer model used by them can simulate observed radiances with zero bias and small error. For practical reasons a fast parameterized radiative transfer model is used instead of a highly accurate line-by-line model. These fast models usually replace the spectral integration of the product of the transmittance and the Planck function with a monochromatic equivalent, namely, the product of a spectrally averaged transmittance and a spectrally averaged Planck function. The error of using this equivalent form is commonly assumed to be negligible. However, this error is not necessarily negligible and introduces a systematic height-dependent bias to the assimilation scheme. Although the bias could be corrected by a separate bias correction scheme, it is more effective to correct its source, the fast radiative transfer model. I examine the magnitude of error when the monochromatic-equivalent approach is used and demonstrate how a fast parameterized radiative model with Planck-weighted mean transmittances can effectively reduce, if not eliminate, these errors at source. I focus on channel 12 of the High-Resolution Infrared Radiation Sounder onboard the National Oceanic and Atmospheric Administration (NOAA)-14 satellite, which, among all the channels of this instrument, displays the largest error.
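
    A minimal numerical sketch of the issue: the band-integrated product of transmittance and the Planck function is compared with the monochromatic-equivalent product of band averages and with a Planck-weighted mean transmittance. The band limits, temperature, and transmittance spectrum are hypothetical and not those of the HIRS channel discussed; in this single-layer form the Planck-weighted product recovers the exact band radiance by construction.

```python
import numpy as np

def integrate(y, x):
    """Trapezoidal integration, kept explicit for clarity."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def planck(nu_cm, temp_k):
    """Planck radiance vs wavenumber (cm^-1), in mW / (m^2 sr cm^-1)."""
    c1, c2 = 1.191042e-5, 1.4387752   # radiation constants
    return c1 * nu_cm**3 / (np.exp(c2 * nu_cm / temp_k) - 1.0)

# Hypothetical channel: transmittance varies strongly across the band.
nu = np.linspace(650.0, 700.0, 2001)              # cm^-1, hypothetical band limits
tau = 0.2 + 0.6 * (nu - nu.min()) / (nu.max() - nu.min())
B = planck(nu, 250.0)
width = nu.max() - nu.min()

exact = integrate(tau * B, nu) / width                # spectrally integrated product
tau_bar = integrate(tau, nu) / width                  # band-mean transmittance
B_bar = integrate(B, nu) / width                      # band-mean Planck radiance
tau_pw = integrate(tau * B, nu) / integrate(B, nu)    # Planck-weighted transmittance

print(f"exact band radiance          : {exact:.4f}")
print(f"tau_bar * B_bar (mono-equiv.): {tau_bar * B_bar:.4f}")
print(f"tau_pw  * B_bar              : {tau_pw * B_bar:.4f}")
```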

  15. Macroscale water fluxes 1. Quantifying errors in the estimation of basin mean precipitation

    NASA Astrophysics Data System (ADS)

    Milly, P. C. D.; Dunne, K. A.

    2002-10-01

    Developments in analysis and modeling of continental water and energy balances are hindered by the limited availability and quality of observational data. The lack of information on error characteristics of basin water supply is an especially serious limitation. Here we describe the development and testing of methods for quantifying several errors in basin mean precipitation, both in the long-term mean and in the monthly and annual anomalies. To quantify errors in the long-term mean, two error indices are developed and tested with positive results. The first provides an estimate of the variance of the spatial sampling error of long-term basin mean precipitation obtained from a gauge network, in the absence of orographic effects; this estimate is obtained by use only of the gauge records. The second gives a simple estimate of the basin mean orographic bias as a function of the topographic structure of the basin and the locations of gauges therein. Neither index requires restrictive statistical assumptions (such as spatial homogeneity) about the precipitation process. Adjustments of precipitation for gauge bias and estimates of the adjustment errors are made by applying results of a previous study. Additionally, standard correlation-based methods are applied for the quantification of spatial sampling errors in the estimation of monthly and annual values of basin mean precipitation. These methods also perform well, as indicated by network subsampling tests in densely gauged basins. The methods are developed and applied with data for 175 large (median area of 51,000 km2) river basins of the world for which contemporaneous, continuous (missing fewer than 2% of data values), long-term (median record length of 54 years) river discharge records are also available. Spatial coverage of the resulting river basin data set is greatest in the middle latitudes, though many basins are located in the tropics and the high latitudes, and the data set spans the major climatic and

  16. Matching post-Newtonian and numerical relativity waveforms: Systematic errors and a new phenomenological model for nonprecessing black hole binaries

    SciTech Connect

    Santamaria, L.; Ohme, F.; Dorband, N.; Moesta, P.; Robinson, E. L.; Krishnan, B.; Ajith, P.; Bruegmann, B.; Hannam, M.; Husa, S.; Pollney, D.; Reisswig, C.; Seiler, J.

    2010-09-15

    We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical-relativity-based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criterion based on minimizing these errors; we find that the dominant errors are those in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational-wave searches.

  17. Estimates of ocean forecast error covariance derived from Hessian Singular Vectors

    NASA Astrophysics Data System (ADS)

    Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.

    2015-05-01

    Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual

  18. Systematic errors in respiratory gating due to intrafraction deformations of the liver

    SciTech Connect

    Siebenthal, Martin von; Szekely, Gabor; Lomax, Antony J.; Cattin, Philippe C.

    2007-09-15

    This article shows the limitations of respiratory gating due to intrafraction deformations of the right liver lobe. The variability of organ shape and motion over tens of minutes was taken into account for this evaluation, which closes the gap between short-term analysis of a few regular cycles, as it is possible with 4DCT, and long-term analysis of interfraction motion. Time resolved MR volumes (4D MR sequences) were reconstructed for 12 volunteers and subsequent non-rigid registration provided estimates of the 3D trajectories of points within the liver over time. The full motion during free breathing and its distribution over the liver were quantified and respiratory gating was simulated to determine the gating accuracy for different gating signals, duty cycles, and different intervals between patient setup and treatment. Gating effectively compensated for the respiratory motion within short sequences (3 min), but deformations, mainly in the anterior inferior part (Couinaud segments IVb and V), led to systematic deviations from the setup position of more than 5 mm in 7 of 12 subjects after 20 min. We conclude that measurements over a few breathing cycles should not be used as a proof of accurate reproducibility of motion, not even within the same fraction, if it is longer than a few minutes. Although the diaphragm shows the largest magnitude of motion, it should not be used to assess the gating accuracy over the entire liver because the reproducibility is typically much more limited in inferior parts. Simple gating signals, such as the trajectory of skin motion, can detect the exhalation phase, but do not allow for an absolute localization of the complete liver over longer periods because the drift of these signals does not necessarily correlate with the internal drift.

  19. DTI quality control assessment via error estimation from Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.

    2013-03-01

    Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.

  20. Mass load estimation errors utilizing grab sampling strategies in a karst watershed

    USGS Publications Warehouse

    Fogle, A.W.; Taraba, J.L.; Dinger, J.S.

    2003-01-01

    Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
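
    A minimal sketch of this kind of error experiment under invented conditions: an hourly synthetic flow and concentration record (with a diurnal concentration cycle) stands in for the study watershed, and a simple grab-concentration-times-continuous-flow estimator is compared with the load computed from the full record for different sampling intervals and times of day.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic hourly record for one year (illustrative only; not the study watershed).
t = np.arange(365 * 24)                                         # hours
flow = 0.8 + 0.4 * np.sin(2 * np.pi * t / (365 * 24)) \
       + rng.gamma(shape=1.5, scale=0.1, size=t.size)           # m^3/s
conc = 12.0 + 3.0 * np.sin(2 * np.pi * (t % 24) / 24) \
       + rng.normal(0.0, 0.5, t.size)                            # mg/L, diurnal cycle

dt = 3600.0                                                      # seconds per hourly step
true_load = np.sum(conc * flow) * dt * 1e-3                      # mg/L * m^3/s * s = g -> kg

def grab_estimate(interval_days, hour_of_day):
    """Load estimate from grab-sample concentrations combined with continuous flow.

    The mean of the sampled concentrations is applied to the full continuous flow
    record (one simple estimator; the study compares several variants).
    """
    sample_idx = np.arange(hour_of_day, t.size, interval_days * 24)
    c_bar = conc[sample_idx].mean()
    return c_bar * np.sum(flow) * dt * 1e-3

for days in (7, 14, 30):
    for hour in (6, 12, 18):
        est = grab_estimate(days, hour)
        err = 100.0 * (est - true_load) / true_load
        print(f"every {days:2d} d at {hour:02d}:00 -> load error {err:+5.1f}%")
```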

  1. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance underestimation by conventional Cox-based standard errors. However, the two-step bootstrap method overestimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
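
    The cluster resampling step is easy to show in isolation. The sketch below uses synthetic clustered data and a stand-in least-squares slope in place of a Cox fit (which would require a survival library and is outside this sketch); the point is only that resampling whole clusters recovers the variance that a naive independence-based SE understates for a cluster-level covariate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic clustered data: a cluster-level covariate x and an outcome y with a
# latent cluster random effect (stand-in for clustered event times).
n_clusters, cluster_size = 40, 10
cluster_id = np.repeat(np.arange(n_clusters), cluster_size)
x = np.repeat(rng.normal(size=n_clusters), cluster_size)               # cluster-level covariate
u = np.repeat(rng.normal(scale=1.0, size=n_clusters), cluster_size)    # latent random effect
y = 0.5 * x + u + rng.normal(scale=0.5, size=x.size)

def fit_statistic(xs, ys):
    """Stand-in for the model fit; with real data this would be a Cox regression."""
    return np.polyfit(xs, ys, 1)[0]   # slope estimate

def cluster_bootstrap_se(n_boot=500):
    """Resample clusters with replacement, refit, and return the bootstrap SE of the slope."""
    clusters = np.unique(cluster_id)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(clusters, size=clusters.size, replace=True)
        idx = np.concatenate([np.flatnonzero(cluster_id == c) for c in chosen])
        stats[b] = fit_statistic(x[idx], y[idx])
    return stats.std(ddof=1)

# Naive OLS standard error that ignores the clustering.
resid = y - np.polyval(np.polyfit(x, y, 1), x)
naive_se = np.sqrt(np.sum(resid**2) / (x.size - 2) / np.sum((x - x.mean())**2))

print(f"naive (independence) SE : {naive_se:.3f}")
print(f"cluster-bootstrap SE    : {cluster_bootstrap_se():.3f}")
```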

  2. On the error in crop acreage estimation using satellite (LANDSAT) data

    NASA Technical Reports Server (NTRS)

    Chhikara, R. (Principal Investigator)

    1983-01-01

    The problem of crop acreage estimation using satellite data is discussed. Bias and variance of a crop proportion estimate in an area segment obtained from the classification of its multispectral sensor data are derived as functions of the means, variances, and covariance of error rates. The linear discriminant analysis and the class proportion estimation for the two class case are extended to include a third class of measurement units, where these units are mixed on ground. Special attention is given to the investigation of mislabeling in training samples and its effect on crop proportion estimation. It is shown that the bias and variance of the estimate of a specific crop acreage proportion increase as the disparity in mislabeling rates between two classes increases. Some interaction is shown to take place, causing the bias and the variance to decrease at first and then to increase, as the mixed unit class varies in size from 0 to 50 percent of the total area segment.

  3. Real-Time Baseline Error Estimation and Correction for GNSS/Strong Motion Seismometer Integration

    NASA Astrophysics Data System (ADS)

    Li, C. Y. N.; Groves, P. D.; Ziebart, M. K.

    2014-12-01

    Accurate and rapid estimation of permanent surface displacement is required immediately after a slip event for earthquake monitoring or tsunami early warning. It is difficult to achieve the necessary accuracy and precision at high and low frequencies using GNSS or seismometry alone. GNSS and seismic sensors can be integrated to overcome the limitations of each. Kalman filter algorithms with displacement and velocity states have been developed to combine GNSS and accelerometer observations to obtain the optimal displacement solutions. However, the sawtooth-like phenomena caused by the bias or tilting of the sensor decrease the accuracy of the displacement estimates. A three-dimensional Kalman filter algorithm with an additional baseline error state has been developed. An experiment with both a GNSS receiver and a strong motion seismometer mounted on a movable platform and subjected to known displacements was carried out. The results clearly show that the additional baseline error state enables the Kalman filter to estimate the instrument's sensor bias and tilt effects and correct the state estimates in real time. Furthermore, the proposed Kalman filter algorithm has been validated with data sets from the 2010 Mw 7.2 El Mayor-Cucapah Earthquake. The results indicate that the additional baseline error state can not only eliminate the linear and quadratic drifts but also reduce the sawtooth-like effects from the displacement solutions. The conventional zero-mean baseline-corrected results cannot show the permanent displacements after an earthquake; the two-state Kalman filter can only provide stable and optimal solutions if the strong motion seismometer has not been moved or tilted by the earthquake. Yet the proposed Kalman filter can achieve precise and accurate displacements by estimating and correcting for the baseline error at each epoch. The integration filters out noise-like distortions and thus improves the real-time detection and measurement capability
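
    The idea of augmenting a displacement/velocity filter with a baseline-error state can be sketched in one dimension. Everything below (signal shapes, noise levels, process-noise settings, a baseline step mimicking sensor tilt) is an invented illustration, not the authors' three-dimensional implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

dt, n = 0.01, 6000                                 # 100 Hz accelerometer, 60 s record
t = np.arange(n) * dt

# Truth: a smooth permanent displacement plus an accelerometer baseline error
# (e.g. tilt) that switches on at the same time.
true_disp = 0.05 * 0.5 * (1.0 + np.tanh((t - 20.0) / 3.0))   # m
true_vel = np.gradient(true_disp, dt)
true_acc = np.gradient(true_vel, dt)
baseline = 0.02 * (t > 20.0)                                  # m/s^2 accelerometer bias

acc_meas = true_acc + baseline + rng.normal(0.0, 0.01, n)     # accelerometer (filter input)
gnss_meas = true_disp + rng.normal(0.0, 0.005, n)             # GNSS displacement
gnss_every = 100                                              # GNSS updates at 1 Hz

# State x = [displacement, velocity, baseline error]; the accelerometer drives the
# prediction and the baseline state is subtracted from it.
F = np.array([[1.0, dt, -0.5 * dt**2],
              [0.0, 1.0, -dt],
              [0.0, 0.0, 1.0]])
G = np.array([0.5 * dt**2, dt, 0.0])
H = np.array([[1.0, 0.0, 0.0]])
Q = np.diag([1e-8, 1e-6, 1e-7])                               # illustrative process noise
R = np.array([[0.005**2]])                                    # GNSS noise variance

x, P = np.zeros(3), np.eye(3) * 1e-2
est = np.empty((n, 3))
for k in range(n):
    x = F @ x + G * acc_meas[k]                               # predict with accelerometer input
    P = F @ P @ F.T + Q
    if k % gnss_every == 0:                                   # GNSS measurement update
        innovation = gnss_meas[k] - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ innovation
        P = (np.eye(3) - K @ H) @ P
    est[k] = x

print(f"final displacement estimate : {est[-1, 0]:.3f} m (truth {true_disp[-1]:.3f} m)")
print(f"final baseline estimate     : {est[-1, 2]:.3f} m/s^2 (truth {baseline[-1]:.3f} m/s^2)")
```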

  4. Robust Estimator for Non-Line-of-Sight Error Mitigation in Indoor Localization

    NASA Astrophysics Data System (ADS)

    Casas, R.; Marco, A.; Guerrero, J. J.; Falcó, J.

    2006-12-01

    Indoor localization systems are undoubtedly of interest in many application fields. Like outdoor systems, they suffer from non-line-of-sight (NLOS) errors which hinder their robustness and accuracy. Though many ad hoc techniques have been developed to deal with this problem, unfortunately most of them are not applicable indoors due to the high variability of the environment (movement of furniture and of people, etc.). In this paper, we describe the use of robust regression techniques to detect and reject NLOS measures in a location estimation using multilateration. We show how the least-median-of-squares technique can be used to overcome the effects of NLOS errors, even in environments with little infrastructure, and validate its suitability by comparing it to other methods described in the bibliography. We obtained remarkable results when using it in a real indoor positioning system that works with Bluetooth and ultrasound (BLUPS), even when nearly half the measures suffered from NLOS or other coarse errors.
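
    A least-median-of-squares position fix can be sketched directly. The beacon layout, noise levels, and NLOS offsets below are invented; for the six beacons here every minimal subset is enumerated exhaustively, whereas larger problems would use random subset sampling.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)

# Illustrative 2-D setup: fixed beacons, a true tag position, and range measurements,
# two of which are corrupted by large positive NLOS errors.
beacons = np.array([[0, 0], [10, 0], [10, 10], [0, 10], [5, 12], [-2, 5]], float)
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0, 0.05, len(beacons))
ranges[1] += 4.0            # NLOS outlier
ranges[4] += 6.0            # NLOS outlier

def trilaterate(bcn, rng_meas):
    """Linearized least-squares position from >= 3 beacons (difference-of-squares form)."""
    A = 2 * (bcn[1:] - bcn[0])
    b = (rng_meas[0]**2 - rng_meas[1:]**2
         + np.sum(bcn[1:]**2, axis=1) - np.sum(bcn[0]**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

def lmeds_position(bcn, rng_meas):
    """Least median of squares: fit minimal subsets, keep the fit whose squared
    residuals over ALL measurements have the smallest median."""
    best_pos, best_med = None, np.inf
    for subset in combinations(range(len(bcn)), 3):
        idx = list(subset)
        pos = trilaterate(bcn[idx], rng_meas[idx])
        residuals = np.linalg.norm(bcn - pos, axis=1) - rng_meas
        med = np.median(residuals**2)
        if med < best_med:
            best_med, best_pos = med, pos
    return best_pos

ls_all = trilaterate(beacons, ranges)          # ordinary LS, contaminated by NLOS
lmeds = lmeds_position(beacons, ranges)
print(f"true position : {true_pos}")
print(f"LS (all)      : {ls_all.round(2)}, error {np.linalg.norm(ls_all - true_pos):.2f} m")
print(f"LMedS         : {lmeds.round(2)}, error {np.linalg.norm(lmeds - true_pos):.2f} m")
```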

  5. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    USGS Publications Warehouse

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  6. Multiscale error analysis, correction, and predictive uncertainty estimation in a flood forecasting system

    NASA Astrophysics Data System (ADS)

    Bogner, K.; Pappenberger, F.

    2011-07-01

    River discharge predictions often show errors that degrade the quality of forecasts. Three different methods of error correction are compared, namely, an autoregressive model with and without exogenous input (ARX and AR, respectively), and a method based on wavelet transforms. For the wavelet method, a Vector-Autoregressive model with exogenous input (VARX) is simultaneously fitted for the different levels of wavelet decomposition; after predicting the next time steps for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The error correction methods are combined with the Hydrological Uncertainty Processor (HUP) in order to estimate the predictive conditional distribution. For three stations along the Danube catchment, and using output from the European Flood Alert System (EFAS), we demonstrate that the method based on wavelets outperforms simpler methods and uncorrected predictions with respect to mean absolute error, Nash-Sutcliffe efficiency coefficient (and its decomposed performance criteria), informativeness score, and in particular forecast reliability. The wavelet approach efficiently accounts for forecast errors with scale properties of unknown source and statistical structure.

  7. Estimation of random errors for lidar based on noise scale factor

    NASA Astrophysics Data System (ADS)

    Wang, Huan-Xue; Liu, Jian-Guo; Zhang, Tian-Shu

    2015-08-01

    Estimation of random errors, which are due to shot noise of photomultiplier tube (PMT) or avalanche photodiode (APD) detectors, is essential in lidar observation. Due to the Poisson distribution of incident electrons, the standard deviation of the signal is proportional to the square root of its mean value. Based on this relationship, a noise scale factor (NSF) is introduced into the estimation, which requires only a single data sample. This method overcomes the disturbance caused by atmospheric fluctuations during the calculation of random errors. The results show that this method is feasible and reliable. Project supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB05040300) and the National Natural Science Foundation of China (Grant No. 41205119).
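
    One simple way to realise the single-profile idea is sketched below; the profile shape, gain factor, and the use of a running mean to isolate the high-frequency residual are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic single lidar profile: expected counts decay with range; shot noise has a
# standard deviation equal to NSF * sqrt(expected counts).
nbins = 2000
bins = np.arange(nbins)
mean_signal = 5000.0 * np.exp(-bins / 400.0) + 20.0
true_nsf = 1.8
profile = mean_signal + true_nsf * np.sqrt(mean_signal) * rng.normal(size=nbins)

# Estimate the NSF from this one profile: remove the slowly varying component with a
# running mean, then scale the residuals by sqrt(local mean).
window = 25
smooth = np.convolve(profile, np.ones(window) / window, mode="same")
smooth = np.clip(smooth, 1.0, None)
scaled_residual = (profile - smooth)[window:-window] / np.sqrt(smooth[window:-window])
nsf_est = scaled_residual.std(ddof=1)

# Random error per range bin, obtained from the single profile alone.
random_error = nsf_est * np.sqrt(smooth)

print(f"true NSF {true_nsf:.2f} vs estimated NSF {nsf_est:.2f}")
print(f"relative random error: {random_error[0]/smooth[0]:.2%} near range, "
      f"{random_error[-window]/smooth[-window]:.2%} far range")
```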

  8. A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation

    NASA Technical Reports Server (NTRS)

    Tessler, A.; Riggs, H. R.; Dambach, M.

    1998-01-01

    A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation is originated with a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from enforcing explicitly a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.

  9. Superconvergence and recovery type a posteriori error estimation for hybrid stress finite element method

    NASA Astrophysics Data System (ADS)

    Bai, YanHong; Wu, YongKe; Xie, XiaoPing

    2016-09-01

    Superconvergence and a posteriori error estimators of recovery type are analyzed for the 4-node hybrid stress quadrilateral finite element method proposed by Pian and Sumihara (Int. J. Numer. Meth. Engrg., 1984, 20: 1685-1695) for linear elasticity problems. Uniform superconvergence of order $O(h^{1+\min\{\alpha,1\}})$ with respect to the Lamé constant $\lambda$ is established for both the recovered gradients of the displacement vector and the stress tensor under a mesh assumption, where $\alpha>0$ is a parameter characterizing the distortion of meshes from parallelograms to quadrilaterals. A posteriori error estimators based on the recovered quantities are shown to be asymptotically exact. Numerical experiments confirm the theoretical results.

  10. Error estimation and adaptive order nodal method for solving multidimensional transport problems

    SciTech Connect

    Zamonsky, O.M.; Gho, C.J.; Azmy, Y.Y.

    1998-01-01

    The authors propose a modification of the Arbitrarily High Order Transport Nodal method whereby they solve each node and each direction using a different expansion order. With this feature and a previously proposed a posteriori error estimator, they develop an adaptive order scheme to automatically improve the accuracy of the solution of the transport equation. They implemented the modified nodal method, the error estimator and the adaptive order scheme into a discrete-ordinates code for solving monoenergetic, fixed source, isotropic scattering problems in two-dimensional Cartesian geometry. They solve two test problems with large homogeneous regions to test the adaptive order scheme. The results show that using the adaptive process the storage requirements are reduced while preserving the accuracy of the results.

  11. A New Stratified Sampling Procedure which Decreases Error Estimation of Varroa Mite Number on Sticky Boards.

    PubMed

    Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y

    2015-06-01

    A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on sticky bottom boards of the hive. It is based on the spatial sampling theory that recommends using regular grid stratification in the case of spatially structured process. The distribution of varroa mites on sticky board being observed as spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are exposed on the basis of a large sample of simulated sticky boards (n=20,000) which provides a complete range of spatial structures, from a random structure to a highly frame driven structure. The improvement of varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level.

  13. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.

  14. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  15. Estimating sampling error of evolutionary statistics based on genetic covariance matrices using maximum likelihood.

    PubMed

    Houle, D; Meyer, K

    2015-08-01

    We explore the estimation of uncertainty in evolutionary parameters using a recently devised approach for resampling entire additive genetic variance-covariance matrices (G). Large-sample theory shows that maximum-likelihood estimates (including restricted maximum likelihood, REML) asymptotically have a multivariate normal distribution, with covariance matrix derived from the inverse of the information matrix, and mean equal to the estimated G. This suggests that sampling estimates of G from this distribution can be used to assess the variability of estimates of G, and of functions of G. We refer to this as the REML-MVN method. This has been implemented in the mixed-model program WOMBAT. Estimates of sampling variances from REML-MVN were compared to those from the parametric bootstrap and from a Bayesian Markov chain Monte Carlo (MCMC) approach (implemented in the R package MCMCglmm). We apply each approach to evolvability statistics previously estimated for a large, 20-dimensional data set for Drosophila wings. REML-MVN and MCMC sampling variances are close to those estimated with the parametric bootstrap. Both slightly underestimate the error in the best-estimated aspects of the G matrix. REML analysis supports the previous conclusion that the G matrix for this population is full rank. REML-MVN is computationally very efficient, making it an attractive alternative to both data resampling and MCMC approaches to assessing confidence in parameters of evolutionary interest. PMID:26079756
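
    The REML-MVN resampling step itself is short. In the sketch below the G matrix and the sampling covariance of its unique elements are placeholders (in a real analysis both would come from the REML software, e.g. the inverse information matrix reported by WOMBAT); the statistic of interest here is the leading eigenvalue of G.

```python
import numpy as np

rng = np.random.default_rng(11)

# Placeholder REML output (illustrative, not the Drosophila wing data): a 3x3 additive
# genetic covariance matrix and the sampling covariance of its unique (vech) elements.
G_hat = np.array([[2.0, 0.8, 0.3],
                  [0.8, 1.5, 0.4],
                  [0.3, 0.4, 1.0]])
k = G_hat.shape[0]
iu = np.triu_indices(k)
n_par = len(iu[0])                      # 6 unique elements for a 3x3 matrix
vech_hat = G_hat[iu]
vech_cov = 0.01 * np.eye(n_par)         # placeholder for the REML sampling covariance

def unvech(v):
    """Rebuild a symmetric matrix from its upper-triangle elements."""
    M = np.zeros((k, k))
    M[iu] = v
    return M + np.triu(M, 1).T

# REML-MVN step: draw parameter vectors from N(vech_hat, vech_cov), rebuild G,
# and evaluate the statistic of interest for each draw.
n_draws = 5000
draws = rng.multivariate_normal(vech_hat, vech_cov, size=n_draws)
stat = np.array([np.linalg.eigvalsh(unvech(v))[-1] for v in draws])   # leading eigenvalue

point = np.linalg.eigvalsh(G_hat)[-1]
lo_ci, hi_ci = np.percentile(stat, [2.5, 97.5])
print(f"leading eigenvalue of G_hat : {point:.3f}")
print(f"REML-MVN standard error     : {stat.std(ddof=1):.3f}")
print(f"95% interval                : [{lo_ci:.3f}, {hi_ci:.3f}]")
```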

  16. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  17. Estimation of Aperture Errors with Direct Interferometer-Output Feedback for Spacecraft Formation Control

    NASA Technical Reports Server (NTRS)

    Lu, Hui-Ling; Cheng, Victor H. L.; Leitner, Jesse A.; Carpenter, Kenneth G.

    2004-01-01

    Long-baseline space interferometers involving formation flying of multiple spacecraft hold great promise as future space missions for high-resolution imagery. The major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to control these spacecraft and their optics payloads in the specified configuration accurately. In this paper, we describe our effort toward fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present an estimation procedure that effectively extracts relative x/y translational exit pupil aperture deviations from the raw interferometric image with small estimation errors.

  18. Density functionals for surface science: Exchange-correlation model development with Bayesian error estimation

    NASA Astrophysics Data System (ADS)

    Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.

    2012-06-01

    A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.

  19. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8% with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
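
    An error estimate of this kind can be obtained from a Richardson-style extrapolation across grid levels. The coefficient values, refinement ratio, and resulting observed order below are hypothetical placeholders, not results from the Ares I analysis.

```python
import math

def extrapolate(f_fine, f_coarse, r, p):
    """Extrapolated (infinite-grid) value from two grid levels with refinement ratio r
    and observed order of convergence p."""
    return f_fine + (f_fine - f_coarse) / (r**p - 1.0)

# Hypothetical force-coefficient values on three grids with a uniform refinement ratio.
f_coarse, f_medium, f_fine = 0.512, 0.497, 0.491
r = 1.5

# Observed order of convergence estimated from the three levels.
p_obs = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

f_inf = extrapolate(f_fine, f_medium, r, p_obs)
deviation_pct = 100.0 * abs(f_fine - f_inf) / abs(f_inf)

print(f"observed order of convergence   : {p_obs:.2f}")
print(f"extrapolated infinite-grid value: {f_inf:.4f}")
print(f"estimated error of finest grid  : {deviation_pct:.2f}%")
```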

  20. Estimate of precession and polar motion errors from planetary encounter station location solutions

    NASA Technical Reports Server (NTRS)

    Pease, G. E.

    1978-01-01

    Jet Propulsion Laboratory Deep Space Station (DSS) location solutions based on two JPL planetary ephemerides, DE 84 and DE 96, at eight planetary encounters were used to obtain weighted least squares estimates of precession and polar motion errors. The solution for precession error in right ascension yields a value of 0.3 × 10^-5 ± 0.8 × 10^-6 deg/year. This maps to a right ascension error of 1.3 × 10^-5 ± 0.4 × 10^-5 deg at the first Voyager 1979 Jupiter encounter if the current JPL DSS location set is used. Solutions for precession and polar motion using station locations based on DE 84 agree well with the solution using station locations referenced to DE 96. The precession solution removes the apparent drift in station longitude and spin axis distance estimates, while the encounter polar motion solutions consistently decrease the scatter in station spin axis distance estimates.

  1. Accuracy and sampling error of two age estimation techniques using rib histomorphometry on a modern sample.

    PubMed

    García-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F

    2016-02-01

    Most age estimation methods have proven problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when used in legal settings (Crowder and Rosella, 2007). This study tested the Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a Modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments. Two thin sections were acquired from each segment. A total of 36 thin sections were prepared and analysed. Four variables (cortical area, intact and fragmented osteon density and osteon population density) were calculated for each section and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of the age of the individuals (by a maximum of 43 years), although a general improvement in accuracy levels was observed when applying the Stout et al. (1994) formula. Error rates increase with age, with the oldest individual showing extreme differences between real age and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results.

  3. Error analysis of leaf area estimates made from allometric regression models

    NASA Technical Reports Server (NTRS)

    Feiveson, A. H.; Chhikara, R. S.

    1986-01-01

    Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).

  4. A semiempirical error estimation technique for PWV derived from atmospheric radiosonde data

    NASA Astrophysics Data System (ADS)

    Castro-Almazán, Julio A.; Pérez-Jordán, Gabriel; Muñoz-Tuñón, Casiana

    2016-09-01

    A semiempirical method for estimating the error and optimum number of sampled levels in precipitable water vapour (PWV) determinations from atmospheric radiosoundings is proposed. Two terms have been considered: the uncertainties in the measurements and the sampling error. Also, the uncertainty has been separated in the variance and covariance components. The sampling and covariance components have been modelled from an empirical dataset of 205 high-vertical-resolution radiosounding profiles, equipped with Vaisala RS80 and RS92 sondes at four different locations: Güímar (GUI) in Tenerife, at sea level, and the astronomical observatory at Roque de los Muchachos (ORM, 2300 m a.s.l.) on La Palma (both on the Canary Islands, Spain), Lindenberg (LIN) in continental Germany, and Ny-Ålesund (NYA) in the Svalbard Islands, within the Arctic Circle. The balloons at the ORM were launched during intensive and unique site-testing runs carried out in 1990 and 1995, while the data for the other sites were obtained from radiosounding stations operating for a period of 1 year (2013-2014). The PWV values ranged between ˜ 0.9 and ˜ 41 mm. The method sub-samples the profile for error minimization. The result is the minimum error and the optimum number of levels. The results obtained in the four sites studied showed that the ORM is the driest of the four locations and the one with the fastest vertical decay of PWV. The exponential autocorrelation pressure lags ranged from 175 hPa (ORM) to 500 hPa (LIN). The results show a coherent behaviour with no biases as a function of the profile. The final error is roughly proportional to PWV whereas the optimum number of levels (N0) is the reverse. The value of N0 is less than 400 for 77 % of the profiles and the absolute errors are always < 0.6 mm. The median relative error is 2.0 ± 0.7 % and the 90th percentile P90 = 4.6 %. Therefore, whereas a radiosounding samples at least N0 uniform vertical levels, depending on the water
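
    The PWV integral and the effect of thinning the vertical sampling can be sketched with a synthetic profile; the humidity profile, noise level, and uniform sub-sampling below are illustrative and much simpler than the paper's error-minimizing sub-sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic high-resolution sounding: pressure from the surface up to 100 hPa and a
# specific humidity that decays with height, plus measurement noise.
p = np.linspace(101300.0, 10000.0, 400)                 # Pa (descending)
q = 0.012 * np.exp((p - p[0]) / 25000.0)                # kg/kg
q = np.clip(q + rng.normal(0.0, 2e-5, p.size), 0.0, None)

G, RHO_W = 9.81, 1000.0                                 # m/s^2, kg/m^3

def pwv_mm(pressure, spec_hum):
    """Precipitable water vapour (mm) from the integral of q over pressure."""
    integral = np.sum((spec_hum[1:] + spec_hum[:-1]) * np.abs(np.diff(pressure))) / 2.0
    return integral / (G * RHO_W) * 1000.0              # Pa * kg/kg -> m -> mm

full = pwv_mm(p, q)
print(f"PWV from all {p.size} levels: {full:.2f} mm")

# Thin the profile to fewer, evenly spaced levels and look at the resulting error.
for n_levels in (200, 100, 50, 20, 10):
    idx = np.linspace(0, p.size - 1, n_levels).round().astype(int)
    sub = pwv_mm(p[idx], q[idx])
    print(f"{n_levels:3d} levels -> PWV {sub:5.2f} mm (error {sub - full:+.2f} mm)")
```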

  5. Towards integrated error estimation and lag-aware data assimilation for operational streamflow forecasting

    NASA Astrophysics Data System (ADS)

    Li, Y.; Ryu, D.; Western, A. W.; Wang, Q.; Robertson, D.; Crow, W. T.

    2013-12-01

    Timely and reliable streamflow forecasting with acceptable accuracy is fundamental for flood response and risk management. However, streamflow forecasting models are subject to uncertainties from inputs, state variables, model parameters and structures. This has led to an ongoing development of methods for uncertainty quantification (e.g. generalized likelihood and Bayesian approaches) and methods for uncertainty reduction (e.g. sequential and variational data assimilation approaches). These two classes of methods are distinct yet related, e.g., the validity of data assimilation is essentially determined by the reliability of error specification. Error specification has been one of the most challenging areas in hydrologic data assimilation and there is a major opportunity for implementing uncertainty quantification approaches to inform both model and observation uncertainties. In this study, ensemble data assimilation methods are combined with the maximum a posteriori (MAP) error estimation approach to construct an integrated error estimation and data assimilation scheme for operational streamflow forecasting. We contrast the performance of two different data assimilation schemes: a lag-aware ensemble Kalman smoother (EnKS) and the conventional ensemble Kalman filter (EnKF). The schemes are implemented for a catchment upstream of Myrtleford in the Ovens river basin, Australia, to assimilate real-time discharge observations into a conceptual catchment model, modèle du Génie Rural à 4 paramètres Horaire (GR4H). The performance of the integrated system is evaluated in both a synthetic forecasting scenario with observed precipitation and an operational forecasting scenario with Numerical Weather Prediction (NWP) forecast rainfall. The results show that the error parameters estimated by the MAP approach generate a reliable spread of streamflow predictions. Continuous state updating reduces uncertainty in initial states and thereby improves the forecasting accuracy

  6. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    PubMed

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, Shannon information estimates, which are based on power spectrum estimation, necessarily contain two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The chosen window function type and size, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, cancelling each other and allowing minimally biased measurement of neural coding.
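
    The window-size trade-off can be reproduced with a toy delayed, noisy channel and a Welch coherence estimate; the sampling rate, delay, noise level, and window sizes below are invented. Short windows lose the delayed correlation (delay bias pulls the rate estimate down), while very long windows leave few segments to average (random error pushes it up).

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(9)

fs, n, delay = 1000.0, 2**16, 50                      # Hz, samples, delay in samples (50 ms)
x = rng.normal(size=n)                                 # input "stimulus" contrast
y = np.roll(x, delay) + 0.7 * rng.normal(size=n)       # delayed, noisy "response"

def info_rate(nperseg):
    """Shannon-style rate estimate R = integral of log2(1 / (1 - coherence)) df."""
    f, coh = signal.coherence(x, y, fs=fs, nperseg=nperseg)
    coh = np.clip(coh, 0.0, 0.999)                     # avoid log of zero
    df = f[1] - f[0]
    return np.sum(np.log2(1.0 / (1.0 - coh))) * df

for nperseg in (64, 128, 256, 512, 1024, 4096, 16384):
    print(f"window {nperseg:6d} samples -> estimated rate {info_rate(nperseg):7.1f} bit/s")
```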

  7. Sieve Estimation of Constant and Time-Varying Coefficients in Nonlinear Ordinary Differential Equation Models by Considering Both Numerical Error and Measurement Error

    PubMed Central

    Xue, Hongqi; Miao, Hongyu; Wu, Hulin

    2010-01-01

    This article considers estimation of constant and time-varying coefficients in nonlinear ordinary differential equation (ODE) models where analytic closed-form solutions are not available. The numerical solution-based nonlinear least squares (NLS) estimator is investigated in this study. A numerical algorithm such as the Runge–Kutta method is used to approximate the ODE solution. The asymptotic properties are established for the proposed estimators considering both numerical error and measurement error. The B-spline is used to approximate the time-varying coefficients, and the corresponding asymptotic theories in this case are investigated under the framework of the sieve approach. Our results show that if the maximum step size of the p-order numerical algorithm goes to zero at a rate faster than n^{-1/(p∧4)}, the numerical error is negligible compared to the measurement error. This result provides theoretical guidance in selection of the step size for numerical evaluations of ODEs. Moreover, we have shown that the numerical solution-based NLS estimator and the sieve NLS estimator are strongly consistent. The sieve estimator of constant parameters is asymptotically normal with the same asymptotic covariance as that of the case where the true ODE solution is exactly known, while the estimator of the time-varying parameter has the optimal convergence rate under some regularity conditions. The theoretical results are also developed for the case when the step size of the ODE numerical solver does not go to zero fast enough or the numerical error is comparable to the measurement error. We illustrate our approach with both simulation studies and clinical data on HIV viral dynamics. PMID:21132064
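
    The interplay between solver step size and measurement error can be illustrated with a fixed-step Runge-Kutta solver inside a nonlinear least-squares fit; the one-parameter decay model and the step sizes below are invented and far simpler than the HIV viral dynamics models in the article.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)

# Illustrative ODE dx/dt = -theta * x with one unknown constant coefficient.
theta_true, x_init = 0.7, 5.0
t_obs = np.linspace(0.0, 8.0, 40)
y_obs = x_init * np.exp(-theta_true * t_obs) + rng.normal(0.0, 0.1, t_obs.size)

def rk4_solution(theta, h):
    """Fixed-step classical Runge-Kutta (order 4) solution evaluated at t_obs."""
    n_steps = int(round(t_obs[-1] / h))
    grid = np.linspace(0.0, t_obs[-1], n_steps + 1)
    step = grid[1] - grid[0]
    x = np.empty(n_steps + 1)
    x[0] = x_init
    f = lambda xx: -theta * xx
    for i in range(n_steps):
        k1 = f(x[i])
        k2 = f(x[i] + 0.5 * step * k1)
        k3 = f(x[i] + 0.5 * step * k2)
        k4 = f(x[i] + step * k3)
        x[i + 1] = x[i] + step * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return np.interp(t_obs, grid, x)

def fit_theta(h):
    """Numerical-solution-based NLS estimate of theta for a given solver step size."""
    res = least_squares(lambda th: rk4_solution(th[0], h) - y_obs, x0=[0.3])
    return res.x[0]

# A coarse step leaves visible numerical error on top of the measurement error;
# a fine step makes the numerical error negligible in comparison.
for h in (2.0, 0.5, 0.05):
    print(f"step size h = {h:4.2f} -> theta_hat = {fit_theta(h):.4f} (true {theta_true})")
```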

  8. Flood samples from a three-parameter lognormal population with historic information: The asymptotic standard error of estimate of the T-year flood

    NASA Astrophysics Data System (ADS)

    Condie, Robert

    1986-06-01

    The series of annual peak flows obtained from a recent continuous flow record, together with any historic floods or information, are treated as a censored sample from a three-parameter lognormal population. The logarithmic likelihood function is presented in terms of the fully specified floods, the historic information with the censoring threshold, and the parameters to be determined. Maximum likelihood estimators are given as a set of three transcendental equations, which when solved give maximum likelihood estimates of parameters. The T-year flood is expressible as a function of these parameters and the standard normal variate t. These parameters are subject to sampling variances and covariances whereas t is not. From the logarithmic likelihood function, the inverse variance-covariance matrix is derived; its inversion gives the sampling variances and covariances of the parameters. Entering these into the general equation for the variance of the estimate of a function of three variables leads to the asymptotic standard error of estimate of the T-year flood. The method is illustrated by its application to a river with historic data, where only 10 yrs of overbank flows were available in a historic period of 35 yrs prior to the collection of a systematic record. The value of the historic information is assessed in terms of reduction of the standard error of estimate, and the 10 yrs of overbank flows together with the historic information are roughly equivalent to a 26 yr extension of the systematic record.

  9. Estimating and comparing microbial diversity in the presence of sequencing errors

    PubMed Central

    Chiu, Chun-Huo

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures’ emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This
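
    Hill numbers themselves are simple to compute once an abundance vector (with the corrected singleton count substituted in) is available; the abundance vector below is a made-up example, and the paper's estimator for the true singleton count is not reproduced here.

```python
import numpy as np

def hill_number(abundances, q):
    """Hill number (effective number of taxa) of order q from abundance counts."""
    counts = np.asarray(abundances, dtype=float)
    p = counts[counts > 0] / counts.sum()
    if np.isclose(q, 1.0):
        return np.exp(-np.sum(p * np.log(p)))       # exponential of Shannon entropy
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

# Hypothetical OTU abundance vector with an inflated singleton count; the true-singleton
# estimator described above would replace the observed singleton count before this step.
abund = np.array([120, 85, 60, 33, 20, 11, 7, 4, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1])

for q in (0, 1, 2):
    print(f"Hill number of order q={q}: {hill_number(abund, q):.2f}")
```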

  10. Estimating and comparing microbial diversity in the presence of sequencing errors.

    PubMed

    Chiu, Chun-Huo; Chao, Anne

    2016-01-01

    Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures' emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. This approach

  12. Eliminating Obliquity Error from the Estimation of Ionospheric Delay in a Satellite-Based Augmentation System

    NASA Technical Reports Server (NTRS)

    Sparks, Lawrence

    2013-01-01

    Current satellite-based augmentation systems estimate ionospheric delay using algorithms that assume the electron density of the ionosphere is non-negligible only in a thin shell located near the peak of the actual profile. In its initial operating capability, for example, the Wide Area Augmentation System incorporated the thin shell model into an estimation algorithm that calculates vertical delay using a planar fit. Under disturbed conditions or at low latitude where ionospheric structure is complex, however, the thin shell approximation can serve as a significant source of estimation error. A recent upgrade of the system replaced the planar fit algorithm with an algorithm based upon kriging. The upgrade owes its success, in part, to the ability of kriging to mitigate the error due to this approximation. Previously, alternative delay estimation algorithms have been proposed that eliminate the need for invoking the thin shell model altogether. Prior analyses have compared the accuracy achieved by these methods to the accuracy achieved by the planar fit algorithm. This paper extends these analyses to include a comparison with the accuracy achieved by kriging. It concludes by examining how a satellite-based augmentation system might be implemented without recourse to the thin shell approximation.

  13. Compensation technique for the intrinsic error in ultrasound motion estimation using a speckle tracking method

    NASA Astrophysics Data System (ADS)

    Taki, Hirofumi; Yamakawa, Makoto; Shiina, Tsuyoshi; Sato, Toru

    2015-07-01

    High-accuracy ultrasound motion estimation has become an essential technique in blood flow imaging, elastography, and motion imaging of the heart wall. Speckle tracking has been one of the best motion estimators; however, conventional speckle-tracking methods neglect the effect of out-of-plane motion and deformation. Our proposed method assumes that the cross-correlation between a reference signal and a comparison signal depends on the spatio-temporal distance between the two signals. The proposed method uses the decrease in the cross-correlation value in a reference frame to compensate for the intrinsic error caused by out-of-plane motion and deformation without a priori information. The root-mean-square error of the estimated lateral tissue motion velocity calculated by the proposed method ranged from 6.4 to 34% of that using a conventional speckle-tracking method. This study demonstrates the high potential of the proposed method for improving the estimation of tissue motion using an ultrasound speckle-tracking method in medical diagnosis.

  14. Improving occupancy estimation when two types of observational error occur: Non-detection and species misidentification

    USGS Publications Warehouse

    Miller, David A.; Nichols, J.D.; McClintock, B.T.; Grant, E.H.C.; Bailey, L.L.; Weir, L.A.

    2011-01-01

Efforts to draw inferences about species occurrence frequently account for false negatives, the common situation when individuals of a species are not detected even when a site is occupied. However, recent studies suggest the need to also deal with false positives, which occur when species are misidentified so that a species is recorded as detected when a site is unoccupied. Bias in estimators of occupancy, colonization, and extinction can be severe when false positives occur. Accordingly, we propose models that simultaneously account for both types of error. Our approach can be used to improve estimates of occupancy for study designs where a subset of detections is of a type or method for which false positives can be assumed to not occur. We illustrate properties of the estimators with simulations and data for three species of frogs. We show that models that account for possible misidentification have greater support (lower AIC for two species) and can yield substantially different occupancy estimates than those that do not. When the potential for misidentification exists, researchers should consider analytical techniques that can account for this source of error, such as those presented here. © 2011 by the Ecological Society of America.

  15. Inversion for mantle viscosity profiles constrained by dynamic topography and the geoid, and their estimated errors

    NASA Astrophysics Data System (ADS)

    Panasyuk, Svetlana V.; Hager, Bradford H.

    2000-12-01

    We perform a joint inversion of Earth's geoid and dynamic topography for radial mantle viscosity structure using a number of models of interior density heterogeneities, including an assessment of the error budget. We identify three classes of errors: those related to the density perturbations used as input, those due to insufficiently constrained observables, and those due to the limitations of our analytical model. We estimate the amplitudes of these errors in the spectral domain. Our minimization function weights the squared deviations of the compared quantities with the corresponding errors, so that the components with more reliability contribute to the solution more strongly than less certain ones. We develop a quasi-analytical solution for mantle flow in a compressible, spherical shell with Newtonian rheology, allowing for continuous radial variations of viscosity, together with a possible reduction of viscosity within the phase change regions due to the effects of transformational superplasticity. The inversion reveals three distinct families of viscosity profiles, all of which have an order of magnitude stiffening within the lower mantle, with a soft D'' layer below. The main distinction among the families is the location of the lowest-viscosity region-directly beneath the lithosphere, just above 400km depth or just above 670km depth. All profiles have a reduction of viscosity within one or more of the major phase transformations, leading to reduced dynamic topography, so that whole-mantle convection is consistent with small surface topography.

  16. SANG-a kernel density estimator incorporating information about the measurement error

    NASA Astrophysics Data System (ADS)

    Hayes, Robert

The analysis of nominally large data sets having a measurement error unique to each entry is evaluated with a novel technique. This work begins with a review of modern analytical methodologies such as histogramming data, ANOVA, and regression (weighted and unweighted), along with various error propagation and estimation techniques. It is shown that by assuming the errors obey a functional distribution (such as normal or Poisson), a superposition of the assumed forms then provides the most comprehensive and informative graphical depiction of the data set's statistical information. The resultant approach is evaluated only for normally distributed errors so that the method is effectively a Superposition Analysis of Normalized Gaussians (SANG). SANG is shown to be easily calculated and highly informative in a single graph from what would otherwise require multiple analyses and figures to accomplish the same result. The work is demonstrated using historical radiochemistry measurements from a transuranic waste geological repository's environmental monitoring program. This work was paid for under NRC-HQ-84-14-G-0059.

  17. Optimum data weighting and error calibration for estimation of gravitational parameters

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.

    1989-01-01

    A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or other parameters than the gravity model.

  18. Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains

    NASA Technical Reports Server (NTRS)

    Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang

    2013-01-01

Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple collocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting a RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high/low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple collocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
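
    The fractional RMSE described above can be illustrated with the standard covariance formulation of triple collocation. The sketch below assumes three collocated anomaly time series with mutually independent, zero-mean errors; it is a generic textbook implementation, not the processing used for the ASCAT/AMSR-E study, and the synthetic data are invented.

```python
import numpy as np

def triple_collocation_frmse(x, y, z):
    """Estimate the error std of three collocated anomaly series with independent
    errors, returned as a fraction of each series' own std (fRMSE)."""
    x, y, z = (np.asarray(a, float) for a in (x, y, z))
    C = np.cov(np.vstack([x, y, z]))            # 3x3 sample covariance matrix
    # Covariance-notation triple collocation: err_var(x) = Cxx - Cxy*Cxz/Cyz, etc.
    ex2 = C[0, 0] - C[0, 1] * C[0, 2] / C[1, 2]
    ey2 = C[1, 1] - C[0, 1] * C[1, 2] / C[0, 2]
    ez2 = C[2, 2] - C[0, 2] * C[1, 2] / C[0, 1]
    err = np.sqrt(np.clip([ex2, ey2, ez2], 0.0, None))
    return err / np.sqrt(np.diag(C))             # fRMSE for x, y, z

# Synthetic example: one "truth" series observed by three noisy products
rng = np.random.default_rng(0)
truth = rng.standard_normal(5000)
obs = [truth + s * rng.standard_normal(5000) for s in (0.3, 0.5, 0.7)]
print(triple_collocation_frmse(*obs))
```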

  19. PEET: a Matlab tool for estimating physical gate errors in quantum information processing systems

    NASA Astrophysics Data System (ADS)

    Hocker, David; Kosut, Robert; Rabitz, Herschel

    2016-09-01

A Physical Error Estimation Tool (PEET) is introduced in Matlab for predicting physical gate errors of quantum information processing (QIP) operations by constructing and then simulating gate sequences for a wide variety of user-defined, Hamiltonian-based physical systems. PEET is designed to accommodate the interdisciplinary needs of quantum computing design by assessing gate performance for users familiar with the underlying physics of QIP, as well as those interested in higher-level computing operations. The structure of PEET separates the bulk of the physical details of a system into Gate objects, while the construction of quantum computing gate operations is contained in GateSequence objects. Gate errors are estimated by Monte Carlo sampling of noisy gate operations. The main utility of PEET, though, is the implementation of QuantumControl methods that act to generate and then test gate sequence and pulse-shaping techniques for QIP performance. This work details the structure of PEET and gives instructive examples for its operation.

  20. Adjoint-based error estimation and mesh adaptation for the correction procedure via reconstruction method

    NASA Astrophysics Data System (ADS)

    Shi, Lei; Wang, Z. J.

    2015-08-01

    Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction formulation (CPR) to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimate for the output with the CPR method are obtained. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost comparing with uniform grid refinement.

  1. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  2. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  3. Regional estimation of groundwater arsenic concentrations through systematical dynamic-neural modeling

    NASA Astrophysics Data System (ADS)

    Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min

    2013-08-01

Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network and four statistical techniques: the Nonlinear Autoregressive with eXogenous input (NARX) network, Gamma test, cross-validation, Bayesian regularization method and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (Root mean square error (RMSE): 95.11 μg l⁻¹ for training; 106.13 μg l⁻¹ for validation) outperforms the BPNN (RMSE: 121.54 μg l⁻¹ for training; 143.37 μg l⁻¹ for validation). The constructed SDM can provide reliable estimation (R² > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca²⁺ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 μg l⁻¹) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the

  4. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    SciTech Connect

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  5. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    NASA Astrophysics Data System (ADS)

    Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif

    2015-12-01

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  6. Systematic Errors in Stereo PIV When Imaging through a Glass Window

    NASA Technical Reports Server (NTRS)

    Green, Richard; McAlister, Kenneth W.

    2004-01-01

This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window and where the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window, otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better expended in performing a more lengthy volumetric calibration procedure, which does not rely upon the assumptions implicit in the single plane method and avoids the need for the perspective angle to be calculated.

  7. Estimating regression coefficients from clustered samples: Sampling errors and optimum sample allocation

    NASA Technical Reports Server (NTRS)

    Kalton, G.

    1983-01-01

    A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratio of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, they also determine the optimum allocation of the sample across the stages of the sample design for the estimation of a regression coefficient.
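
    A common shorthand for the effect of clustering on the precision of an estimated coefficient is the Kish design effect, deff = 1 + (b − 1)ρ, where b is the average cluster size and ρ the intraclass correlation. The sketch below applies that approximation to inflate a simple-random-sampling standard error; the paper's own formulae for two- and three-stage designs are more detailed, and the numbers here are invented.

```python
import math

def clustered_se(se_srs, mean_cluster_size, icc):
    """Inflate a simple-random-sampling standard error by the design effect
    deff = 1 + (b - 1) * rho for a clustered sample (Kish approximation)."""
    deff = 1.0 + (mean_cluster_size - 1.0) * icc
    return se_srs * math.sqrt(deff)

# Example: noise-annoyance survey, 30 respondents per area, modest clustering
print(clustered_se(se_srs=0.12, mean_cluster_size=30, icc=0.05))  # ~0.19
```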

  8. Calibration and systematic error analysis for the COBE DMR 4-year sky maps

    SciTech Connect

    Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.

    1996-01-04

The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to a mean sensitivity of 26 μK per 7° field of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 μK in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.

  9. Sherborn's Index Animalium: New names, systematic errors and availability of names in the light of modern nomenclature.

    PubMed

    Welter-Schultes, Francisco; Görlich, Angela; Lutze, Alexandra

    2016-01-01

    This study is aimed to shed light on the reliability of Sherborn's Index Animalium in terms of modern usage. The AnimalBase project spent several years' worth of teamwork dedicated to extracting new names from original sources in the period ranging from 1757 to the mid-1790s. This allowed us to closely analyse Sherborn's work and verify the completeness and correctness of his record. We found the reliability of Sherborn's resource generally very high, but in some special situations the reliability was reduced due to systematic errors or incompleteness in source material. Index Animalium is commonly used by taxonomists today who rely strongly on Sherborn's record; our study is directed most pointedly at those users. We recommend paying special attention to the situations where we found that Sherborn's data should be read with caution. In addition to some categories of systematic errors and mistakes that were Sherborn's own responsibility, readers should also take into account that nomenclatural rules have been changed or refined in the past 100 years, and that Sherborn's resource could eventually present outdated information. One of our main conclusions is that error rates in nomenclatoral compilations tend to be lower if one single and highly experienced person such as Sherborn carries out the work, than if a team is trying to do the task. Based on our experience with extracting names from original sources we came to the conclusion that error rates in such a manual work on names in a list are difficult to reduce below 2-4%. We suggest this is a natural limit and a point of diminishing returns for projects of this nature. PMID:26877658

  10. Sherborn’s Index Animalium: New names, systematic errors and availability of names in the light of modern nomenclature

    PubMed Central

    Welter-Schultes, Francisco; Görlich, Angela; Lutze, Alexandra

    2016-01-01

    Abstract This study is aimed to shed light on the reliability of Sherborn’s Index Animalium in terms of modern usage. The AnimalBase project spent several years’ worth of teamwork dedicated to extracting new names from original sources in the period ranging from 1757 to the mid-1790s. This allowed us to closely analyse Sherborn’s work and verify the completeness and correctness of his record. We found the reliability of Sherborn’s resource generally very high, but in some special situations the reliability was reduced due to systematic errors or incompleteness in source material. Index Animalium is commonly used by taxonomists today who rely strongly on Sherborn’s record; our study is directed most pointedly at those users. We recommend paying special attention to the situations where we found that Sherborn’s data should be read with caution. In addition to some categories of systematic errors and mistakes that were Sherborn’s own responsibility, readers should also take into account that nomenclatural rules have been changed or refined in the past 100 years, and that Sherborn’s resource could eventually present outdated information. One of our main conclusions is that error rates in nomenclatoral compilations tend to be lower if one single and highly experienced person such as Sherborn carries out the work, than if a team is trying to do the task. Based on our experience with extracting names from original sources we came to the conclusion that error rates in such a manual work on names in a list are difficult to reduce below 2–4%. We suggest this is a natural limit and a point of diminishing returns for projects of this nature. PMID:26877658

  11. Temporally diffeomorphic cardiac motion estimation from three-dimensional echocardiography by minimization of intensity consistency error

    PubMed Central

    Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J.; Song, Xubo

    2014-01-01

    Purpose: Quantitative analysis of cardiac motion is important for evaluation of heart function. Three dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. Methods: The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field instead of the displacement field was optimized. The optimal velocity field optimizes a novel similarity function, which we call the intensity consistency error, defined as multiple consecutive frames evolving to each time point. The optimization problem is solved by using the steepest descent method. Results: Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors’ method. Simulated and real cardiac sequences tests showed that results in the authors’ method are more accurate than other competing temporal diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions have good agreement with ground truth and the authors’ method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors’ method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. Conclusions: The authors proposed a diffeomorphic motion estimation method with temporal smoothness by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors’ method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods. PMID:24784402

  12. mBEEF-vdW: Robust fitting of error estimation density functionals

    NASA Astrophysics Data System (ADS)

    Lundgaard, Keld T.; Wellendorff, Jess; Voss, Johannes; Jacobsen, Karsten W.; Bligaard, Thomas

    2016-06-01

    We propose a general-purpose semilocal/nonlocal exchange-correlation functional approximation, named mBEEF-vdW. The exchange is a meta generalized gradient approximation, and the correlation is a semilocal and nonlocal mixture, with the Rutgers-Chalmers approximation for van der Waals (vdW) forces. The functional is fitted within the Bayesian error estimation functional (BEEF) framework [J. Wellendorff et al., Phys. Rev. B 85, 235149 (2012), 10.1103/PhysRevB.85.235149; J. Wellendorff et al., J. Chem. Phys. 140, 144107 (2014), 10.1063/1.4870397]. We improve the previously used fitting procedures by introducing a robust MM-estimator based loss function, reducing the sensitivity to outliers in the datasets. To more reliably determine the optimal model complexity, we furthermore introduce a generalization of the bootstrap 0.632 estimator with hierarchical bootstrap sampling and geometric mean estimator over the training datasets. Using this estimator, we show that the robust loss function leads to a 10 % improvement in the estimated prediction error over the previously used least-squares loss function. The mBEEF-vdW functional is benchmarked against popular density functional approximations over a wide range of datasets relevant for heterogeneous catalysis, including datasets that were not used for its training. Overall, we find that mBEEF-vdW has a higher general accuracy than competing popular functionals, and it is one of the best performing functionals on chemisorption systems, surface energies, lattice constants, and dispersion. We also show the potential-energy curve of graphene on the nickel(111) surface, where mBEEF-vdW matches the experimental binding length. mBEEF-vdW is currently available in gpaw and other density functional theory codes through Libxc, version 3.0.0.
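
    The model-selection machinery mentioned above builds on the bootstrap 0.632 estimator of prediction error. The sketch below shows the standard (non-hierarchical) 0.632 estimator, err_632 = 0.368·err_train + 0.632·err_oob, with a plain least-squares fit standing in for the functional fit; the hierarchical sampling and geometric-mean generalization used for mBEEF-vdW are not reproduced, and all data are synthetic.

```python
import numpy as np

def bootstrap_632_error(model_fit, model_predict, X, y, n_boot=200, rng=None):
    """Standard .632 bootstrap estimate of prediction error (squared-error loss):
    err_632 = 0.368 * training error + 0.632 * out-of-bag bootstrap error."""
    rng = rng or np.random.default_rng(0)
    n = len(y)
    params = model_fit(X, y)
    err_train = np.mean((y - model_predict(params, X)) ** 2)
    oob_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                  # bootstrap resample of the data
        oob = np.setdiff1d(np.arange(n), idx)        # out-of-bag points left out
        if oob.size == 0:
            continue
        p = model_fit(X[idx], y[idx])
        oob_errs.append(np.mean((y[oob] - model_predict(p, X[oob])) ** 2))
    return 0.368 * err_train + 0.632 * np.mean(oob_errs)

# Example with a simple linear least-squares "functional fit"
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda w, X: X @ w
rng = np.random.default_rng(1)
X = rng.standard_normal((80, 5))
y = X @ np.arange(1.0, 6.0) + 0.3 * rng.standard_normal(80)
print(bootstrap_632_error(fit, predict, X, y))
```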

  13. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.

  14. A parametric multiclass Bayes error estimator for the multispectral scanner spatial model performance evaluation

    NASA Technical Reports Server (NTRS)

    Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)

    1978-01-01

The author has identified the following significant results. The probability of correct classification of various populations in data was defined as the primary performance index. The multispectral data, being of a multiclass nature as well, required a Bayes error estimation procedure that was dependent on a set of class statistics alone. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear shift-invariant multiple-port system where the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial, and hence spectral, correlation matrices through the system, was developed.

  15. New approaches to online estimation of electromagnetic tracking errors for laparoscopic ultrasonography.

    PubMed

    Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir

    2008-09-01

In abdominal surgery, a laparoscopic ultrasound transducer is commonly used to detect lesions such as metastases. The determination and visualization of the position and orientation of its flexible tip in relation to the patient or other surgical instruments can be a great support for surgeons using the transducer intraoperatively. This difficult subject has recently received attention from the scientific community. Electromagnetic tracking systems can be applied to track the flexible tip; however, current limitations of electromagnetic tracking include its accuracy and its susceptibility to interference, i.e., the magnetic field can be distorted by ferromagnetic material. This paper presents two novel methods for estimation of electromagnetic tracking error. Based on optical tracking of the laparoscope, as well as on magneto-optic and visual tracking of the transducer, these methods automatically detect in 85% of all cases whether tracking is erroneous or not, and reduce tracking errors by up to 2.5 mm.

  16. Systematic Review and Harmonization of Life Cycle GHG Emission Estimates for Electricity Generation Technologies (Presentation)

    SciTech Connect

    Heath, G.

    2012-06-01

This PowerPoint presentation, to be presented at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses the systematic review and harmonization of life cycle GHG emission estimates for electricity generation technologies.

  17. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. Its mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.
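
    As a point of reference for the estimator discussed above, the sketch below implements only the ordinary ridge estimator, beta(k) = (X'X + kI)^(-1) X'y, on an ill-conditioned design; the generalized difference-based, restricted version for partial linear models with dependent errors is not reproduced, and the data are synthetic.

```python
import numpy as np

def ridge_estimator(X, y, k):
    """Ordinary ridge estimator beta(k) = (X'X + k I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

rng = np.random.default_rng(4)
X = rng.standard_normal((60, 4))
X[:, 3] = X[:, 2] + 0.01 * rng.standard_normal(60)      # near-collinear columns
y = X @ np.array([1.0, -2.0, 0.5, 0.5]) + 0.2 * rng.standard_normal(60)
print(ridge_estimator(X, y, k=0.0))   # ordinary least squares (unstable here)
print(ridge_estimator(X, y, k=1.0))   # ridge shrinks the unstable coefficients
```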

  18. Estimation of cortical magnification from positional error in normally sighted and amblyopic subjects

    PubMed Central

    Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.

    2015-01-01

    We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm deg−1 at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341

  19. Analysis of open-loop conical scan pointing error and variance estimators

    NASA Technical Reports Server (NTRS)

    Alvarez, L. S.

    1993-01-01

    General pointing error and variance estimators for an open-loop conical scan (conscan) system are derived and analyzed. The conscan algorithm is modeled as a weighted least-squares estimator whose inputs are samples of receiver carrier power and its associated measurement uncertainty. When the assumptions of constant measurement noise and zero pointing error estimation are applied, the variance equation is then strictly a function of the carrier power to uncertainty ratio and the operator selectable radius and period input to the algorithm. The performance equation is applied to a 34-m mirror-based beam-waveguide conscan system interfaced with the Block V Receiver Subsystem tracking a Ka-band (32-GHz) downlink. It is shown that for a carrier-to-noise power ratio greater than or equal to 30 dB-Hz, the conscan period for Ka-band operation may be chosen well below the current DSN minimum of 32 sec. The analysis presented forms the basis of future conscan work in both research and development as well as for the upcoming DSN antenna controller upgrade for the new DSS-24 34-m beam-waveguide antenna.

  20. Systematic reduction of sign errors in many-body calculations of atoms and molecules

    SciTech Connect

    Bajdich, Michal; Tiago, Murilo L; Hood, Randolph Q.; Kent, Paul R; Reboredo, Fernando A

    2010-01-01

The self-healing diffusion Monte Carlo algorithm (SHDMC) [Phys. Rev. B 79, 195117 (2009); ibid. 80, 125110 (2009)] is applied to the calculation of ground states of atoms and molecules. By direct comparison with accurate configuration interaction results we show that applying the SHDMC method to the oxygen atom leads to systematic convergence towards the exact ground state wave function. We present results for the small but challenging N₂ molecule, where results obtained via the energy minimization method and SHDMC are within the experimental accuracy of 0.08 eV. Moreover, we demonstrate that the algorithm is robust enough to be used for the calculation of systems at least as large as C₂₀ starting from a set of random coefficients. SHDMC thus constitutes a practical method for systematically reducing the fermion sign problem in electronic structure calculations.

  1. Possible evidence for a variable fine-structure constant from QSO absorption lines: systematic errors

    NASA Astrophysics Data System (ADS)

    Murphy, M. T.; Webb, J. K.; Flambaum, V. V.; Churchill, C. W.; Prochaska, J. X.

    2001-11-01

Comparison of quasar (QSO) absorption spectra with laboratory spectra allows us to probe possible variations in the fundamental constants over cosmological time-scales. In a companion paper we present an analysis of Keck/HIRES spectra and report possible evidence suggesting that the fine-structure constant, α, may have been smaller in the past: Δα/α = (−0.72 ± 0.18) × 10⁻⁵ over the redshift range 0.5 < z < 3.5. Here we investigate potential systematic effects. Most of these do not significantly influence our results. When we correct for those which do produce a significant systematic effect in the data, the deviation of Δα/α from zero becomes more significant. We are led increasingly to the interpretation that α was slightly smaller in the past.

  2. Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations

    NASA Technical Reports Server (NTRS)

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang

    2006-01-01

In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).

  3. Estimating random errors due to shot noise in backscatter lidar observations.

    PubMed

    Liu, Zhaoyan; Hunt, William; Vaughan, Mark; Hostetler, Chris; McGill, Matthew; Powell, Kathleen; Winker, David; Hu, Yongxiang

    2006-06-20

    We discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson- distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root mean square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF, uncertainties can be reliably calculated from or for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations lidar and tested using data from the Lidar In-space Technology Experiment.
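
    The noise scale factor idea can be written in a few lines: for shot-noise-limited signals the RMS noise is NSF·√(mean signal), so an NSF estimated once from a steady region (e.g., solar background samples) assigns an uncertainty to any single measurement. The sketch below is a generic illustration with an invented detector gain, not the CALIPSO algorithm itself.

```python
import numpy as np

# For shot-noise-limited signals, var(S) is proportional to mean(S):
#   rms_noise = NSF * sqrt(mean_signal)
# Estimate NSF from a steady region, then attach a noise estimate to single samples.

def estimate_nsf(background_samples):
    s = np.asarray(background_samples, float)
    return np.std(s, ddof=1) / np.sqrt(np.mean(s))

def shot_noise_of(sample, nsf):
    """Random-error (1-sigma) estimate for a single measurement."""
    return nsf * np.sqrt(sample)

rng = np.random.default_rng(2)
gain = 30.0                                     # detector multiplication (arbitrary)
background = gain * rng.poisson(200.0, 10000)   # multiplied photocurrent samples
nsf = estimate_nsf(background)
print(nsf, shot_noise_of(6000.0, nsf))
```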

  4. Optimal error estimates for high order Runge-Kutta methods applied to evolutionary equations

    SciTech Connect

    McKinney, W.R.

    1989-01-01

Fully discrete approximations to 1-periodic solutions of the generalized Korteweg-de Vries and the Cahn-Hilliard equations are analyzed. These approximations are generated by an implicit Runge-Kutta method for the temporal discretization and a Galerkin finite element method for the spatial discretization. Furthermore, these approximations may be of arbitrarily high order. In particular, it is shown that the well-known order reduction phenomenon afflicting implicit Runge-Kutta methods does not occur. Numerical results supporting these optimal error estimates for the Korteweg-de Vries equation and indicating the existence of a slow motion manifold for the Cahn-Hilliard equation are also provided.

  5. Error in Estimates of Tissue Material Properties from Shear Wave Dispersion Ultrasound Vibrometry

    PubMed Central

    Urban, Matthew W.; Chen, Shigao; Greenleaf, James F.

    2009-01-01

    Shear wave velocity measurements are used in elasticity imaging to find the shear elasticity and viscosity of tissue. A technique called shear wave dispersion ultrasound vibrometry (SDUV) has been introduced to use the dispersive nature of shear wave velocity to locally estimate the material properties of tissue. Shear waves are created using a multifrequency ultrasound radiation force, and the propagating shear waves are measured a few millimeters away from the excitation point. The shear wave velocity is measured using a repetitive pulse-echo method and Kalman filtering to find the phase of the harmonic shear wave at 2 different locations. A viscoelastic Voigt model and the shear wave velocity measurements at different frequencies are used to find the shear elasticity (μ1) and viscosity (μ2) of the tissue. The purpose of this paper is to report the accuracy of the SDUV method over a range of different values of μ1 and μ2. A motion detection model of a vibrating scattering medium was used to analyze measurement errors of vibration phase in a scattering medium. To assess the accuracy of the SDUV method, we modeled the effects of phase errors on estimates of shear wave velocity and material properties while varying parameters such as shear stiffness and viscosity, shear wave amplitude, the distance between shear wave measurements (Δr), signal-to-noise ratio (SNR) of the ultrasound pulse-echo method, and the frequency range of the measurements. We performed an experiment in a section of porcine muscle to evaluate variation of the aforementioned parameters on the estimated shear wave velocity and material property measurements and to validate the error prediction model. The model showed that errors in the shear wave velocity and material property estimates were minimized by maximizing shear wave amplitude, pulse-echo SNR, Δr, and the bandwidth used for shear wave measurements. The experimental model showed optimum performance could be obtained for Δr = 3-6 mm
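
    The Voigt-model dispersion relation that underlies SDUV is c_s(ω) = sqrt(2(μ₁² + ω²μ₂²) / (ρ(μ₁ + sqrt(μ₁² + ω²μ₂²)))). The sketch below generates synthetic phase velocities at a few harmonics and inverts them for μ₁ and μ₂ by least squares; the density, frequencies, and noise level are assumptions, and this is not the authors' error-prediction model.

```python
import numpy as np
from scipy.optimize import curve_fit

RHO = 1000.0  # assumed tissue density, kg/m^3

def voigt_speed(omega, mu1, mu2):
    """Shear wave phase velocity for a Voigt material (elasticity mu1 Pa, viscosity mu2 Pa*s)."""
    m = np.sqrt(mu1**2 + (omega * mu2) ** 2)
    return np.sqrt(2.0 * (mu1**2 + (omega * mu2) ** 2) / (RHO * (mu1 + m)))

# Synthetic "measured" dispersion at a few SDUV harmonics, with a little noise
freqs = np.array([100.0, 200.0, 300.0, 400.0, 500.0])   # Hz
omega = 2.0 * np.pi * freqs
rng = np.random.default_rng(3)
c_meas = voigt_speed(omega, 4000.0, 2.0) + 0.02 * rng.standard_normal(freqs.size)

# Least-squares inversion for (mu1, mu2) from the measured phase velocities
(mu1_est, mu2_est), _ = curve_fit(voigt_speed, omega, c_meas, p0=(1000.0, 1.0))
print(mu1_est, mu2_est)
```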

  6. A combined approach to the estimation of statistical error of the direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Plotnikov, M. Yu.; Shkarupa, E. V.

    2015-11-01

Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables for the calculation of macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computations, and it is applicable for any degree of probabilistic dependence of sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested using the classical Fourier problem and the problem of supersonic flow of rarefied gas through a permeable obstacle.

  7. A systematic approach of tracking and reporting medication errors at a tertiary care university hospital, Karachi, Pakistan

    PubMed Central

    Khowaja, Khurshid; Nizar, Rozmin; Merchant, Rashida J; Dias, Jacqueline; Bustamante-Gavino, Irma; Malik, Amina

    2008-01-01

Introduction: Administering medication is one of the high-risk areas for any health professional. It is a multidisciplinary process, which begins with the doctor’s prescription, followed by review and provision by a pharmacist, and ends with preparation and administration by a nurse. Several studies have highlighted a high medication incident rate at several healthcare institutions. Methods: Our study design was exploratory and evaluative and used methodological triangulation. The sample was of two types. First, a convenience sample of 1000 medication dosages was used to estimate the medication error rate (95% CI). We took another sample from subjects involved in medication usage processes such as physicians, nurses, pharmacists, and patients. Two sets of instruments were designed via extensive literature review: a medication tracking error form and a focus group interview questionnaire. Results: Our study findings revealed 100% compliance with a computerized physician order entry (CPOE) system by physicians, nurses, and pharmacists. The overall error rate was 5.5%, and pharmacists contributed the highest error rate (2.6%), followed by nurses (1.1%) and physicians (1%). Major areas for improvement in error rates were identified: delay in medication delivery, lab results reviewed electronically before prescription, dispensing, and administration. PMID:19209247

  8. Errors in Expected Human Losses Due to Incorrect Seismic Hazard Estimates

    NASA Astrophysics Data System (ADS)

    Wyss, M.; Nekrasova, A.; Kossobokov, V. G.

    2011-12-01

The probability of strong ground motion is presented in seismic hazard maps, in which peak ground accelerations (PGA) with 10% probability of exceedance in 50 years are shown by color codes. It has become evident that these maps do not correctly give the seismic hazard. On the seismic hazard map of Japan, the epicenters of the recent large earthquakes are located in the regions of relatively low hazard. The errors of the GSHAP maps have been measured by the difference between observed and expected intensities due to large earthquakes. Here, we estimate how the errors in seismic hazard estimates propagate into errors in estimating the potential fatalities and affected population. We calculated the numbers of fatalities that would have to be expected in the regions of the nine earthquakes with more than 1,000 fatalities during the last 10 years with relatively reliable estimates of fatalities, assuming a magnitude which generates as a maximum intensity the one given by the GSHAP maps. This value is the number of fatalities to be exceeded with probability of 10% during 50 years. In most regions of devastating earthquakes, there are no instruments to measure ground accelerations. Therefore, we converted the PGA expected as a likely maximum based on the GSHAP maps to intensity. The magnitude of the earthquake that would cause the intensity expected by GSHAP as a likely maximum was calculated by M(GSHAP) = (I₀ + 1.5)/1.5. The numbers of fatalities, which were expected, based on earthquakes with M(GSHAP), were calculated using the loss estimating program QLARM. We calibrated this tool for each case by calculating the theoretical damage and numbers of fatalities (Festim) for the disastrous test earthquakes, generating a match with the observed numbers of fatalities (Fobs = Festim) by adjusting the attenuation relationship within the bounds of commonly observed laws. Calculating the numbers of fatalities expected for the earthquakes with M(GSHAP) will thus yield results that
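
    The magnitude conversion quoted in the abstract is a one-liner, shown below; the intensity value fed into it (here VIII) is an arbitrary example, and the preceding PGA-to-intensity conversion is not included because the abstract does not specify it.

```python
def magnitude_from_intensity(i0):
    """Magnitude whose maximum expected intensity equals I0 (formula from the abstract)."""
    return (i0 + 1.5) / 1.5

# Example: a region where the GSHAP PGA converts to a maximum intensity of VIII (I0 = 8)
print(magnitude_from_intensity(8.0))   # -> about M 6.3
```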

  9. Galaxy Cluster Shapes and Systematic Errors in the Hubble Constant as Determined by the Sunyaev-Zel'dovich Effect

    NASA Technical Reports Server (NTRS)

    Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.

    1998-01-01

Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H₀, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of H₀, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.

  10. Avoiding a Systematic Error in Assessing Fat Graft Survival in the Breast with Repeated Magnetic Resonance Imaging

    PubMed Central

    Herly, Mikkel; Müller, Felix C.; Elberg, Jens J.; Kølle, Stig-Frederik T.; Fischer-Nielsen, Anne; Thomsen, Carsten; Drzewiecki, Krzysztof T.

    2016-01-01

    Summary: Several techniques for measuring breast volume (BV) are based on examining the breast on magnetic resonance imaging. However, when techniques designed to measure total BV are used to quantify BV changes, for example, after fat grafting, a systematic error is introduced because BV changes lead to contour alterations of the breast. The volume of the altered breast includes not only the injected volume but also tissue previously surrounding the breast. Therefore, the quantitative difference in BV before and after augmentation will differ from the injected volume. Here, we present a new technique to measure BV changes that compensates for this systematic error by defining the boundaries of the breast to immovable osseous pointers. This approach avoids the misinterpretation of tissue included within the expanded boundaries as graft tissue. This new method of analysis may be a reliable tool for assessing BV changes to determine fat graft retention and may be useful for evaluating and comparing available surgical techniques for breast augmentation and reconstruction using fat grafting. PMID:27757341

  11. Diagnostic errors in older patients: a systematic review of incidence and potential causes in seven prevalent diseases

    PubMed Central

    Skinner, Thomas R; Scott, Ian A; Martin, Jennifer H

    2016-01-01

    Background Misdiagnosis, either over- or underdiagnosis, exposes older patients to increased risk of inappropriate or omitted investigations and treatments, psychological distress, and financial burden. Objective To evaluate the frequency and nature of diagnostic errors in 16 conditions prevalent in older patients by undertaking a systematic literature review. Data sources and study selection Cohort studies, cross-sectional studies, or systematic reviews of such studies published in Medline between September 1993 and May 2014 were searched using key search terms of “diagnostic error”, “misdiagnosis”, “accuracy”, “validity”, or “diagnosis” and terms relating to each disease. Data synthesis A total of 938 articles were retrieved. Diagnostic error rates of >10% for both over- and underdiagnosis were seen in chronic obstructive pulmonary disease, dementia, Parkinson’s disease, heart failure, stroke/transient ischemic attack, and acute myocardial infarction. Diabetes was overdiagnosed in <5% of cases. Conclusion Over- and underdiagnosis are common in older patients. Explanations for over-diagnosis include subjective diagnostic criteria and the use of criteria not validated in older patients. Underdiagnosis was associated with long preclinical phases of disease or lack of sensitive diagnostic criteria. Factors that predispose to misdiagnosis in older patients must be emphasized in education and clinical guidelines. PMID:27284262

  12. Height Estimation and Error Assessment of Inland Water Level Time Series calculated by a Kalman Filter Approach using Multi-Mission Satellite Altimetry

    NASA Astrophysics Data System (ADS)

    Schwatke, Christian; Dettmering, Denise; Boergens, Eva

    2015-04-01

    Originally designed for open ocean applications, satellite radar altimetry can also contribute promising results over inland waters. Its measurements help to understand the water cycle of the Earth system and make altimetry a very useful instrument for hydrology. In this paper, we present our methodology for estimating water level time series over lakes, rivers, reservoirs, and wetlands. Furthermore, the error estimation of the resulting water level time series is demonstrated. For computing the water level time series, multi-mission satellite altimetry data are used. The estimation is based on altimeter data from Topex, Jason-1, Jason-2, Geosat, IceSAT, GFO, ERS-2, Envisat, Cryosat, HY-2A, and Saral/Altika - depending on the location of the water body. Depending on the extent of the investigated water body, 1 Hz, high-frequency, or retracked altimeter measurements can be used. Classification methods such as Support Vector Machine (SVM) and Support Vector Regression (SVR) are applied for the classification of altimeter waveforms and for rejecting outliers. For estimating the water levels we use a Kalman filter approach applied to the grid nodes of a hexagonal grid covering the water body of interest. After applying an error limit on the resulting water level heights of each grid node, a weighted average water level per point of time is derived referring to one reference location. For the estimation of water level height accuracies, at first, the formal errors are computed by applying a full error propagation within the Kalman filtering. Hereby, the precision of the input measurements is introduced by using the standard deviation of the water level height along the altimeter track. In addition to the resulting formal errors of water level heights, uncertainties of the applied geophysical corrections (e.g. wet troposphere, ionosphere, etc.) and systematic error effects are taken into account to achieve more realistic error estimates. For validation of the time series, we
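
    A minimal sketch of the per-node Kalman filtering idea described above, assuming a simple random-walk water-level model and hypothetical along-track standard deviations as measurement noise (variable names and numbers are illustrative, not taken from the processing chain in the abstract):

    import numpy as np

    def kalman_water_level(obs, obs_std, q=0.05**2):
        """1D random-walk Kalman filter for a water-level time series.

        obs     : array of water-level heights (m), one per epoch
        obs_std : per-epoch measurement std (m), e.g. along-track scatter
        q       : process noise variance (m^2) of the random-walk model
        Returns filtered heights and their formal standard errors.
        """
        x, p = obs[0], obs_std[0] ** 2          # initialise with the first observation
        heights, errors = [], []
        for z, s in zip(obs, obs_std):
            p = p + q                            # predict: random walk
            k = p / (p + s ** 2)                 # Kalman gain
            x = x + k * (z - x)                  # update with the new altimeter height
            p = (1.0 - k) * p                    # formal error propagation
            heights.append(x)
            errors.append(np.sqrt(p))
        return np.array(heights), np.array(errors)

    # toy usage with synthetic data
    rng = np.random.default_rng(0)
    truth = 102.0 + 0.3 * np.sin(np.linspace(0, 4 * np.pi, 60))
    std = rng.uniform(0.05, 0.25, size=60)
    obs = truth + rng.normal(0.0, std)
    heights, formal_errors = kalman_water_level(obs, std)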

  13. Estimation of the extrapolation error in the calibration of type S thermocouples

    NASA Astrophysics Data System (ADS)

    Giorgio, P.; Garrity, K. M.; Rebagliati, M. Jiménez; García Skabar, J.

    2013-09-01

    Measurement results from the calibration performed at NIST of ten new type S thermocouples have been analyzed to estimate the extrapolation error. The thermocouples were calibrated at the fixed points (FPs) of Zn, Al, Ag and Au, and calibration curves were calculated using different numbers of FPs. It was found for these thermocouples that the absolute value of the extrapolation error, evaluated by measurement at the Au freezing-point temperature, is at most 0.10 °C when the fixed points of Zn, Al and Ag are used to calculate the calibration curve, and at most 0.27 °C when only the fixed points of Zn and Al are used. It is also shown that the absolute value of the extrapolation error, evaluated by measurement at the Ag freezing-point temperature, is at most 0.25 °C when the fixed points of Zn and Al are used to calculate the calibration curve. This study is oriented to help those labs that lack a direct mechanism to achieve a high temperature calibration. It supports, up to 1064 °C, the application of a procedure similar to that used by Burns and Scroger in NIST SP-250-35 for calibrating a new type S thermocouple. The uncertainty amounts to a few tenths of a degree Celsius.
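
    The extrapolation error can be illustrated with a small sketch: fit a low-order calibration (deviation) polynomial to a subset of fixed points and compare the extrapolated value at a higher fixed point with its measured value. The deviation values below are hypothetical placeholders, not the NIST data, and the result is left in EMF units (the conversion to °C via the Seebeck coefficient is omitted):

    import numpy as np

    # ITS-90 freezing-point temperatures (degrees C)
    T = {"Zn": 419.527, "Al": 660.323, "Ag": 961.78, "Au": 1064.18}

    # Hypothetical deviations (uV) of measured EMF from the reference function
    dev = {"Zn": 1.8, "Al": 2.6, "Ag": 3.1, "Au": 3.3}

    def extrapolation_error(fit_points, target="Au", degree=2):
        """Fit a deviation polynomial on `fit_points` and return the
        difference between the extrapolated and measured deviation at `target`."""
        t = np.array([T[p] for p in fit_points])
        d = np.array([dev[p] for p in fit_points])
        degree = min(degree, len(fit_points) - 1)   # avoid over-parameterisation
        coeff = np.polyfit(t, d, degree)
        return np.polyval(coeff, T[target]) - dev[target]

    print(extrapolation_error(["Zn", "Al", "Ag"], "Au"))  # 3-FP calibration curve
    print(extrapolation_error(["Zn", "Al"], "Au"))        # 2-FP calibration curve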

  14. Complex phase error and motion estimation in synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Soumekh, M.; Yang, H.

    1991-06-01

    Attention is given to a SAR wave equation-based system model that accurately represents the interaction of the impinging radar signal with the target to be imaged. The model is used to estimate the complex phase error across the synthesized aperture from the measured corrupted SAR data by combining the two wave equation models governing the collected SAR data at two temporal frequencies of the radar signal. The SAR system model shows that the motion of an object in a static scene results in coupled Doppler shifts in both the temporal frequency domain and the spatial frequency domain of the synthetic aperture. The velocity of the moving object is estimated through these two Doppler shifts. It is shown that once the dynamic target's velocity is known, its reconstruction can be formulated via a squint-mode SAR geometry with parameters that depend upon the dynamic target's velocity.

  15. Geomechanical Analysis with Rigorous Error Estimates for a Double-Porosity Reservoir Model

    SciTech Connect

    Berryman, J G

    2005-04-11

    A model of random polycrystals of porous laminates is introduced to provide a means for studying geomechanical properties of double-porosity reservoirs. Calculations on the resulting earth reservoir model can proceed semi-analytically for studies of either the poroelastic or transport coefficients. Rigorous bounds of the Hashin-Shtrikman type provide estimates of overall bulk and shear moduli, and thereby also provide rigorous error estimates for geomechanical constants obtained from up-scaling based on a self-consistent effective medium method. The influence of hidden (or presumed unknown) microstructure on the final results can then be evaluated quantitatively. Detailed descriptions of the use of the model and some numerical examples showing typical results for the double-porosity poroelastic coefficients of a heterogeneous reservoir are presented.

  16. Prediction and standard error estimation for a finite universe total when a stratum is not sampled

    SciTech Connect

    Wright, T.

    1994-01-01

    In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation, are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.

  17. Trends and Correlation Estimation in Climate Sciences: Effects of Timescale Errors

    NASA Astrophysics Data System (ADS)

    Mudelsee, M.; Bermejo, M. A.; Bickert, T.; Chirila, D.; Fohlmeister, J.; Köhler, P.; Lohmann, G.; Olafsdottir, K.; Scholz, D.

    2012-12-01

    Trend describes time-dependence in the first moment of a stochastic process, and correlation measures the linear relation between two random variables. Accurately estimating the trend and correlation, including uncertainties, from climate time series data in the uni- and bivariate domain, respectively, allows first-order insights into the geophysical process that generated the data. Timescale errors, ubiquitous in paleoclimatology, where archives are sampled for proxy measurements and dated, pose a problem for the estimation. Statistical science and the various applied research fields, including geophysics, have almost completely ignored this problem due to its near-intractability in theory. However, computational adaptations or replacements of traditional error formulas have become technically feasible. This contribution gives a short overview of such an adaptation package: bootstrap resampling combined with parametric timescale simulation. We study linear regression, parametric change-point models and nonparametric smoothing for trend estimation. We introduce pairwise moving block bootstrap resampling for correlation estimation. Both methods share robustness against autocorrelation and non-Gaussian distributional shape. We briefly touch on computing-intensive calibration of bootstrap confidence intervals and consider options to parallelize the related computer code. The following examples serve not only to illustrate the methods but also tell their own climate stories: (1) the search for climate drivers of the Agulhas Current on recent timescales, (2) the comparison of three stalagmite-based proxy series of regional, western German climate over the later part of the Holocene, and (3) trends and transitions in benthic oxygen isotope time series from the Cenozoic. Financial support by Deutsche Forschungsgemeinschaft (FOR 668, FOR 1070, MU 1595/4-1) and the European Commission (MC ITN 238512, MC ITN 289447) is acknowledged.
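
    A minimal sketch of pairwise moving block bootstrap resampling for a correlation estimate, assuming the two series are already on a common timescale (the full method summarized above additionally simulates timescale errors, which is omitted here):

    import numpy as np

    def pairwise_mbb_corr(x, y, block_len=10, n_boot=2000, seed=0):
        """Pairwise moving block bootstrap for the Pearson correlation.

        Pairs (x_i, y_i) are resampled in contiguous blocks so that the
        autocorrelation within each series is approximately preserved.
        Returns the point estimate and a percentile 90% confidence interval.
        """
        rng = np.random.default_rng(seed)
        n = len(x)
        n_blocks = int(np.ceil(n / block_len))
        boot = np.empty(n_boot)
        for b in range(n_boot):
            starts = rng.integers(0, n - block_len + 1, size=n_blocks)
            idx = np.concatenate([np.arange(s, s + block_len) for s in starts])[:n]
            boot[b] = np.corrcoef(x[idx], y[idx])[0, 1]
        r_hat = np.corrcoef(x, y)[0, 1]
        lo, hi = np.percentile(boot, [5, 95])
        return r_hat, (lo, hi)

    # toy usage with two autocorrelated series sharing a common driver
    rng = np.random.default_rng(1)
    driver = np.cumsum(rng.normal(size=300)) * 0.1
    x = driver + rng.normal(scale=0.5, size=300)
    y = 0.8 * driver + rng.normal(scale=0.5, size=300)
    print(pairwise_mbb_corr(x, y))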

  18. Laboratory measurement error in external dose estimates and its effects on dose-response analyses of Hanford worker mortality data

    SciTech Connect

    Gilbert, E.S.; Fix, J.J.

    1996-08-01

    This report addresses laboratory measurement error in estimates of external doses obtained from personnel dosimeters, and investigates the effects of these errors on linear dose-response analyses of data from epidemiologic studies of nuclear workers. These errors have the distinguishing feature that they are independent across time and across workers. Although the calculations made for this report were based on Hanford data, the overall conclusions are likely to be relevant for other epidemiologic studies of workers exposed to external radiation.

  19. Integration of rain gauge measurement errors with the overall rainfall uncertainty estimation using kriging methods

    NASA Astrophysics Data System (ADS)

    Cecinati, Francesca; Moreno Ródenas, Antonio Manuel; Rico-Ramirez, Miguel Angel; ten Veldhuis, Marie-claire; Han, Dawei

    2016-04-01

    In many research studies rain gauges are used as a reference point measurement for rainfall, because they can reach very good accuracy, especially compared to radar or microwave links, and their use is very widespread. In some applications rain gauge uncertainty is assumed to be small enough to be neglected. This can be done when rain gauges are accurate and their data are correctly managed. Unfortunately, in many operational networks the importance of accurate rainfall data and of data quality control can be underestimated; budget and best-practice knowledge can be limiting factors in correct rain gauge network management. In these cases, the accuracy of rain gauges can drop drastically and the uncertainty associated with the measurements cannot be neglected. This work proposes an approach based on three different kriging methods to integrate rain gauge measurement errors into the overall rainfall uncertainty estimation. In particular, rainfall products of different complexity are derived through (1) block kriging on a single rain gauge, (2) ordinary kriging on a network of different rain gauges, and (3) kriging with external drift to integrate all the available rain gauges with radar rainfall information. The study area is the Eindhoven catchment, contributing to the river Dommel, in the southern part of the Netherlands. The area, 590 km2, is covered by high-quality rain gauge measurements by the Royal Netherlands Meteorological Institute (KNMI), which has one rain gauge inside the study area and six around it, and by lower-quality rain gauge measurements by the Dommel Water Board and by the Eindhoven Municipality (six rain gauges in total). The integration of the rain gauge measurement error is accomplished in all cases by increasing the nugget of the semivariogram proportionally to the estimated error. Using different semivariogram models for the different networks allows for the separate characterisation of higher- and lower-quality rain gauges. For the kriging with
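
    A minimal ordinary-kriging sketch illustrating the idea of folding gauge measurement error into the interpolation: each gauge's estimated error variance is added to the diagonal (nugget) term of the kriging system, so less reliable gauges are down-weighted. The variogram parameters and error variances below are illustrative, not those of the study.

    import numpy as np

    def exp_cov(h, sill=1.0, corr_len=20.0):
        """Exponential covariance model (sill minus an exponential variogram)."""
        return sill * np.exp(-h / corr_len)

    def ordinary_kriging(xy, z, err_var, x0, sill=1.0, corr_len=20.0):
        """Ordinary kriging at point x0 with per-gauge error variances
        added to the diagonal (a per-gauge nugget)."""
        n = len(z)
        d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        C = exp_cov(d, sill, corr_len) + np.diag(err_var)     # gauge-gauge covariances
        c0 = exp_cov(np.linalg.norm(xy - x0, axis=1), sill, corr_len)
        # augment with the unbiasedness (Lagrange) constraint
        A = np.block([[C, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
        b = np.append(c0, 1.0)
        w = np.linalg.solve(A, b)
        est = w[:n] @ z
        var = sill - w[:n] @ c0 - w[n]                        # kriging variance
        return est, var

    # toy usage: three reliable gauges and one noisy gauge
    xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    z = np.array([5.2, 4.8, 6.1, 5.0])            # rainfall (mm)
    err_var = np.array([0.01, 0.01, 0.01, 0.5])   # last gauge is low quality
    print(ordinary_kriging(xy, z, err_var, np.array([5.0, 5.0])))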

  20. Systematization of problems on ball estimates of a convex compactum

    NASA Astrophysics Data System (ADS)

    Dudov, S. I.

    2015-09-01

    We consider a class of finite-dimensional problems on the estimation of a convex compactum by a ball of an arbitrary norm in the form of extremal problems whose goal function is expressed via the function of the distance to the farthest point of the compactum and the function of the distance to the nearest point of the compactum or its complement. Special attention is devoted to the problem of estimating (approximating) a convex compactum by a ball of fixed radius in the Hausdorff metric. It is proved that this problem plays the role of the canonical problem: solutions of any problem in the class under consideration can be expressed via solutions of this problem for certain values of the radius. Based on studying and using the properties of solutions of this canonical problem, we obtain ranges of values of the radius in which the canonical problem expresses solutions of the problems on inscribed and circumscribed balls, the problem of uniform estimate by a ball in the Hausdorff metric, the problem of asphericity of a convex body, the problems of spherical shells of the least thickness and of the least volume for the boundary of a convex body. This makes it possible to arrange the problems in increasing order of the corresponding values of the radius. Bibliography: 34 titles.

  1. Assessment and Calibration of Ultrasonic Measurement Errors in Estimating Weathering Index of Stone Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Lee, Y.; Keehm, Y.

    2011-12-01

    Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important to plan conservation and restoration. Ultrasonic measurement is one of the most commonly used techniques to evaluate the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically a portable ultrasonic device, PUNDIT, with exponential sensors is used. However, there are many factors that cause errors in measurements, such as operators, sensor layouts or measurement directions. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and sensor directions (anisotropy). Regarding operator bias, we found no significant differences by the operator's sex, while the pressure an operator exerts can create larger errors in measurements. Calibrating with a standard sample for each operator is essential in this case. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since the direct measurement is difficult in most cases) gives lower velocity than the real one. We found that the correction coefficient is slightly different for different types of rock: 1.50 for granite and sandstone and 1.46 for marble. From the sensor directions, we found that many rocks have slight anisotropy in their ultrasonic velocity measurement, though they are considered isotropic on a macroscopic scale. Thus averaging four directional measurements (0°, 45°, 90°, 135°) gives much smaller errors (the variance is 2-3 times smaller). In conclusion, we quantitatively report the errors in ultrasonic measurement of stone cultural properties from various sources and suggest the amount of correction and procedures to calibrate the measurements. Acknowledgement: This study, which forms a part of the project, has been achieved with the support of national R&D project, which has been hosted by
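
    A small sketch of the correction-and-averaging procedure described above, assuming four directional indirect-measurement velocities and the rock-type coefficients quoted in the abstract (the readings below are hypothetical):

    import numpy as np

    CORRECTION = {"granite": 1.50, "sandstone": 1.50, "marble": 1.46}

    def corrected_velocity(indirect_velocities, rock_type):
        """Average four directional indirect velocities (0, 45, 90, 135 deg)
        and scale by the rock-type coefficient to approximate the direct value."""
        v = np.asarray(indirect_velocities, dtype=float)
        return CORRECTION[rock_type] * v.mean()

    # hypothetical indirect readings in m/s at 0, 45, 90 and 135 degrees
    print(corrected_velocity([2450.0, 2510.0, 2480.0, 2395.0], "granite"))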

  2. Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.

    PubMed

    Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R

    2002-06-01

    We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted.
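
    A toy illustration of the variance partitioning at issue: if half of the observed recruitment variability is measurement error, only the remaining process variance should drive the simulated risk of an 80% decline. The simulation below is a stylized lognormal sketch with independent years, not the assessment model used in the study.

    import numpy as np

    def decline_risk(total_cv, meas_err_frac, years=15, n_sim=100_000, seed=5):
        """Probability that recruitment falls >= 80% below equilibrium at least
        once in `years` years, when a fraction of the observed variance is
        treated as measurement error and removed from the process variance."""
        rng = np.random.default_rng(seed)
        sigma_total = np.sqrt(np.log(1.0 + total_cv ** 2))      # lognormal sigma
        sigma_proc = sigma_total * np.sqrt(1.0 - meas_err_frac)
        eps = rng.normal(0.0, sigma_proc, size=(n_sim, years))
        rel_recruitment = np.exp(eps)                            # relative to equilibrium
        return np.mean((rel_recruitment < 0.2).any(axis=1))

    print(decline_risk(total_cv=0.9, meas_err_frac=0.0))   # all variability natural
    print(decline_risk(total_cv=0.9, meas_err_frac=0.5))   # 50% is measurement error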

  3. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    SciTech Connect

    Wilkening, Jon

    2008-12-10

    Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds equation may be thought of as the zeroth order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio $\varepsilon$ of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is $O(\varepsilon^{2k+2})$ and h enters into the error bound only through its first and third inverse moments $\int_0^1 h(x)^{-m}\,dx$, $m = 1, 3$, and via the max norms $\left\| \tfrac{1}{\ell!}\, h^{\ell-1} \partial_x^{\ell} h \right\|_{\infty}$, $1 \le \ell \le 2k+2$. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
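
    The geometry-dependent quantities entering the bound can be tabulated numerically for a given gap profile h(x). The sketch below computes the inverse moments and scaled derivative max-norms for a smooth periodic profile; it evaluates only the ingredients of the bound, not the bound's constants, and the profile is an arbitrary example.

    import numpy as np
    from math import factorial

    def bound_ingredients(h, dh_list, x, k=1):
        """Inverse moments int_0^1 h^-m dx (m = 1, 3) and max norms
        || h^(l-1) d^l h / dx^l / l! ||_inf for 1 <= l <= 2k+2.

        h       : samples of h(x) on the grid x in [0, 1)
        dh_list : sampled derivatives [h', h'', ...] up to order 2k+2
        """
        moments = {m: np.trapz(h ** (-m), x) for m in (1, 3)}
        norms = {l: np.max(np.abs(h ** (l - 1) * dh_list[l - 1])) / factorial(l)
                 for l in range(1, 2 * k + 3)}
        return moments, norms

    # smooth periodic profile h(x) = 1 + 0.3 cos(2 pi x); derivatives are analytic
    x = np.linspace(0.0, 1.0, 2001)
    h = 1.0 + 0.3 * np.cos(2 * np.pi * x)
    dh = []
    for l in range(1, 5):                        # derivative orders 1..4 (k = 1)
        amp = 0.3 * (2 * np.pi) ** l
        trig = np.cos if l % 2 == 0 else np.sin
        sign = -1.0 if l % 4 in (1, 2) else 1.0
        dh.append(sign * amp * trig(2 * np.pi * x))
    print(bound_ingredients(h, dh, x, k=1))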

  4. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.

  5. Uncertainty estimation in form error evaluation of freeform surfaces for precision metrology

    NASA Astrophysics Data System (ADS)

    Zhang, Xiangchao; Xiao, Hong; Zhang, Hao; He, Xiaoying; Xu, Min

    2016-01-01

    Freeform surfaces are widely used in precision components to realize novel functionalities. In order to evaluate the form qualities of the manufactured freeform parts, surface matching/fitting is required. The uncertainty of the obtained form deviations needs to be estimated to assess the reliability of form error evaluation. The GUM approach is extensively adopted for uncertainty assessment in precision metrology, but it is not suited for assessing the nonlinear matching/fitting problems of freeform models. In this paper a Monte-Carlo method is developed to estimate the uncertainty of the fitted position, shape and form error metrics. Based on the correlation analysis, the effects of objective functions in numerical optimization, noise amplitudes in measurement, shapes of freeform surfaces and so on are determined. Then the significant factors dominating the reliability of the fitted results can be identified. Hence the matching/fitting procedures can be arranged appropriately to reduce the uncertainty of the evaluation results and improve the reliability of freeform surface characterization.
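
    A toy version of the Monte-Carlo idea, assuming a simple least-squares plane fit in place of a freeform model: measurement noise is repeatedly added to the sampled points, the fit is redone, and the spread of the resulting peak-to-valley form error is taken as its uncertainty.

    import numpy as np

    def fit_plane_residuals(pts):
        """Least-squares plane z = a*x + b*y + c; return residuals (form deviations)."""
        A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
        return pts[:, 2] - A @ coef

    def mc_form_error_uncertainty(pts, noise_std, n_mc=500, seed=0):
        """Monte-Carlo uncertainty of the peak-to-valley (PV) form error metric."""
        rng = np.random.default_rng(seed)
        pv = np.empty(n_mc)
        for i in range(n_mc):
            noisy = pts.copy()
            noisy[:, 2] += rng.normal(0.0, noise_std, size=len(pts))
            r = fit_plane_residuals(noisy)
            pv[i] = r.max() - r.min()
        return pv.mean(), pv.std()          # PV estimate and its 1-sigma uncertainty

    # toy nominally-flat surface sampled on a grid, with a slight bow (mm)
    gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
    z = 0.002 * (gx - 0.5) ** 2
    pts = np.column_stack([gx.ravel(), gy.ravel(), z.ravel()])
    print(mc_form_error_uncertainty(pts, noise_std=0.0005))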

  6. Joint Estimation of Contamination, Error and Demography for Nuclear DNA from Ancient Humans.

    PubMed

    Racimo, Fernando; Renaud, Gabriel; Slatkin, Montgomery

    2016-04-01

    When sequencing an ancient DNA sample from a hominin fossil, DNA from present-day humans involved in excavation and extraction will be sequenced along with the endogenous material. This type of contamination is problematic for downstream analyses as it will introduce a bias towards the population of the contaminating individual(s). Quantifying the extent of contamination is a crucial step as it allows researchers to account for possible biases that may arise in downstream genetic analyses. Here, we present an MCMC algorithm to co-estimate the contamination rate, sequencing error rate and demographic parameters-including drift times and admixture rates-for an ancient nuclear genome obtained from human remains, when the putative contaminating DNA comes from present-day humans. We assume we have a large panel representing the putative contaminant population (e.g. European, East Asian or African). The method is implemented in a C++ program called 'Demographic Inference with Contamination and Error' (DICE). We applied it to simulations and genome data from ancient Neanderthals and modern humans. With reasonable levels of genome sequence coverage (>3X), we find we can recover accurate estimates of all these parameters, even when the contamination rate is as high as 50%.

  7. Joint Estimation of Contamination, Error and Demography for Nuclear DNA from Ancient Humans.

    PubMed

    Racimo, Fernando; Renaud, Gabriel; Slatkin, Montgomery

    2016-04-01

    When sequencing an ancient DNA sample from a hominin fossil, DNA from present-day humans involved in excavation and extraction will be sequenced along with the endogenous material. This type of contamination is problematic for downstream analyses as it will introduce a bias towards the population of the contaminating individual(s). Quantifying the extent of contamination is a crucial step as it allows researchers to account for possible biases that may arise in downstream genetic analyses. Here, we present an MCMC algorithm to co-estimate the contamination rate, sequencing error rate and demographic parameters-including drift times and admixture rates-for an ancient nuclear genome obtained from human remains, when the putative contaminating DNA comes from present-day humans. We assume we have a large panel representing the putative contaminant population (e.g. European, East Asian or African). The method is implemented in a C++ program called 'Demographic Inference with Contamination and Error' (DICE). We applied it to simulations and genome data from ancient Neanderthals and modern humans. With reasonable levels of genome sequence coverage (>3X), we find we can recover accurate estimates of all these parameters, even when the contamination rate is as high as 50%. PMID:27049965

  8. Diagnostic and therapeutic errors in trigeminal autonomic cephalalgias and hemicrania continua: a systematic review.

    PubMed

    Viana, Michele; Tassorelli, Cristina; Allena, Marta; Nappi, Giuseppe; Sjaastad, Ottar; Antonaci, Fabio

    2013-02-18

    Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients.

  9. The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.

    PubMed

    Nash, Ulrik W

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

  10. The Curious Anomaly of Skewed Judgment Distributions and Systematic Error in the Wisdom of Crowds

    PubMed Central

    Nash, Ulrik W.

    2014-01-01

    Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078

  11. Standard error in the Jacobson and Truax Reliable Change Index: the "classical approach" leads to poor estimates.

    PubMed

    Temkin, Nancy R

    2004-10-01

    Different authors have used different estimates of variability in the denominator of the Reliable Change Index (RCI). Maassen attempts to clarify some of the differences and the assumptions underlying them. In particular he compares the 'classical' approach using an estimate S(Ed) supposedly based on measurement error alone with an estimate S(Diff) based on the variability of observed differences in a population that should have no true change. Maassen concludes that not only is S(Ed) based on classical theory, but it properly estimates variability due to measurement error and practice effect, while S(Diff) overestimates variability by accounting twice for the variability due to practice. Simulations show Maassen to be wrong on both accounts. With an error rate nominally set to 10%, RCI estimates using S(Diff) wrongly declare change in 10.4% and 9.4% of simulated cases without true change, while estimates using S(Ed) wrongly declare change in 17.5% and 12.3% of the simulated cases (p < .000000001 and p < .008, respectively). In the simulation that separates measurement error and practice effects, S(Ed) estimates the variability of change due to measurement error to be .34, when the true variability due to measurement error was .014. Neuropsychologists should not use S(Ed) in the denominator of the RCI. PMID:15637781
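
    For reference, the two denominators under discussion can be written out directly. A sketch using the conventional expressions (S(Ed) derived from test-retest reliability, S(Diff) from the observed spread of difference scores in a no-change sample); the numbers are hypothetical.

    import numpy as np

    def rci(x1, x2, denom):
        """Reliable Change Index for a retest score x2 against baseline x1."""
        return (x2 - x1) / denom

    def s_ed(sd_baseline, reliability):
        """'Classical' denominator: sqrt(2) times the standard error of measurement."""
        sem = sd_baseline * np.sqrt(1.0 - reliability)
        return np.sqrt(2.0) * sem

    def s_diff(diff_scores):
        """Empirical denominator: SD of observed difference scores in a sample
        assumed to have no true change (captures error plus practice effects)."""
        return np.std(diff_scores, ddof=1)

    # hypothetical example
    sd1, r_xx = 12.0, 0.85
    diffs = np.random.default_rng(2).normal(loc=3.0, scale=7.0, size=200)
    print(rci(95.0, 110.0, s_ed(sd1, r_xx)))     # RCI with the 'classical' denominator
    print(rci(95.0, 110.0, s_diff(diffs)))       # RCI with the difference-score denominator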

  12. A Comparison of Item Parameter Standard Error Estimation Procedures for Unidimensional and Multidimensional Item Response Theory Modeling

    ERIC Educational Resources Information Center

    Paek, Insu; Cai, Li

    2014-01-01

    The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…

  13. Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new method of metric recovery for minimization of $L_p$-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

  14. Error in estimation of rate and time inferred from the early amniote fossil record and avian molecular clocks.

    PubMed

    van Tuinen, Marcel; Hadly, Elizabeth A

    2004-08-01

    The best reconstructions of the history of life will use both molecular time estimates and fossil data. Errors in molecular rate estimation typically are unaccounted for and no attempts have been made to quantify this uncertainty comprehensively. Here, focus is primarily on fossil calibration error because this error is least well understood and nearly universally disregarded. Our quantification of errors in the synapsid-diapsid calibration illustrates that although some error can derive from geological dating of sedimentary rocks, the absence of good stem fossils makes phylogenetic error the most critical. We therefore propose the use of calibration ages that are based on the first undisputed synapsid and diapsid. This approach yields minimum age estimates and standard errors of 306.1 ± 8.5 MYR for the divergence leading to birds and mammals. Because this upper bound overlaps with the recent use of 310 MYR, we do not support the notion that several metazoan divergence times are significantly overestimated because of serious miscalibration (sensu Lee 1999). However, the propagation of relevant errors reduces the statistical significance of the pre-K-T boundary diversification of many bird lineages despite retaining similar point time estimates. Our results demand renewed investigation into suitable loci and fossil calibrations for constructing evolutionary timescales.

  15. Estimated global incidence of Japanese encephalitis: a systematic review

    PubMed Central

    Campbell, Grant L; Hills, Susan L; Fischer, Marc; Jacobson, Julie A; Hoke, Charles H; Hombach, Joachim M; Marfin, Anthony A; Solomon, Tom; Tsai, Theodore F; Tsu, Vivien D

    2011-01-01

    Abstract Objective To update the estimated global incidence of Japanese encephalitis (JE) using recent data for the purpose of guiding prevention and control efforts. Methods Thirty-two areas endemic for JE in 24 Asian and Western Pacific countries were sorted into 10 incidence groups on the basis of published data and expert opinion. Population-based surveillance studies using laboratory-confirmed cases were sought for each incidence group by a computerized search of the scientific literature. When no eligible studies existed for a particular incidence group, incidence data were extrapolated from related groups. Findings A total of 12 eligible studies representing 7 of 10 incidence groups in 24 JE-endemic countries were identified. Approximately 67 900 JE cases typically occur annually (overall incidence: 1.8 per 100 000), of which only about 10% are reported to the World Health Organization. Approximately 33 900 (50%) of these cases occur in China (excluding Taiwan) and approximately 51 000 (75%) occur in children aged 0–14 years (incidence: 5.4 per 100 000). Approximately 55 000 (81%) cases occur in areas with well established or developing JE vaccination programmes, while approximately 12 900 (19%) occur in areas with minimal or no JE vaccination programmes. Conclusion Recent data allowed us to refine the estimate of the global incidence of JE, which remains substantial despite improvements in vaccination coverage. More and better incidence studies in selected countries, particularly China and India, are needed to further refine these estimates. PMID:22084515

  16. Application of parameter estimation to aircraft stability and control: The output-error approach

    NASA Technical Reports Server (NTRS)

    Maine, Richard E.; Iliff, Kenneth W.

    1986-01-01

    The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.

  17. Errors in the estimation of the variance: implications for multiple-probability fluctuation analysis.

    PubMed

    Saviane, Chiara; Silver, R Angus

    2006-06-15

    Synapses play a crucial role in information processing in the brain. Amplitude fluctuations of synaptic responses can be used to extract information about the mechanisms underlying synaptic transmission and its modulation. In particular, multiple-probability fluctuation analysis can be used to estimate the number of functional release sites, the mean probability of release and the amplitude of the mean quantal response from fits of the relationship between the variance and mean amplitude of postsynaptic responses, recorded at different probabilities. To determine these quantal parameters, calculate their uncertainties and the goodness-of-fit of the model, it is important to weight the contribution of each data point in the fitting procedure. We therefore investigated the errors associated with measuring the variance by determining the best estimators of the variance of the variance and have used simulations of synaptic transmission to test their accuracy and reliability under different experimental conditions. For central synapses, which generally have a low number of release sites, the amplitude distribution of synaptic responses is not normal, thus the use of a theoretical variance of the variance based on the normal assumption is not a good approximation. However, appropriate estimators can be derived for the population and for limited sample sizes using a more general expression that involves higher moments and introducing unbiased estimators based on the h-statistics. Our results are likely to be relevant for various applications of fluctuation analysis when few channels or release sites are present.
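
    A small numerical sketch of the weighting point made above: the variance of the sample variance can be estimated from the sample's own central moments rather than from the normal-theory formula 2s^4/(n-1). The plug-in version below is only illustrative; the paper derives unbiased versions via h-statistics.

    import numpy as np

    def var_of_sample_variance(x):
        """Plug-in estimate of Var(s^2) from sample central moments,
        Var(s^2) ~ (m4 - (n - 3)/(n - 1) * m2^2) / n, valid beyond normality."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        m2 = np.mean((x - x.mean()) ** 2)
        m4 = np.mean((x - x.mean()) ** 4)
        return (m4 - (n - 3.0) / (n - 1.0) * m2 ** 2) / n

    def var_of_sample_variance_normal(x):
        """Normal-theory formula 2 s^4 / (n - 1), inaccurate for skewed amplitudes."""
        n = len(x)
        s2 = np.var(x, ddof=1)
        return 2.0 * s2 ** 2 / (n - 1.0)

    # skewed synaptic-like amplitude sample: the two estimates differ noticeably
    amps = np.random.default_rng(3).gamma(shape=2.0, scale=0.5, size=60)
    print(var_of_sample_variance(amps), var_of_sample_variance_normal(amps))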

  18. Estimation of adequate setup margins and threshold for position errors requiring immediate attention in head and neck cancer radiotherapy based on 2D image guidance

    PubMed Central

    2013-01-01

    Background We estimated sufficient setup margins for head-and-neck cancer (HNC) radiotherapy (RT) when 2D kV images are utilized for routine patient setup verification. As another goal we estimated a threshold for the displacements of the most important bony landmarks related to the target volumes requiring immediate attention. Methods We analyzed 1491 orthogonal x-ray images utilized in RT treatment guidance for 80 HNC patients. We estimated overall setup errors and errors for four subregions to account for patient rotation and deformation: the vertebrae C1-2, C5-7, the occiput bone and the mandible. Setup margins were estimated for two 2D image guidance protocols: i) imaging at the first three fractions and weekly thereafter and ii) daily imaging. Two 2D image matching principles were investigated: i) to the vertebrae in the middle of the planning target volume (PTV) (MID_PTV) and ii) minimizing the maximal position error for the four subregions (MIN_MAX). The threshold for the position errors was calculated with two previously unpublished methods based on van Herk's formula and clinical data by retaining a margin of 5 mm sufficient for each subregion. Results Sufficient setup margins to compensate for the displacements of the subregions were approximately two times larger than those needed to compensate for setup errors of a rigid target. Adequate margins varied from 2.7 mm to 9.6 mm depending on the subregions related to the target, the applied image guidance protocol and early correction of clinically important systematic 3D displacements of the subregions exceeding 4 mm. The MIN_MAX match resulted in smaller margins but caused an overall shift of 2.5 mm for the target center. Margins ≤ 5 mm were sufficient with the MID_PTV match only through application of daily 2D imaging and the threshold of 4 mm to correct systematic displacement of a subregion. Conclusions Adequate setup margins depend remarkably on the subregions related to the target volume. When the systematic 3D
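
    For orientation, the population-based margin recipe that such analyses commonly build on combines the systematic (Sigma) and random (sigma) error components; a sketch with hypothetical per-subregion error values (the paper's own threshold derivation modifies this and is not reproduced here):

    def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
        """Classic population margin recipe: M = 2.5*Sigma + 0.7*sigma (mm)."""
        return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

    # hypothetical per-subregion error SDs (mm) from a setup-error analysis
    subregions = {
        "C1-2":     (1.2, 1.5),
        "C5-7":     (1.8, 2.0),
        "occiput":  (1.0, 1.4),
        "mandible": (2.4, 2.6),
    }
    for name, (sys_sd, rnd_sd) in subregions.items():
        print(f"{name:9s} margin = {van_herk_margin(sys_sd, rnd_sd):.1f} mm")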

  19. A review of sources of systematic errors and uncertainties in observations and simulations at 183 GHz

    NASA Astrophysics Data System (ADS)

    Brogniez, Helene; English, Stephen; Mahfouf, Jean-Francois; Behrendt, Andreas; Berg, Wesley; Boukabara, Sid; Buehler, Stefan Alexander; Chambon, Philippe; Gambacorta, Antonia; Geer, Alan; Ingram, William; Kursinski, E. Robert; Matricardi, Marco; Odintsova, Tatyana A.; Payne, Vivienne H.; Thorne, Peter W.; Tretyakov, Mikhail Yu.; Wang, Junhong

    2016-05-01

    Several recent studies have observed systematic differences between measurements in the 183.31 GHz water vapor line by space-borne sounders and calculations using radiative transfer models, with inputs from either radiosondes (radiosonde observations, RAOBs) or short-range forecasts by numerical weather prediction (NWP) models. This paper discusses all the relevant categories of observation-based or model-based data, quantifies their uncertainties and separates biases that could be common to all causes from those attributable to a particular cause. Reference observations from radiosondes, Global Navigation Satellite System (GNSS) receivers, differential absorption lidar (DIAL) and Raman lidar are thus overviewed. Biases arising from their calibration procedures, NWP models and data assimilation, instrument biases and radiative transfer models (both the models themselves and the underlying spectroscopy) are presented and discussed. Although presently no single process in the comparisons seems capable of explaining the observed structure of bias, recommendations are made in order to better understand the causes.

  20. Systematic identification and correction of annotation errors in the genetic interaction map of Saccharomyces cerevisiae

    PubMed Central

    Atias, Nir; Kupiec, Martin; Sharan, Roded

    2016-01-01

    The yeast mutant collections are a fundamental tool in deciphering genomic organization and function. Over the last decade, they have been used for the systematic exploration of ∼6 000 000 double gene mutants, identifying and cataloging genetic interactions among them. Here we studied the extent to which these data are prone to neighboring gene effects (NGEs), a phenomenon by which the deletion of a gene affects the expression of adjacent genes along the genome. Analyzing ∼90,000 negative genetic interactions observed to date, we found that more than 10% of them are incorrectly annotated due to NGEs. We developed a novel algorithm, GINGER, to identify and correct erroneous interaction annotations. We validated the algorithm using a comparative analysis of interactions from Schizosaccharomyces pombe. We further showed that our predictions are significantly more concordant with diverse biological data compared to their mis-annotated counterparts. Our work uncovered about 9500 new genetic interactions in yeast. PMID:26602688

  1. A Novel, Physics-Based Data Analytics Framework for Reducing Systematic Model Errors

    NASA Astrophysics Data System (ADS)

    Wu, W.; Liu, Y.; Vandenberghe, F. C.; Knievel, J. C.; Hacker, J.

    2015-12-01

    Most climate and weather models exhibit systematic biases, such as underpredicted diurnal temperatures in the WRF (Weather Research and Forecasting) model. General approaches to alleviate the systematic biases include improving model physics and numerics, improving data assimilation, and bias correction through post-processing. In this study, we developed a novel, physics-based data analytics framework in post-processing by taking advantage of ever-growing high-resolution (spatial and temporal) observational and modeling data. In the framework, a spatiotemporal PCA (Principal Component Analysis) is first applied on the observational data to filter out noise and information on scales that a model may not be able to resolve. The filtered observations are then used to establish regression relationships with archived model forecasts in the same spatiotemporal domain. The regressions along with the model forecasts predict the projected observations in the forecasting period. The pre-regression PCA procedure strengthens regressions, and enhances predictive skills. We then combine the projected observations with the past observations to apply PCA iteratively to derive the final forecasts. This post-regression PCA reconstructs variances and scales of information that are lost in the regression. The framework was examined and validated with 24 days of 5-minute observational data and archives from the WRF model at 27 stations near Dugway Proving Ground, Utah. The validation shows significant bias reduction in the diurnal cycle of predicted surface air temperature compared to the direct output from the WRF model. Additionally, unlike other post-processing bias correction schemes, the data analytics framework does not require long-term historic data and model archives. A week or two of the data is enough to take into account changes in weather regimes. The program, written in python, is also computationally efficient.
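
    A compact sketch of the pre-regression filtering step described above, assuming plain NumPy: observations are projected onto their leading principal components, a per-station linear regression maps archived forecasts to the filtered observations, and the regression is then applied to new forecasts. Array shapes, station counts and the number of retained components are illustrative, and the iterative post-regression PCA step is omitted.

    import numpy as np

    def pca_filter(X, n_comp):
        """Project rows of X (samples x stations) onto the leading n_comp PCs."""
        mean = X.mean(axis=0)
        Xc = X - mean
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[:n_comp]                               # leading spatial patterns
        return Xc @ V.T @ V + mean

    def fit_bias_model(forecasts, observations, n_comp=3):
        """Regress PCA-filtered observations on archived forecasts, per station."""
        obs_f = pca_filter(observations, n_comp)
        a = np.empty(forecasts.shape[1])
        b = np.empty(forecasts.shape[1])
        for j in range(forecasts.shape[1]):           # obs ~ a * forecast + b
            a[j], b[j] = np.polyfit(forecasts[:, j], obs_f[:, j], 1)
        return a, b

    def correct(new_forecasts, a, b):
        """Apply the station-wise regression to new model output."""
        return a * new_forecasts + b

    # toy usage: synthetic 'observations' at 27 stations and a biased 'model'
    rng = np.random.default_rng(4)
    truth = 20 + 5 * np.sin(np.linspace(0, 8 * np.pi, 576))[:, None] \
            + rng.normal(0, 0.3, (576, 27))
    fcst = truth - 2.0 + rng.normal(0, 1.0, truth.shape)
    a, b = fit_bias_model(fcst[:400], truth[:400])
    corrected = correct(fcst[400:], a, b)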

  2. An online model correction method based on an inverse problem: Part I—Model error estimation by iteration

    NASA Astrophysics Data System (ADS)

    Xue, Haile; Shen, Xueshun; Chou, Jifan

    2015-10-01

    Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerics and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely by using the past data, which are presumed to represent the imperfection of the NWP model (model error, denoted as ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets associated with the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Then, off-line forecast error corrections were estimated linearly based on the 2-month mean MEs and compared with forecast errors. The estimated error corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast error. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.

  3. Error and bias in size estimates of whale sharks: implications for understanding demography

    PubMed Central

    Sequeira, Ana M. M.; Thums, Michele; Brooks, Kim; Meekan, Mark G.

    2016-01-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of larger individuals that were possibly mature (or close to maturity) that were detected increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species. PMID:27069656

  4. Error and bias in size estimates of whale sharks: implications for understanding demography.

    PubMed

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G

    2016-03-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of larger individuals that were possibly mature (or close to maturity) that were detected increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species. PMID:27069656

  5. Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.

    PubMed

    Korth, Martin

    2013-10-14

    The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pair-wise interatomic $C_6 R^{-6}$ terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function to make the mixing of the (long-range) dispersion term with the already existing (short-range) dispersion and exchange-repulsion effects from the electronic structure theory methods possible. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic. PMID:23963227
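
    The basic structure of such pairwise corrections can be written down in a few lines; a generic sketch with a Fermi-type damping (cutoff) function, using made-up C6 coefficients and cutoff radii, and reproducing no specific published parameterization:

    import numpy as np

    def damping(r, r0, steepness=20.0):
        """Fermi-type damping: ~0 at short range, ~1 beyond the cutoff radius r0."""
        return 1.0 / (1.0 + np.exp(-steepness * (r / r0 - 1.0)))

    def dispersion_energy(coords, c6, r0):
        """Pairwise -f(R) * C6 / R^6 dispersion sum.

        coords : (N, 3) Cartesian coordinates
        c6     : (N,) per-atom C6 coefficients; pair values via geometric mean
        r0     : (N,) per-atom cutoff radii; pair values via arithmetic mean
        """
        e = 0.0
        n = len(coords)
        for i in range(n):
            for j in range(i + 1, n):
                r = np.linalg.norm(coords[i] - coords[j])
                c6_ij = np.sqrt(c6[i] * c6[j])
                r0_ij = 0.5 * (r0[i] + r0[j])
                e -= damping(r, r0_ij) * c6_ij / r ** 6
        return e

    # toy three-atom example with made-up parameters (arbitrary units)
    coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.1], [0.0, 2.9, 0.0]])
    c6 = np.array([15.0, 15.0, 20.0])
    r0 = np.array([2.8, 2.8, 3.0])
    print(dispersion_energy(coords, c6, r0))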

  6. Error and bias in size estimates of whale sharks: implications for understanding demography.

    PubMed

    Sequeira, Ana M M; Thums, Michele; Brooks, Kim; Meekan, Mark G

    2016-03-01

    Body size and age at maturity are indicative of the vulnerability of a species to extinction. However, they are both difficult to estimate for large animals that cannot be restrained for measurement. For very large species such as whale sharks, body size is commonly estimated visually, potentially resulting in the addition of errors and bias. Here, we investigate the errors and bias associated with total lengths of whale sharks estimated visually by comparing them with measurements collected using a stereo-video camera system at Ningaloo Reef, Western Australia. Using linear mixed-effects models, we found that visual lengths were biased towards underestimation with increasing size of the shark. When using the stereo-video camera, the number of larger individuals that were possibly mature (or close to maturity) that were detected increased by approximately 10%. Mean lengths calculated by each method were, however, comparable (5.002 ± 1.194 and 6.128 ± 1.609 m, s.d.), confirming that the population at Ningaloo is mostly composed of immature sharks based on published lengths at maturity. We then collated data sets of total lengths sampled from aggregations of whale sharks worldwide between 1995 and 2013. Except for locations in the East Pacific where large females have been reported, these aggregations also largely consisted of juveniles (mean lengths less than 7 m). Sightings of the largest individuals were limited and occurred mostly prior to 2006. This result highlights the urgent need to locate and quantify the numbers of mature male and female whale sharks in order to ascertain the conservation status and ensure persistence of the species.

  7. SU-E-T-405: Robustness of Volumetric-Modulated Arc Therapy (VMAT) Plans to Systematic MLC Positional Errors

    SciTech Connect

    Qi, P; Xia, P

    2014-06-01

    Purpose: To evaluate the dosimetric impact of systematic MLC positional errors (PEs) on the quality of volumetric-modulated arc therapy (VMAT) plans. Methods: Five patients with head-and-neck cancer (HN) and five patients with prostate cancer were randomly chosen for this study. The clinically approved VMAT plans were designed with 2–4 coplanar arc beams with non-zero collimator angles in the Pinnacle planning system. Systematic MLC PEs of 0.5, 1.0, and 2.0 mm on both MLC banks were introduced into the original VMAT plans using an in-house program, and recalculated with the same planned Monitor Units in the Pinnacle system. For each patient, the original VMAT plans and the plans with MLC PEs were evaluated according to the dose-volume histogram information and Gamma index analysis. Results: For one primary target, the ratio of V100 in the plans with 0.5, 1.0, and 2.0 mm MLC PEs to those in the clinical plans was 98.8 ± 2.2%, 97.9 ± 2.1%, 90.1 ± 9.0% for HN cases and 99.5 ± 3.2%, 98.9 ± 1.0%, 97.0 ± 2.5% for prostate cases. For all OARs, the relative difference of Dmean in all plans was less than 1.5%. With 2 mm/2% criteria for Gamma analysis, the passing rates were 99.0 ± 1.5% for HN cases and 99.7 ± 0.3% for prostate cases between the planar doses from the original plans and the plans with 1.0 mm MLC errors. The corresponding Gamma passing rates dropped to 88.9 ± 5.3% for HN cases and 83.4 ± 3.2% for prostate cases when comparing planar doses from the original plans and the plans with 2.0 mm MLC errors. Conclusion: For VMAT plans, systematic MLC PEs up to 1.0 mm did not affect the plan quality in terms of target coverage, OAR sparing, and Gamma analysis with 2 mm/2% criteria.
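
    For readers unfamiliar with the Gamma metric used above, a brute-force 2D sketch with global dose normalization and 2 mm/2% criteria; it is written for small grids and ignores the interpolation refinements used by clinical software:

    import numpy as np

    def gamma_pass_rate(ref, evl, spacing_mm, dta_mm=2.0, dd_frac=0.02):
        """Brute-force global 2D gamma analysis.

        ref, evl   : 2D dose arrays on the same grid
        spacing_mm : grid spacing in mm
        Returns the fraction of reference points with gamma <= 1.
        """
        ny, nx = ref.shape
        yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
        dd_abs = dd_frac * ref.max()              # global dose-difference criterion
        gammas = np.empty_like(ref, dtype=float)
        for iy in range(ny):
            for ix in range(nx):
                dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
                dose2 = (evl - ref[iy, ix]) ** 2
                g2 = dist2 / dta_mm ** 2 + dose2 / dd_abs ** 2
                gammas[iy, ix] = np.sqrt(g2.min())
        return np.mean(gammas <= 1.0)

    # toy example: the evaluated dose is the reference shifted by one pixel
    ref = np.exp(-((np.arange(40)[:, None] - 20) ** 2 +
                   (np.arange(40)[None, :] - 20) ** 2) / 60.0)
    evl = np.roll(ref, 1, axis=1)
    print(gamma_pass_rate(ref, evl, spacing_mm=1.0))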

  8. Systematic afterpulsing-estimation algorithms for gated avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Wiechers, Carlos; Ramírez-Alarcón, Roberto; Muñiz-Sánchez, Oscar R.; Yépiz, Pablo Daniel; Arredondo-Santos, Alejandro; Hirsch, Jorge G.; U'Ren, Alfred B.

    2016-09-01

    We present a method designed to efficiently extract optical signals from InGaAs avalanche photodiodes (APDs) operated in gated mode. In particular, our method permits an estimation of the fraction of counts which actually results from the signal being measured, as opposed to being produced by noise mechanisms, specifically by afterpulsing. Our method in principle allows the use of InGaAs APDs at high detection efficiencies, with the full operation bandwidth, either with or without resorting to the application of a dead time. As we show below, our method can be used in configurations where afterpulsing exceeds the genuine signal by orders of magnitude, even near saturation. The algorithms which we have developed are suitable to be used either in real-time processing of raw detection probabilities or in post-processing applications, after a calibration step has been performed. The algorithms which we propose here can complement technologies designed for the reduction of afterpulsing.
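    The abstract does not give the algorithm itself, so the following is only a simplified, hypothetical model of an afterpulse-corrected count estimate: assuming a calibrated total afterpulse probability per avalanche (p_ap) and a dark-count probability per gate (p_dark), the primary avalanche rate is estimated by removing the afterpulse cascade and dark counts. It is not the estimation algorithm of the cited paper.

```python
# Sketch only: simplistic afterpulse/dark-count correction for a gated APD.
def corrected_signal_probability(p_click, p_ap, p_dark):
    """Rough per-gate probability of a genuine photon-induced click.

    Under a simple cascade model, every avalanche (signal, dark, or afterpulse)
    triggers further afterpulses with probability p_ap, so primary avalanches
    are roughly p_click * (1 - p_ap); subtracting dark counts leaves the signal.
    """
    p_primary = p_click * (1.0 - p_ap)   # remove the afterpulse cascade
    return max(p_primary - p_dark, 0.0)  # remove dark counts

# Example: 12% raw click probability, 30% afterpulse probability, 0.5% dark counts
print(corrected_signal_probability(0.12, 0.30, 0.005))  # ~0.079
```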

  9. Systematic afterpulsing-estimation algorithms for gated avalanche photodiodes.

    PubMed

    Wiechers, Carlos; Ramírez-Alarcón, Roberto; Muñiz-Sánchez, Oscar R; Yépiz, Pablo Daniel; Arredondo-Santos, Alejandro; Hirsch, Jorge G; U'Ren, Alfred B

    2016-09-10

    We present a method designed to efficiently extract optical signals from InGaAs avalanche photodiodes (APDs) operated in gated mode. In particular, our method permits an estimation of the fraction of counts that actually results from the signal being measured, as opposed to being produced by noise mechanisms, specifically by afterpulsing. Our method in principle allows the use of InGaAs APDs at high detection efficiencies, with the full operation bandwidth, either with or without resorting to the application of a dead-time. As we show below, our method can be used in configurations where afterpulsing exceeds the genuine signal by orders of magnitude, even near saturation. The algorithms that we have developed are suitable to be used either in real-time processing of raw detection probabilities or in post-processing applications, after a calibration step has been performed. The algorithms that we propose here can complement technologies designed for the reduction of afterpulsing. PMID:27661361

  10. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    PubMed

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.
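    A small, hypothetical illustration of how HEI prediction performance of this kind is typically quantified. The hits and misses come from the figures in the record (36 of 38 observed errors predicted); the false-alarm count is invented purely to show the calculation and is not reported in the abstract.

```python
# Sketch only: signal-detection-style summary of HEI prediction performance.
hits = 36           # errors both predicted and observed (from the record)
misses = 38 - hits  # observed errors that were not predicted
false_alarms = 5    # hypothetical: predicted errors that were never observed

sensitivity = hits / (hits + misses)              # ~0.947
precision = hits / (hits + false_alarms)          # depends on false alarms
summary = 0.5 * (sensitivity + precision)         # simple combined score
print(f"sensitivity={sensitivity:.3f}, precision={precision:.3f}, "
      f"summary={summary:.3f}")
```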

  11. Refractive Error and Risk of Early or Late Age-Related Macular Degeneration: A Systematic Review and Meta-Analysis

    PubMed Central

    Li, Ying; Wang, JiWen; Zhong, XiaoJing; Tian, Zhen; Wu, Peipei; Zhao, Wenbo; Jin, Chenjin

    2014-01-01

    Objective To summarize relevant evidence investigating the associations between refractive error and age-related macular degeneration (AMD). Design Systematic review and meta-analysis. Methods We searched Medline, Web of Science, and Cochrane databases as well as the reference lists of retrieved articles to identify studies that met the inclusion criteria. Extracted data were combined using a random-effects meta-analysis. Studies that were pertinent to our topic but did not meet the criteria for quantitative analysis were reported in a systematic review instead. Main outcome measures Pooled odds ratios (ORs) and 95% confidence intervals (CIs) for the associations between refractive error (hyperopia, myopia, per-diopter increase in spherical equivalent [SE] toward hyperopia, per-millimeter increase in axial length [AL]) and AMD (early and late, prevalent and incident). Results Fourteen studies comprising over 5800 patients were eligible. Significant associations were found between hyperopia, myopia, per-diopter increase in SE, per-millimeter increase in AL, and prevalent early AMD. The pooled ORs and 95% CIs were 1.13 (1.06–1.20), 0.75 (0.56–0.94), 1.10 (1.07–1.14), and 0.79 (0.73–0.85), respectively. The per-diopter increase in SE was also significantly associated with early AMD incidence (OR, 1.06; 95% CI, 1.02–1.10). However, no significant association was found between hyperopia or myopia and early AMD incidence. Furthermore, neither prevalent nor incident late AMD was associated with refractive error. Considerable heterogeneity was found among studies investigating the association between myopia and prevalent early AMD (P = 0.001, I2 = 72.2%). Geographic location might play a role; the heterogeneity became non-significant after stratifying these studies into Asian and non-Asian subgroups. Conclusion Refractive error is associated with early AMD but not with late AMD. More large-scale longitudinal studies are needed to further investigate such
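    A minimal sketch of the DerSimonian-Laird random-effects pooling of odds ratios described above, working on the log scale and reporting the pooled OR, its 95% CI, and the I^2 heterogeneity statistic. The input ORs and confidence intervals below are made-up placeholders, not the study's data.

```python
# Sketch only: DerSimonian-Laird random-effects pooling of odds ratios.
import numpy as np

def pool_random_effects(ors, ci_lows, ci_highs):
    """Pool ORs (with 95% CIs) on the log scale; return pooled OR, 95% CI, I^2."""
    y = np.log(ors)                                        # per-study log ORs
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)  # SE from the CI width
    w = 1.0 / se**2                                        # fixed-effect weights
    q = np.sum(w * (y - np.sum(w * y) / np.sum(w))**2)     # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max((q - df) / c, 0.0)                          # between-study variance
    w_star = 1.0 / (se**2 + tau2)                          # random-effects weights
    mu = np.sum(w_star * y) / np.sum(w_star)
    se_mu = np.sqrt(1.0 / np.sum(w_star))
    i2 = max((q - df) / q, 0.0) if q > 0 else 0.0          # heterogeneity I^2
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu), i2

print(pool_random_effects(np.array([1.10, 1.20, 1.05]),
                          np.array([1.00, 1.05, 0.95]),
                          np.array([1.21, 1.37, 1.16])))
```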

  12. Intermediate-mass-ratio inspirals in the Einstein Telescope. II. Parameter estimation errors

    SciTech Connect

    Huerta, E. A.; Gair, Jonathan R.

    2011-02-15

    We explore the precision with which the Einstein Telescope will be able to measure the parameters of intermediate-mass-ratio inspirals, i.e., the inspirals of stellar mass compact objects into intermediate-mass black holes (IMBHs). We calculate the parameter estimation errors using the Fisher Matrix formalism and present results of Monte Carlo simulations of these errors over choices for the extrinsic parameters of the source. These results are obtained using two different models for the gravitational waveform which were introduced in paper I of this series. These two waveform models include the inspiral, merger, and ringdown phases in a consistent way. One of the models, based on the transition scheme of Ori and Thorne [A. Ori and K. S. Thorne, Phys. Rev. D 62, 124022 (2000)], is valid for IMBHs of arbitrary spin, whereas the second model, based on the effective-one-body approach, has been developed to cross-check our results in the nonspinning limit. In paper I of this series, we demonstrated the excellent agreement in both phase and amplitude between these two models for nonspinning black holes, and that their predictions for signal-to-noise ratios are consistent to within 10%. We now use these waveform models to estimate parameter estimation errors for binary systems with masses 1.4 M⊙ + 100 M⊙, 10 M⊙ + 100 M⊙, 1.4 M⊙ + 500 M⊙, and 10 M⊙ + 500 M⊙ and various choices for the spin of the central IMBH. Assuming a detector network of three Einstein Telescopes, the analysis shows that for a 10 M⊙ compact object inspiralling into a 100 M⊙ IMBH with spin q=0.3, detected with a signal-to-noise ratio of 30, we should be able to determine the compact object and IMBH masses, and the IMBH spin magnitude to fractional accuracies of ~10^-3, ~10^-3.5, and ~10^-3, respectively. We also
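    A minimal sketch of Fisher-matrix error estimation as referenced above: the parameter covariance is the inverse of the Fisher matrix built from waveform derivatives, and the 1-sigma errors are the square roots of its diagonal. The waveform "model" and noise weighting below are toy stand-ins, not the inspiral-merger-ringdown models or detector noise curve of the paper.

```python
# Sketch only: Fisher-matrix 1-sigma errors via finite-difference derivatives.
import numpy as np

def fisher_errors(model, theta, noise_weight, eps=1e-6):
    """1-sigma parameter errors from a Fisher matrix.

    model(theta) -> waveform samples; noise_weight -> per-sample inverse-noise
    weights (a crude stand-in for the noise-weighted inner product).
    """
    theta = np.asarray(theta, dtype=float)
    derivs = []
    for i in range(len(theta)):
        dp = np.zeros_like(theta)
        dp[i] = eps * max(abs(theta[i]), 1.0)
        derivs.append((model(theta + dp) - model(theta - dp)) / (2 * dp[i]))
    derivs = np.array(derivs)
    fisher = derivs @ (derivs * noise_weight).T   # Gamma_ij ~ <dh/dtheta_i, dh/dtheta_j>
    cov = np.linalg.inv(fisher)                   # covariance = inverse Fisher matrix
    return np.sqrt(np.diag(cov))

# Toy usage: a sinusoid with parameters (amplitude, frequency).
model = lambda p: p[0] * np.sin(2 * np.pi * p[1] * np.linspace(0, 1, 2048))
print(fisher_errors(model, [1.0, 50.0], np.ones(2048)))
```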

  13. Evaluation of the ability of a 2D ionisation chamber array and an EPID to detect systematic delivery errors in IMRT plans

    NASA Astrophysics Data System (ADS)

    Bawazeer, Omemh; Gray, Alison; Arumugam, Sankar; Vial, Philip; Thwaites, David; Descallar, Joseph; Holloway, Lois

    2014-03-01

    Two clinical intensity modulated radiotherapy plans were selected. Eleven plan variations were created with systematic errors introduced: Multi-Leaf Collimator (MLC) positional errors with all leaf pairs shifted in the same or the opposite direction, and collimator rotation offsets. Plans were measured using an Electronic Portal Imaging Device (EPID) and an ionisation chamber array. The plans were evaluated using gamma analysis with different criteria. The gamma pass rates remained around 95% or higher for most cases with MLC positional errors of 1 mm and 2 mm using 3%/3 mm criteria. The ability of both devices to detect delivery errors was similar.
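    A minimal, hypothetical sketch of introducing the two types of systematic MLC error described above into an array of leaf positions (one row per control point, bank A columns followed by bank B columns). Shifting both banks the same way translates each field edge; shifting them in opposite directions opens or closes every leaf gap. The layout and sign convention are assumptions for illustration only.

```python
# Sketch only: apply a systematic shift to MLC leaf positions.
import numpy as np

def shift_mlc(leaf_positions_mm, shift_mm, n_leaf_pairs, mode="same"):
    """Return a copy of the leaf positions with a systematic shift applied."""
    shifted = np.array(leaf_positions_mm, dtype=float)
    if mode == "same":          # both banks move in the same direction
        shifted += shift_mm
    elif mode == "opposite":    # banks move apart: every gap widens by 2*shift
        shifted[:, :n_leaf_pairs] -= shift_mm   # bank A
        shifted[:, n_leaf_pairs:] += shift_mm   # bank B
    return shifted

cp = np.zeros((3, 120))  # toy plan: 3 control points, 60 leaf pairs per bank
print(shift_mlc(cp, 1.0, 60, mode="opposite")[0, :3], shift_mlc(cp, 1.0, 60)[0, :3])
```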

  14. The focus-to-detector distance as a source of systematical errors in the measurement of Chaoul therapy units.

    PubMed

    Zaránd, P

    1980-09-01

    The skin exposure rates measured on 22 Chaoul units in two consecutive years were compared and their variance was analysed. The statistical fluctuation of the ionization method was 3.1%, a factor of about 2 to 2.5 smaller than the variations due to the limited reproducibility of the Chaoul units. Systematic errors were observed among exposure rate measurements performed at different focus-to-detector distances. The effective source-to-detector distance differs between ionization chambers: it is the sum of the nominal focus-to-detector distance and a geometrical constant. For a particular chamber, the geometrical constant depends only to a small extent on the front-wall thickness and on the focus-to-detector distance. Sufficient standardization of both the calibration procedure and the construction of ionization chambers may help to avoid this effect.
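    A minimal sketch of how the chamber's geometrical constant could be estimated from exposure rates measured at several nominal focus-to-detector distances, assuming the inverse-square law with an effective distance equal to the nominal distance plus the constant. The measured values below are made-up placeholders, not data from the paper.

```python
# Sketch only: fit rate = k / (d_nominal + c)^2 to recover the geometrical constant c.
import numpy as np
from scipy.optimize import curve_fit

def exposure_rate(d_nominal_cm, k, c_cm):
    """Inverse-square model with effective distance = nominal distance + c."""
    return k / (d_nominal_cm + c_cm) ** 2

d = np.array([10.0, 15.0, 20.0, 30.0])      # nominal distances (cm), hypothetical
rate = np.array([95.0, 44.0, 25.5, 11.6])   # measured exposure rates, hypothetical

(k_fit, c_fit), _ = curve_fit(exposure_rate, d, rate, p0=(1e4, 0.5))
print(f"geometrical constant c ~ {c_fit:.2f} cm")
```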

  15. Stacked Weak Lensing Mass Calibration: Estimators, Systematics, and Impact on Cosmological Parameter Constraints

    SciTech Connect

    Rozo, Eduardo; Wu, Hao-Yi; Schmidt, Fabian; /Caltech