Sampling of systematic errors to estimate likelihood weights in nuclear data uncertainty propagation
NASA Astrophysics Data System (ADS)
Helgesson, P.; Sjöstrand, H.; Koning, A. J.; Rydén, J.; Rochman, D.; Alhassan, E.; Pomp, S.
2016-01-01
In methodologies for nuclear data (ND) uncertainty assessment and propagation based on random sampling, likelihood weights can be used to propagate experimental information into the distributions for the ND. As the number of included correlated experimental points grows large, the computational time for the matrix inversion involved in obtaining the likelihood can become a practical problem. There are also other problems related to the conventional computation of the likelihood, e.g., the assumption that all experimental uncertainties are Gaussian. In this study, a way to estimate the likelihood which avoids matrix inversion is investigated; instead, the experimental correlations are included by sampling of systematic errors. It is shown that the model underlying the sampling methodology (using univariate normal distributions for random and systematic errors) implies a multivariate Gaussian for the experimental points (i.e., the conventional model). It is also shown that the likelihood estimates obtained through sampling of systematic errors approach the likelihood obtained with matrix inversion as the sample size for the systematic errors grows large. In the practical cases studied, the estimates of the likelihood weights converge impractically slowly with the sample size compared to matrix inversion, and the computational time is estimated to exceed that of matrix inversion in cases with more experimental points as well. Hence, the sampling of systematic errors has little potential to compete with matrix inversion in cases where the latter is applicable. Nevertheless, the underlying model and the likelihood estimates can be easier to interpret intuitively than the conventional model and the likelihood function involving the inverted covariance matrix. Therefore, this work has pedagogical value and can be used to help motivate the conventional assumption of a multivariate Gaussian for experimental data.
The sampling of systematic errors could also be used in cases where the experimental uncertainties are not Gaussian, and for other purposes than to compute the likelihood, e.g., to produce random experimental data sets for a more direct use in ND evaluation.
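The comparison at the heart of this abstract can be sketched numerically. The snippet below is an illustrative reconstruction, not the authors' code: it builds the conventional multivariate-Gaussian likelihood from the covariance C = diag(sigma^2) + tau^2 * 11^T (matrix inversion inside the pdf), and then estimates the same likelihood by Monte Carlo sampling of the common systematic shift, under which the points become independent. All numerical values are made up.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(0)

# Hypothetical experiment: 5 data points, model prediction t,
# independent ("random") uncertainties sigma, and one fully
# correlated ("systematic") uncertainty tau (illustrative values).
t = np.array([1.0, 1.2, 0.9, 1.1, 1.0])       # model prediction
y = np.array([1.05, 1.25, 1.00, 1.15, 1.08])  # measured values
sigma = np.full(5, 0.05)                      # random errors
tau = 0.08                                    # common systematic error

# Conventional likelihood: multivariate Gaussian with covariance
# C = diag(sigma^2) + tau^2 * ones (inversion happens inside the pdf).
C = np.diag(sigma**2) + tau**2 * np.ones((5, 5))
L_exact = multivariate_normal.pdf(y, mean=t, cov=C)

# Sampling of systematic errors: for each sampled common shift eta_k
# the points are independent, so the likelihood factorizes.
K = 200_000
eta = rng.normal(0.0, tau, size=K)
per_sample = norm.pdf(y, loc=t + eta[:, None], scale=sigma).prod(axis=1)
L_sampled = per_sample.mean()

print(L_exact, L_sampled)  # the two estimates agree as K grows
```

As the abstract notes, the sampled estimate converges to the matrix-inversion result only slowly in the sample size K, which is why the method struggles to compete in practice.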
Glosup, J.G.; Axelrod, M.C.
1996-08-05
The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.
Statistical uncertainties and systematic errors in weak lensing mass estimates of galaxy clusters
NASA Astrophysics Data System (ADS)
Köhlinger, F.; Hoekstra, H.; Eriksen, M.
2015-11-01
Upcoming and ongoing large area weak lensing surveys will also discover large samples of galaxy clusters. Accurate and precise masses of galaxy clusters are of major importance for cosmology, for example, in establishing well-calibrated observational halo mass functions for comparison with cosmological predictions. We investigate the level of statistical uncertainties and sources of systematic errors expected for weak lensing mass estimates. Future surveys that will cover large areas on the sky, such as Euclid or LSST and, to a lesser extent, DES, will provide the largest weak lensing cluster samples with the lowest level of statistical noise regarding ensembles of galaxy clusters. However, the expected low level of statistical uncertainties requires us to scrutinize various sources of systematic errors. In particular, we investigate the bias due to cluster member galaxies which are erroneously treated as background source galaxies due to wrongly assigned photometric redshifts. We find that this effect is significant when referring to stacks of galaxy clusters. Finally, we study the bias due to miscentring, i.e. the displacement between any observationally defined cluster centre and the true minimum of its gravitational potential. The impact of this bias might be significant with respect to the statistical uncertainties. However, complementary future missions such as eROSITA will allow us to define stringent priors on miscentring parameters which will mitigate this bias significantly.
Yang, Jun; Liang, Bin; Zhang, Tao; Song, Jingyan
2011-01-01
The star centroid estimation is the most important operation, which directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and the truncation error, resulting from the discretization approximation and the sampling window limitation, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on the least squares support vector regression (LSSVR) with Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10⁻⁵ pixels. PMID:22164021
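The truncation effect described in this abstract can be reproduced with a minimal sketch, assuming a 1-D pixel-integrated Gaussian star image and a plain center-of-mass estimator over a finite sampling window. The window size and Gaussian width below are illustrative, not taken from the paper.

```python
import numpy as np
from math import erf, sqrt

def centroid_error(true_center, window=5, sigma=0.7):
    """Systematic error of a center-of-mass centroid for a 1-D Gaussian
    star image sampled on integer pixels (illustrative parameters)."""
    half = window // 2
    pixels = np.arange(-half, half + 1)          # e.g. a 5-pixel window
    cdf = lambda x: 0.5 * (1.0 + erf((x - true_center) / (sigma * sqrt(2.0))))
    # Pixel-integrated intensities: discretization of the continuous PSF
    intensity = np.array([cdf(p + 0.5) - cdf(p - 0.5) for p in pixels])
    estimate = (pixels * intensity).sum() / intensity.sum()
    return estimate - true_center

# The error vanishes for a star centred in the window and grows with
# subpixel displacement, as the finite window truncates one Gaussian tail.
for x in (0.0, 0.1, 0.25, 0.4):
    print(x, centroid_error(x))
```

This reproduces only the truncation-error mechanism; the paper's LSSVR compensation step is a separate regression layer trained on such error curves.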
NASA Astrophysics Data System (ADS)
Littenberg, Tyson B.; Farr, Ben; Coughlin, Scott; Kalogera, Vicky
2016-03-01
Among the most eagerly anticipated opportunities made possible by Advanced LIGO/Virgo are multimessenger observations of compact mergers. Optical counterparts may be short-lived so rapid characterization of gravitational wave (GW) events is paramount for discovering electromagnetic signatures. One way to meet the demand for rapid GW parameter estimation is to trade off accuracy for speed, using waveform models with simplified treatment of the compact objects' spin. We report on the systematic errors in GW parameter estimation suffered when using different spin approximations to recover generic signals. Component mass measurements can be biased by >5σ using simple-precession waveforms and in excess of 20σ when non-spinning templates are employed. This suggests that electromagnetic observing campaigns should not take a strict approach to selecting which LIGO/Virgo candidates warrant follow-up observations based on low-latency mass estimates. For sky localization, we find that searched areas are up to a factor of ∼2 larger for non-spinning analyses, and are systematically larger for any of the simplified waveforms considered in our analysis. Distance biases for the non-precessing waveforms can be in excess of 100% and are largest when the spin angular momenta are in the orbital plane of the binary. We confirm that spin-aligned waveforms should be used for low-latency parameter estimation at the minimum. Including simple precession, though more computationally costly, mitigates biases except for signals with extreme precession effects. Our results shine a spotlight on the critical need for development of computationally inexpensive precessing waveforms and/or massively parallel algorithms for parameter estimation.
GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology
Mandelbaum, Rachel; Rowe, Barnaby; Armstrong, Robert; Bard, Deborah; Bertin, Emmanuel; Bosch, James; Boutigny, Dominique; Courbin, Frederic; Dawson, William A.; Donnarumma, Annamaria; Fenech Conti, Ian; Gavazzi, Raphael; Gentile, Marc; Gill, Mandeep S. S.; Hogg, David W.; Huff, Eric M.; Jee, M. James; Kacprzak, Tomasz; Kilbinger, Martin; Kuntzer, Thibault; Lang, Dustin; Luo, Wentao; March, Marisa C.; Marshall, Philip J.; Meyers, Joshua E.; Miller, Lance; Miyatake, Hironao; Nakajima, Reiko; Ngole Mboula, Fred Maurice; Nurbaeva, Guldariya; Okura, Yuki; Paulin-Henriksson, Stephane; Rhodes, Jason; Schneider, Michael D.; Shan, Huanyuan; Sheldon, Erin S.; Simet, Melanie; Starck, Jean -Luc; Sureau, Florent; Tewes, Malte; Zarb Adami, Kristian; Zhang, Jun; Zuntz, Joe
2015-05-11
We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
Estimation of Systematic Errors for Deuteron Electric Dipole Moment Search at COSY
NASA Astrophysics Data System (ADS)
Chekmenev, Stanislav
2016-02-01
An experimental method aimed at finding a permanent EDM of a charged particle was proposed by the JEDI (Jülich Electric Dipole moment Investigations) collaboration. EDMs can be observed by their influence on spin motion. The only possible way to perform a direct measurement is to use a storage ring. For this purpose, it was decided to carry out a first precursor experiment at the Cooler Synchrotron (COSY). Since the EDM of a particle violates CP invariance, it is expected to be tiny, so all the various sources of systematic error must be treated with great precision. One should clearly understand how misalignments of the magnets affect the beam and the spin motion. It is planned to use an RF Wien filter for the precursor experiment. In this paper the simulations of the systematic effects for the RF Wien filter method will be discussed.
Estimation of systematic errors in UHE CR energy reconstruction for ANITA-3 experiment
NASA Astrophysics Data System (ADS)
Bugaev, Viatcheslav; Rauch, Brian; Binns, Robert; Israel, Martin; Belov, Konstantin; Wissel, Stephanie; Romero-Wolf, Andres
2013-04-01
The third mission of the balloon-borne ANtarctic Impulsive Transient Antenna (ANITA-3) scheduled for December 2013 will be optimized for the measurement of impulsive radio signals from Ultra-High Energy Cosmic Rays (UHE CR), i.e. charged particles with energies above 10^19 eV, in addition to the neutrinos ANITA was originally designed for. The event reconstruction algorithm for UHE CR relies on the detection of radio emissions in the frequency range 200-1200 MHz (RF) produced by the charged component of Extensive Air Showers initiated by these particles. The UHE CR energy reconstruction method for ANITA is subject to systematic uncertainties introduced by models used in Monte Carlo simulations of RF. The presented study is aimed at evaluating these systematic uncertainties by comparing outputs of two RF simulation codes, CoREAS and ZHAireS, for different event statistics and propagating the differences in the outputs through the energy reconstruction method.
Decomposing model systematic error
NASA Astrophysics Data System (ADS)
Keenlyside, Noel; Shen, Mao-Lin
2014-05-01
Seasonal forecasts made with a single model are generally overconfident. The standard approach to improve forecast reliability is to account for structural uncertainties through a multi-model ensemble (i.e., an ensemble of opportunity). Here we analyse a multi-model set of seasonal forecasts available through the ENSEMBLES and DEMETER EU projects. We partition forecast uncertainties into initial value and structural uncertainties, as a function of lead-time and region. Statistical analysis is used to investigate sources of initial condition uncertainty, and which regions and variables lead to the largest forecast error. Similar analysis is then performed to identify common elements of model error. Results of this analysis will be used to discuss possibilities to reduce forecast uncertainty and improve models. In particular, better understanding of error growth will be useful for the design of interactive multi-model ensembles.
Evaluation of Data with Systematic Errors
Froehner, F. H.
2003-11-15
Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward.
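The two central claims above, that common and uncorrelated errors add in quadrature and that repetition reduces only the statistical part, can be checked with a small simulation. The uncertainty values below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma_stat, sigma_sys = 0.10, 0.03   # illustrative uncertainties
true_value = 1.0

def simulate_mean(n_repeats, n_trials=20000):
    """Average n repeated measurements that share one common error."""
    common = rng.normal(0.0, sigma_sys, size=(n_trials, 1))        # same for all repeats
    random = rng.normal(0.0, sigma_stat, size=(n_trials, n_repeats))
    return (true_value + common + random).mean(axis=1)

# The statistical part shrinks as 1/sqrt(n); the systematic part does not.
# The observed spread matches the quadrature sum of the two components.
for n in (1, 10, 100):
    spread = simulate_mean(n).std()
    expected = np.hypot(sigma_stat / np.sqrt(n), sigma_sys)
    print(n, round(spread, 4), round(expected, 4))
```

For large n the spread approaches the systematic floor sigma_sys, which is exactly the "repetition does not reduce systematic uncertainty" point made in the abstract.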
Alonso-Carné, Jorge; García-Martín, Alberto; Estrada-Peña, Agustin
2013-11-01
The modelling of habitat suitability for parasites is a growing area of research due to its association with climate change and ensuing shifts in the distribution of infectious diseases. Such models depend on remote sensing data and require accurate, high-resolution temperature measurements. The temperature is critical for accurate estimation of development rates and potential habitat ranges for a given parasite. The MODIS sensors aboard the Aqua and Terra satellites provide high-resolution temperature data for remote sensing applications. This paper describes a comparative analysis of MODIS-derived temperatures relative to ground records of surface temperature in the western Palaearctic. The results show that MODIS overestimated maximum temperature values and underestimated minimum temperatures by up to 5-6 °C. The combined use of both Aqua and Terra datasets provided the most accurate temperature estimates around latitude 35-44° N, with an overestimation during spring-summer months and an underestimation in autumn-winter. Errors in temperature estimation were associated with specific ecological regions within the target area as well as technical limitations in the temporal and orbital coverage of the satellites (e.g. sensor limitations and satellite transit times). We estimated the propagation of temperature uncertainties into parasite habitat suitability models by comparing outcomes of published models. Error estimates reached 36% of the respective annual measurements, depending on the model used. Our analysis demonstrates the importance of adequate image processing and points out the limitations of MODIS temperature data as inputs into predictive models concerning parasite lifecycles. PMID:24258878
Klenzing, J. H.; Earle, G. D.; Heelis, R. A.; Coley, W. R.
2009-05-15
The use of biased grids as energy filters for charged particles is common in satellite-borne instruments such as a planar retarding potential analyzer (RPA). Planar RPAs are currently flown on missions such as the Communications/Navigation Outage Forecast System and the Defense Meteorological Satellites Program to obtain estimates of geophysical parameters including ion velocity and temperature. It has been shown previously that the use of biased grids in such instruments creates a nonuniform potential in the grid plane, which leads to inherent errors in the inferred parameters. A simulation of ion interactions with various configurations of biased grids has been developed using a commercial finite-element analysis software package. Using a statistical approach, the simulation calculates collected flux from Maxwellian ion distributions with three-dimensional drift relative to the instrument. Perturbations in the performance of flight instrumentation relative to expectations from the idealized RPA flux equation are discussed. Both single grid and dual-grid systems are modeled to investigate design considerations. Relative errors in the inferred parameters for each geometry are characterized as functions of ion temperature and drift velocity.
Simulation of Systematic Errors in Phase-Referenced VLBI Astrometry
NASA Astrophysics Data System (ADS)
Pradel, N.; Charlot, P.; Lestrade, J.-F.
2005-12-01
The astrometric accuracy in the relative coordinates of two angularly-close radio sources observed with the phase-referencing VLBI technique is limited by systematic errors. These include geometric errors and atmospheric errors. Based on simulation with the SPRINT software, we evaluate the impact of these errors in the estimated relative source coordinates for standard VLBA observations. Such evaluations are useful to estimate the actual accuracy of phase-referenced VLBI astrometry.
Suppressing systematic control errors to high orders
NASA Astrophysics Data System (ADS)
Bažant, P.; Frydrych, H.; Alber, G.; Jex, I.
2015-08-01
Dynamical decoupling is a powerful method for protecting quantum information against unwanted interactions with the help of open-loop control pulses. Realistic control pulses are not ideal and may introduce additional systematic errors. We introduce a class of self-stabilizing pulse sequences capable of suppressing such systematic control errors efficiently in qubit systems. Embedding already known decoupling sequences into these self-stabilizing sequences offers powerful means to achieve robustness against unwanted external perturbations and systematic control errors. As these self-stabilizing sequences are based on single-qubit operations, they offer interesting perspectives for future applications in quantum information processing.
Error correction in adders using systematic subcodes.
NASA Technical Reports Server (NTRS)
Rao, T. R. N.
1972-01-01
A generalized theory is presented for the construction of a systematic subcode for a given AN code in such a way that the error control properties of the AN code are preserved in this new code. The 'systematic weight' and 'systematic distance' functions in this new code depend not only on its number representation system but also on its addition structure. Finally, to illustrate this theory, a simple error-correcting adder organization using a systematic subcode of a 29N code is sketched in some detail.
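The residue-check idea behind plain AN codes, on which the subcode construction builds, can be illustrated with a toy code. This sketch uses A = 3 for readability rather than the paper's A = 29, and shows only the basic detection mechanism, not the systematic subcode itself.

```python
# Toy AN-code check in an adder: codewords are multiples of A, and a
# single bit flip changes the sum by +/- 2^i, which is never a multiple
# of 3, so the residue test detects it. (A = 3 here for readability;
# the construction discussed in the paper uses A = 29.)
A = 3

def encode(x):
    return A * x

def check(word):
    """A valid codeword is divisible by A; a nonzero residue flags an error."""
    return word % A == 0

a, b = encode(5), encode(7)
s = a + b                  # 3*5 + 3*7 = 3*12 = 36, a valid codeword
assert check(s)

faulty = s ^ (1 << 2)      # flip one bit of the adder output
assert not check(faulty)   # detected: 36 ^ 4 = 32, and 32 % 3 != 0
```

Note that the check survives addition because the sum of two multiples of A is again a multiple of A; this is the sense in which error control is preserved through the adder.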
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
Numerical Error Estimation with UQ
NASA Astrophysics Data System (ADS)
Ackmann, Jan; Korn, Peter; Marotzke, Jochem
2014-05-01
Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method these local model errors are not considered deterministically but interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence only of limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information is dependent on the viscosity parameter, making our uncertainty measures viscosity-dependent. 
We will show that a sensible parameter can be chosen by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process, which is especially important in the scope of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References [1] F. RAUSER: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010 [2] F. RAUSER, J. MAROTZKE, P. KORN: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted
Treatment of systematic errors in land data assimilation systems
Technology Transfer Automated Retrieval System (TEKTRAN)
Data assimilation systems are generally designed to minimize the influence of random error on the estimation of system states. Yet, experience with land data assimilation systems has also revealed the presence of large systematic differences between model-derived and remotely-sensed estimates of lan...
Antenna pointing systematic error model derivations
NASA Technical Reports Server (NTRS)
Guiar, C. N.; Lansing, F. L.; Riggs, R.
1987-01-01
The pointing model used to represent and correct systematic errors for the Deep Space Network (DSN) antennas is presented. Analytical expressions are given in both azimuth-elevation (az-el) and hour angle-declination (ha-dec) mounts for RF axis collimation error, encoder offset, nonorthogonality of axes, axis plane tilt, and structural flexure due to gravity loading. While the residual pointing errors (rms) after correction appear to be within the ten percent of the half-power beamwidth criterion commonly set for good pointing accuracy, the DSN has embarked on an extensive pointing improvement and modeling program aiming toward an order of magnitude higher pointing precision.
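The analytical terms listed above can be sketched as a small function. The term forms below are the standard textbook ones for az-el mounts (collimation scaling as 1/cos(el), non-orthogonality as tan(el), azimuth-axis tilt, gravity flexure), and both the term selection and the coefficients are made up for illustration; this is not the DSN model from the paper.

```python
import numpy as np

def pointing_correction(az_deg, el_deg, p):
    """Sketch of a classic az-el pointing error model: each term maps
    one mechanical imperfection to an az/el offset. Coefficients are in
    degrees and purely illustrative (hypothetical parameter names)."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    d_az = (p["encoder_az"]                         # azimuth encoder offset
            + p["collimation"] / np.cos(el)         # RF axis collimation error
            + p["nonorthogonality"] * np.tan(el)    # non-orthogonality of axes
            + p["tilt"] * np.sin(az) * np.tan(el))  # azimuth-axis tilt
    d_el = (p["encoder_el"]                         # elevation encoder offset
            + p["tilt"] * np.cos(az)                # tilt term in elevation
            + p["flexure"] * np.cos(el))            # gravity flexure
    return d_az, d_el

params = {"encoder_az": 0.01, "encoder_el": -0.005, "collimation": 0.002,
          "nonorthogonality": 0.001, "tilt": 0.003, "flexure": 0.004}
print(pointing_correction(120.0, 45.0, params))
```

In practice the coefficients are fitted by least squares to a set of observed pointing offsets across the sky, which is how the residual rms quoted in the abstract is obtained.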
Systematic errors in long baseline oscillation experiments
Harris, Deborah A.; /Fermilab
2006-02-01
This article gives a brief overview of long baseline neutrino experiments and their goals, and then describes the different kinds of systematic errors that are encountered in these experiments. Particular attention is paid to the uncertainties that come about because of imperfect knowledge of neutrino cross sections and more generally how neutrinos interact in nuclei. Near detectors are planned for most of these experiments, and the extent to which certain uncertainties can be reduced by the presence of near detectors is also discussed.
Systematic errors in strong lens modeling
NASA Astrophysics Data System (ADS)
Johnson, Traci Lin; Sharon, Keren; Bayliss, Matthew B.
2015-08-01
The lensing community has made great strides in quantifying the statistical errors associated with strong lens modeling. However, we are just now beginning to understand the systematic errors. Quantifying these errors is pertinent to Frontier Fields science, as number counts and luminosity functions are highly sensitive to the value of the magnifications of background sources across the entire field of view. We are aware that models can be very different when modelers change their assumptions about the parameterization of the lensing potential (i.e., parametric vs. non-parametric models). However, even models built with a single methodology can lead to inconsistent outcomes for different quantities, distributions, and qualities of redshift information regarding the multiple images used as constraints in the lens model. We investigate how varying the number of multiple-image constraints and the available redshift information for those constraints (e.g., spectroscopic vs. photometric vs. no redshift) can influence the outputs of our parametric strong lens models, specifically the mass distribution and magnifications of background sources. We make use of the simulated clusters by M. Meneghetti et al. and the first two Frontier Fields clusters, which have a high number of multiply imaged galaxies with spectroscopically measured redshifts (or input redshifts, in the case of simulated clusters). This work will inform not only Frontier Fields science but also work on the growing collection of strong-lensing galaxy clusters, most of which are less massive, are capable of lensing only a handful of galaxies, and are more prone to these systematic errors.
Systematic reviews, systematic error and the acquisition of clinical knowledge
2010-01-01
Background Since its inception, evidence-based medicine, and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim unique access to absolute scientific truth and thus devalues and replaces other types of knowledge sources. Discussion The various types of clinical knowledge sources are categorised, on the basis of Kant's categories of knowledge acquisition, as being either 'analytic' or 'synthetic'. It is shown that these categories do not act in opposition but rather depend upon each other. The unity of analysis and synthesis in knowledge acquisition is demonstrated during the process of systematically reviewing clinical trials. Systematic reviews constitute a comprehensive synthesis of clinical knowledge but depend upon plausible, analytical hypothesis development for the trials reviewed. The dangers of systematic error regarding the internal validity of acquired knowledge are highlighted on the basis of empirical evidence. It has been shown that the systematic review process reduces systematic error, thus ensuring high internal validity. It is argued that this process does not exclude other types of knowledge sources; instead, it functions as an integrated element among them during the acquisition of clinical knowledge. Conclusions The acquisition of clinical knowledge is based on interaction between analysis and synthesis. Systematic reviews provide the highest form of synthetic knowledge acquisition in terms of achieving internal validity of results. In that capacity they inform the analytic knowledge of the clinician but do not replace it. PMID:20537172
Reducing systematic error in weak lensing cluster surveys
Utsumi, Yousuke; Miyazaki, Satoshi; Hamana, Takashi; Geller, Margaret J.; Kurtz, Michael J.; Fabricant, Daniel G.; Dell'Antonio, Ian P.; Oguri, Masamune
2014-05-10
Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ~3 deg^2. Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg^2, where we expect ~2000 peaks based on our Subaru fields.
Reducing Systematic Error in Weak Lensing Cluster Surveys
NASA Astrophysics Data System (ADS)
Utsumi, Yousuke; Miyazaki, Satoshi; Geller, Margaret J.; Dell'Antonio, Ian P.; Oguri, Masamune; Kurtz, Michael J.; Hamana, Takashi; Fabricant, Daniel G.
2014-05-01
Weak lensing provides an important route toward collecting samples of clusters of galaxies selected by mass. Subtle systematic errors in image reduction can compromise the power of this technique. We use the B-mode signal to quantify this systematic error and to test methods for reducing this error. We show that two procedures are efficient in suppressing systematic error in the B-mode: (1) refinement of the mosaic CCD warping procedure to conform to absolute celestial coordinates and (2) truncation of the smoothing procedure on a scale of 10'. Application of these procedures reduces the systematic error to 20% of its original amplitude. We provide an analytic expression for the distribution of the highest peaks in noise maps that can be used to estimate the fraction of false peaks in the weak-lensing κ signal-to-noise ratio (S/N) maps as a function of the detection threshold. Based on this analysis, we select a threshold S/N = 4.56 for identifying an uncontaminated set of weak-lensing peaks in two test fields covering a total area of ~3 deg2. Taken together these fields contain seven peaks above the threshold. Among these, six are probable systems of galaxies and one is a superposition. We confirm the reliability of these peaks with dense redshift surveys, X-ray, and imaging observations. The systematic error reduction procedures we apply are general and can be applied to future large-area weak-lensing surveys. Our high-peak analysis suggests that with an S/N threshold of 4.5, there should be only 2.7 spurious weak-lensing peaks even in an area of 1000 deg2, where we expect ~2000 peaks based on our Subaru fields. Based in part on data collected at Subaru Telescope and obtained from the SMOKA, which is operated by the Astronomy Data Center, National Astronomical Observatory of Japan.
Control by model error estimation
NASA Technical Reports Server (NTRS)
Likins, P. W.; Skelton, R. E.
1976-01-01
Modern control theory relies upon the fidelity of the mathematical model of the system. Truncated modes, external disturbances, and parameter errors in linear system models are corrected by augmenting the original system of equations with an 'error system' designed to approximate the effects of such model errors. A Chebyshev error system is developed for application to the Large Space Telescope (LST).
Significance in gamma ray astronomy with systematic errors
NASA Astrophysics Data System (ADS)
Spengler, Gerrit
2015-07-01
The influence of systematic errors on the calculation of the statistical significance of a γ-ray signal with the frequently invoked Li and Ma method is investigated. A simple criterion is derived to decide whether the Li and Ma method can be applied in the presence of systematic errors. An alternative method is discussed for cases where systematic errors are too large for the application of the original Li and Ma method. This alternative method reduces to the Li and Ma method when systematic errors are negligible. Finally, it is shown that the consideration of systematic errors will be important in many analyses of data from the planned Cherenkov Telescope Array.
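The Li and Ma significance invoked above is a standard closed-form statistic. A direct implementation of Eq. (17) of Li & Ma (1983) — without the systematic-error extension the abstract proposes — looks like this:

```python
import math

def li_ma_significance(n_on, n_off, alpha):
    """Statistical significance of an excess, Eq. (17) of Li & Ma (1983).

    n_on  -- counts in the on-source region
    n_off -- counts in the off-source (background) region
    alpha -- ratio of on-source to off-source exposure times
    (n_on and n_off must both be > 0 for the logarithms to be defined)
    """
    n_tot = n_on + n_off
    term_on = n_on * math.log((1.0 + alpha) / alpha * n_on / n_tot)
    term_off = n_off * math.log((1.0 + alpha) * n_off / n_tot)
    # Conventionally, a deficit (n_on < alpha * n_off) gets a negative sign.
    sign = 1.0 if n_on >= alpha * n_off else -1.0
    return sign * math.sqrt(2.0 * (term_on + term_off))
```

For equal exposures (alpha = 1) and equal counts the significance is zero; an on-source excess yields a positive value.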
Medication Errors in the Southeast Asian Countries: A Systematic Review
Salmasi, Shahrzad; Khan, Tahir Mehmood; Hong, Yet Hoi; Ming, Long Chiau; Wong, Tin Wui
2015-01-01
Background Medication error (ME) is a worldwide issue, but most studies on ME have been undertaken in developed countries and very little is known about ME in Southeast Asian countries. This study aimed to systematically identify and review research done on ME in Southeast Asian countries in order to identify common types of ME and estimate its prevalence in this region. Methods The literature relating to MEs in Southeast Asian countries was systematically reviewed in December 2014 using Embase, Medline, PubMed, ProQuest Central and CINAHL. Inclusion criteria were studies (in any language) that investigated the incidence and the contributing factors of ME in patients of all ages. Results The 17 included studies reported data from six of the eleven Southeast Asian countries: five studies in Singapore, four in Malaysia, three in Thailand, three in Vietnam, one in the Philippines and one in Indonesia. There were no data on MEs in Brunei, Laos, Cambodia, Myanmar and Timor. Of the seventeen included studies, eleven measured administration errors, four focused on prescribing errors, three on preparation errors, three on dispensing errors and two on transcribing errors. There was only one study of reconciliation error. Three studies were interventional. Discussion The most frequently reported types of administration error were incorrect time, omission error and incorrect dose. Staff shortages, and hence heavy workload for nurses, doctor/nurse distraction, and misinterpretation of the prescription/medication chart were identified as contributing factors of ME. There is a serious lack of studies on this topic in this region, which needs to be remedied if the issue of ME is to be fully understood and addressed. PMID:26340679
More on Systematic Error in a Boyle's Law Experiment
NASA Astrophysics Data System (ADS)
McCall, Richard P.
2012-01-01
A recent article in The Physics Teacher describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.
More on Systematic Error in a Boyle's Law Experiment
ERIC Educational Resources Information Center
McCall, Richard P.
2012-01-01
A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.
Adjoint Error Estimation for Linear Advection
Connors, J M; Banks, J W; Hittinger, J A; Woodward, C S
2011-03-30
An a posteriori error formula is described for the case in which a statistical measurement of the solution to a one-dimensional hyperbolic conservation law is estimated by finite volume approximations. This is accomplished using adjoint error estimation. In contrast to previously studied methods, the adjoint problem is divorced from the finite volume method used to approximate the forward solution variables. An exact error formula and a computable error estimate are derived based on an abstractly defined approximation of the adjoint solution. This framework allows the error to be computed to arbitrary accuracy given a sufficiently well-resolved approximation of the adjoint solution. The accuracy of the computable error estimate provably satisfies an a priori error bound for sufficiently smooth solutions of the forward and adjoint problems. The theory does not currently account for discontinuities. Computational examples are provided that support the theory for smooth solutions. The application to problems with discontinuities is also investigated computationally.
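The dual-weighted-residual idea behind adjoint error estimation can be seen in its simplest discrete form. The sketch below is a toy linear-system analogue, not the paper's finite-volume discretization: for A x = b and an output functional J(x) = c·x, the output error of any approximate solution is exactly the adjoint solution dotted with the residual.

```python
import numpy as np

# Adjoint (dual-weighted-residual) error estimation for a linear output:
# solve A.T @ lam = c (the adjoint problem), then lam @ r recovers the
# output error c @ (x_exact - x_approx) exactly for linear functionals.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 5.0 * np.eye(5)   # well-conditioned system
b = rng.standard_normal(5)
c = rng.standard_normal(5)                          # functional weights

x_exact = np.linalg.solve(A, b)
x_approx = x_exact + 1e-3 * rng.standard_normal(5)  # perturbed "numerical" solution

lam = np.linalg.solve(A.T, c)     # adjoint solution
r = b - A @ x_approx              # residual of the approximate solution
error_estimate = lam @ r          # adjoint-weighted residual
true_error = c @ x_exact - c @ x_approx
```

In the paper's setting the adjoint solution is itself only approximated, which is why the estimate is "computable to arbitrary accuracy" rather than exact.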
Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.
Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith
2013-09-01
Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory. PMID:23967914
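A toy version of the first algorithm's joint-estimation idea, under simplifying assumptions (a uniform slowness field and a single suspect ray path; all numbers are illustrative, not from the experiment): augmenting the least-squares design matrix with an indicator column for the corrupted path recovers both the field parameter and the systematic offset.

```python
import numpy as np

# Travel time along path i is t_i = L_i * s for an assumed-uniform
# slowness s; path 2 carries an unknown constant timing offset.
L = np.array([100.0, 150.0, 200.0, 250.0])    # path lengths (m), assumed
s_true, offset_true = 1.0 / 340.0, 0.005      # slowness (s/m), bias (s)
t = L * s_true
t[2] += offset_true                           # the corrupted ray path

# Design matrix: one column for slowness, one indicator column for the
# suspect path's systematic offset; solve jointly by least squares.
A = np.column_stack([L, (np.arange(len(L)) == 2).astype(float)])
s_est, offset_est = np.linalg.lstsq(A, t, rcond=None)[0]
```

With noise-free synthetic data the fit recovers both unknowns exactly; in the real tomography problem the field has many parameters and the identification of corrupted paths is itself part of the algorithm.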
Wind Power Error Estimation in Resource Assessments
Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
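The core propagation step can be sketched with first-order error propagation through a power curve: δP ≈ P'(v)·δv. The curve below is a generic illustrative one (cubic below rated speed, flat above), not one of the paper's 28 Lagrange-fitted manufacturer curves.

```python
import numpy as np

# Generic turbine power curve: cubic up to rated speed, then flat (kW).
def power_curve(v, v_rated=12.0, p_rated=2000.0):
    v = np.asarray(v, dtype=float)
    return np.where(v < v_rated, p_rated * (v / v_rated) ** 3, p_rated)

def relative_power_error(v, rel_speed_err, h=1e-6):
    """Relative power error induced by a relative wind-speed error,
    via a central-difference estimate of dP/dv."""
    dv = v * rel_speed_err
    dPdv = (power_curve(v + h) - power_curve(v - h)) / (2.0 * h)
    return float(dPdv * dv / power_curve(v))
```

In the cubic region dP/P = 3 dv/v, so a 10% speed error becomes a ~30% power error, while above rated speed it vanishes; an average over a measured wind distribution (like the paper's ~5% result for a 10% speed error) mixes both regimes.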
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444
Identifying and Reducing Systematic Errors in Chromosome Conformation Capture Data
Hahn, Seungsoo; Kim, Dongsup
2015-01-01
Chromosome conformation capture (3C)-based techniques have recently been used to uncover the enigmatic genomic architecture in the nucleus. These techniques yield indirect data on the distances between genomic loci in the form of contact frequencies that must be normalized to remove various errors. This normalization process determines the quality of data analysis. In this study, we describe two systematic errors that result from the heterogeneous local density of restriction sites and from differing local chromatin states, methods to identify and remove those artifacts, and three previously described sources of systematic errors in 3C-based data: fragment length, mappability, and local DNA composition. To explain the effect of systematic errors on the results, we used three different published data sets to show the dependence of the results on restriction enzymes and experimental methods. Comparing results from different restriction enzymes shows a higher correlation after removing systematic errors. In contrast, comparing results from different methods with the same restriction enzymes shows a lower correlation after removing systematic errors. Notably, the improved correlation in the latter case, caused by systematic errors, indicates that a higher correlation between results does not ensure the validity of the normalization methods. Finally, we suggest a method to analyze random error and provide guidance for maximizing the reproducibility of contact frequency maps. PMID:26717152
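One standard way to remove multiplicative systematic biases from contact maps is matrix balancing, sketched below in the spirit of ICE-style iterative correction (a generic sketch of the family of methods the abstract discusses, not necessarily the authors' own normalization): iteratively rescale the symmetric contact matrix until every locus has equal total coverage, absorbing the biases into per-locus factors.

```python
import numpy as np

def iterative_correction(contacts, n_iter=50):
    """Sinkhorn-style balancing of a symmetric, strictly positive
    contact matrix; returns the normalized matrix and the accumulated
    per-locus bias factors."""
    m = np.array(contacts, dtype=float)
    bias = np.ones(m.shape[0])
    for _ in range(n_iter):
        coverage = m.sum(axis=1)
        coverage /= coverage.mean()          # normalize to mean 1
        m /= np.outer(coverage, coverage)    # symmetric rescaling
        bias *= coverage
    return m, bias

# Synthetic symmetric "raw counts" with uneven coverage.
rng = np.random.default_rng(1)
raw = rng.random((6, 6))
raw = raw + raw.T + 1.0
normalized, bias = iterative_correction(raw)
```

After balancing, every row (locus) of the normalized matrix has the same total coverage, which is exactly the assumption the abstract's bias-removal discussion builds on.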
Improved Systematic Pointing Error Model for the DSN Antennas
NASA Technical Reports Server (NTRS)
Rochblatt, David J.; Withington, Philip M.; Richter, Paul H.
2011-01-01
New pointing models have been developed for large reflector antennas whose construction is founded on elevation over azimuth mount. At JPL, the new models were applied to the Deep Space Network (DSN) 34-meter antenna s subnet for corrections of their systematic pointing errors; it achieved significant improvement in performance at Ka-band (32-GHz) and X-band (8.4-GHz). The new models provide pointing improvements relative to the traditional models by a factor of two to three, which translate to approximately 3-dB performance improvement at Ka-band. For radio science experiments where blind pointing performance is critical, the new innovation provides a new enabling technology. The model extends the traditional physical models with higher-order mathematical terms, thereby increasing the resolution of the model for a better fit to the underlying systematic imperfections that are the cause of antenna pointing errors. The philosophy of the traditional model was that all mathematical terms in the model must be traced to a physical phenomenon causing antenna pointing errors. The traditional physical terms are: antenna axis tilts, gravitational flexure, azimuth collimation, azimuth encoder fixed offset, azimuth and elevation skew, elevation encoder fixed offset, residual refraction, azimuth encoder scale error, and antenna pointing de-rotation terms for beam waveguide (BWG) antennas. Besides the addition of spherical harmonics terms, the new models differ from the traditional ones in that the coefficients for the cross-elevation and elevation corrections are completely independent and may be different, while in the traditional model, some of the terms are identical. In addition, the new software allows for all-sky or mission-specific model development, and can utilize the previously used model as an a priori estimate for the development of the updated models.
Error Estimates for Numerical Integration Rules
ERIC Educational Resources Information Center
Mercer, Peter R.
2005-01-01
The starting point for this discussion of error estimates is the fact that integrals that arise in Fourier series have properties that can be used to get improved bounds. This idea is extended to more general situations.
Systematic errors for a Mueller matrix dual rotating compensator ellipsometer.
Broch, Laurent; En Naciri, Aotmane; Johann, Luc
2008-06-01
The characterization of anisotropic materials and complex systems by ellipsometry has pushed instrument design toward measurement of the full reflection Mueller matrix of the sample with great precision. Mueller matrix ellipsometers have therefore emerged over the past twenty years. The values of some coefficients of the matrix can be very small, and errors due to noise or systematic effects can distort the analysis. We present a detailed characterization of the systematic errors for a Mueller matrix ellipsometer in the dual-rotating-compensator configuration. Starting from a general formalism, we derive explicit first-order expressions for the errors on all the coefficients of the Mueller matrix of the sample. The errors caused by inaccuracy of the azimuthal arrangement of the optical components and by residual ellipticity introduced by imperfect optical elements are shown. A new method based on a four-zone averaging measurement is proposed to cancel the systematic errors. PMID:18545594
Bayes Error Rate Estimation Using Classifier Ensembles
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2003-01-01
The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or finding bounds for the Bayes error, in general, yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that just looks at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
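A plug-in flavour of the first (averaging-based) idea can be sketched as follows: pool the a posteriori class-probability estimates of several classifiers and estimate the Bayes error as the mean of 1 − max_k p̄(k|x). This is a simplified stand-in for the paper's framework; the "classifier outputs" in the test are synthetic arrays, not trained models.

```python
import numpy as np

def bayes_error_estimate(posteriors):
    """Estimate the Bayes error from ensemble posterior estimates.

    posteriors: array of shape (n_classifiers, n_samples, n_classes),
    each slice holding one classifier's a posteriori class-probability
    estimates for every sample.
    """
    p_bar = posteriors.mean(axis=0)                    # ensemble average
    p_bar = p_bar / p_bar.sum(axis=1, keepdims=True)   # renormalize per sample
    # Bayes-optimal error for posterior p is 1 - max_k p_k; average over x.
    return float(np.mean(1.0 - p_bar.max(axis=1)))
```

When every classifier reports confident, well-calibrated posteriors the estimate is small; maximally uncertain posteriors drive it toward the chance level.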
Errors in quantum tomography: diagnosing systematic versus statistical errors
NASA Astrophysics Data System (ADS)
Langford, Nathan K.
2013-03-01
A prime goal of quantum tomography is to provide quantitatively rigorous characterization of quantum systems, be they states, processes or measurements, particularly for the purposes of trouble-shooting and benchmarking experiments in quantum information science. A range of techniques exist to enable the calculation of errors, such as Monte-Carlo simulations, but their quantitative value is arguably fundamentally flawed without an equally rigorous way of authenticating the quality of a reconstruction to ensure it provides a reasonable representation of the data, given the known noise sources. A key motivation for developing such a tool is to enable experimentalists to rigorously diagnose the presence of technical noise in their tomographic data. In this work, I explore the performance of the chi-squared goodness-of-fit test statistic as a measure of reconstruction quality. I show that its behaviour deviates noticeably from expectations for states lying near the boundaries of physical state space, severely undermining its usefulness as a quantitative tool precisely in the region which is of most interest in quantum information processing tasks. I suggest a simple, heuristic approach to compensate for these effects and present numerical simulations showing that this approach provides substantially improved performance.
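The chi-squared goodness-of-fit statistic studied above compares measured counts with those predicted by the reconstructed state. A minimal single-qubit illustration (the measurement set, state, and mock counts are illustrative assumptions, not the paper's test cases):

```python
import numpy as np

def expected_counts(rho, projectors, shots):
    """Counts predicted by state rho for projective measurements."""
    return np.array([shots * np.real(np.trace(rho @ P)) for P in projectors])

# Measurement set: sigma_z and sigma_x projectors for one qubit.
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
projs = [(I2 + sz) / 2, (I2 - sz) / 2, (I2 + sx) / 2, (I2 - sx) / 2]

rho_fit = np.array([[0.5, 0.25], [0.25, 0.5]], dtype=complex)  # reconstructed state
shots = 1000

pred = expected_counts(rho_fit, projs, shots)          # [500, 500, 750, 250]
observed = pred + np.array([10.0, -10.0, 5.0, -5.0])   # mock measured counts

# Pearson chi-squared statistic and reduced chi-squared.
chi2 = float(np.sum((observed - pred) ** 2 / pred))
dof = len(projs) - 3            # a qubit density matrix has 3 free parameters
reduced_chi2 = chi2 / dof
```

A reduced chi-squared far from 1 flags a poor fit; the abstract's point is that this diagnostic misbehaves for states near the boundary of physical state space, where the usual distributional assumptions break down.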
Systematic parameter errors in inspiraling neutron star binaries.
Favata, Marc
2014-03-14
The coalescence of two neutron stars is an important gravitational wave source for LIGO and other detectors. Numerous studies have considered the precision with which binary parameters (masses, spins, Love numbers) can be measured. Here I consider the accuracy with which these parameters can be determined in the presence of systematic errors due to waveform approximations. These approximations include truncation of the post-Newtonian (PN) series and neglect of neutron star (NS) spin, tidal deformation, or orbital eccentricity. All of these effects can yield systematic errors that exceed statistical errors for plausible parameter values. In particular, neglecting spin, eccentricity, or high-order PN terms causes a significant bias in the NS Love number. Tidal effects will not be measurable with PN inspiral waveforms if these systematic errors are not controlled. PMID:24679276
Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro
2011-01-01
During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors. PMID:21731649
Zhang Le; Timbie, Peter; Karakci, Ata; Korotkov, Andrei; Tucker, Gregory S.; Sutter, Paul M.; Wandelt, Benjamin D.; Bunn, Emory F.
2013-06-01
We investigate the impact of instrumental systematic errors in interferometric measurements of the cosmic microwave background (CMB) temperature and polarization power spectra. We simulate interferometric CMB observations to generate mock visibilities and estimate power spectra using the statistically optimal maximum likelihood technique. We define a quadratic error measure to determine allowable levels of systematic error that do not induce power spectrum errors beyond a given tolerance. As an example, in this study we focus on differential pointing errors. The effects of other systematics can be simulated by this pipeline in a straightforward manner. We find that, in order to accurately recover the underlying B-modes for r = 0.01 at 28 < l < 384, Gaussian-distributed pointing errors must be controlled to 0.7° root mean square for an interferometer with an antenna configuration similar to QUBIC, in agreement with analytical estimates. Only the statistical uncertainty for 28 < l < 88 would be changed at the ~10% level. With the same instrumental configuration, we find that the pointing errors would slightly bias the 2σ upper limit of the tensor-to-scalar ratio r by ~10%. We also show that the impact of pointing errors on the TB and EB measurements is negligibly small.
Error estimation for variational nodal calculations
Zhang, H.; Lewis, E.E.
1998-12-31
Adaptive grid methods are widely employed in finite element solutions to both solid and fluid mechanics problems. Either the size of the element is reduced (h refinement) or the order of the trial function is increased (p refinement) locally to improve the accuracy of the solution without a commensurate increase in computational effort. Success of these methods requires effective local error estimates to determine those parts of the problem domain where the solution should be refined. Adaptive methods have recently been applied to the spatial variables of the discrete ordinates equations. As a first step in the development of adaptive methods that are compatible with the variational nodal method, the authors examine error estimates for use in conjunction with spatial variables. The variational nodal method lends itself well to p refinement because the space-angle trial functions are hierarchical. Here they examine an error estimator for use with spatial p refinement for the diffusion approximation. Eventually, angular refinement will also be considered using spherical harmonics approximations.
Students' Systematic Errors When Solving Kinetic and Chemical Equilibrium Problems.
ERIC Educational Resources Information Center
BouJaoude, Saouma
Although students' misconceptions about the concept of chemical equilibrium have been the focus of numerous investigations, few have investigated students' systematic errors when solving equilibrium problems at the college level. Students (n=189) enrolled in the second semester of a first-year chemistry course for science and engineering majors at…
Bayesian conformity assessment in presence of systematic measurement errors
NASA Astrophysics Data System (ADS)
Carobbi, Carlo; Pennecchi, Francesca
2016-04-01
Conformity assessment of the distribution of the values of a quantity is investigated using a Bayesian approach. The effect of systematic, non-negligible measurement errors is taken into account. The analysis is general, in the sense that the probability distribution of the quantity can be of any kind, that is, even different from the ubiquitous normal distribution, and the measurement model function, linking the measurand with the observable and non-observable influence quantities, can be non-linear. Further, any joint probability density function can be used to model the available knowledge about the systematic errors. It is demonstrated that the result of the Bayesian analysis developed here reduces to the standard result (obtained through a frequentist approach) when the systematic measurement errors are negligible. A consolidated frequentist extension of that standard result, aimed at including the effect of a systematic measurement error, is compared directly with the Bayesian result, whose superiority is demonstrated. Application of the results obtained here to the derivation of the operating characteristic curves used in sampling plans for inspection by variables is also introduced.
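A Monte Carlo sketch of a Bayesian conformance probability in the presence of a systematic error, under simplifying assumptions not taken from the paper (Gaussian errors, a flat prior on the measurand, and illustrative tolerance limits): model the observation as y = x + b + e with random error e ~ N(0, u_e) and systematic error b ~ N(0, u_b), then compute the posterior probability that the measurand x lies inside the tolerance interval.

```python
import numpy as np

rng = np.random.default_rng(42)
y, u_e, u_b = 9.8, 0.10, 0.15        # observation and standard uncertainties
lower, upper = 9.5, 10.5             # tolerance interval (illustrative)

b = rng.normal(0.0, u_b, 200_000)    # draws of the systematic error
x = rng.normal(y - b, u_e)           # posterior draws of the measurand
p_conform = float(np.mean((x >= lower) & (x <= upper)))
# Analytically, x ~ N(y, sqrt(u_e**2 + u_b**2)), so p_conform ~ 0.95 here.
```

The same sampling scheme extends directly to non-Gaussian systematic-error priors and non-linear model functions, which is the generality the abstract emphasizes.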
Density Estimation Framework for Model Error Assessment
NASA Astrophysics Data System (ADS)
Sargsyan, K.; Liu, Z.; Najm, H. N.; Safta, C.; VanBloemenWaanders, B.; Michelsen, H. A.; Bambha, R.
2014-12-01
In this work we highlight the importance of model error assessment in physical model calibration studies. Conventional calibration methods often assume the model is perfect and account for data noise only. Consequently, the estimated parameters typically have biased values that implicitly compensate for model deficiencies. Moreover, improving the amount and the quality of data may not improve the parameter estimates since the model discrepancy is not accounted for. In state-of-the-art methods model discrepancy is explicitly accounted for by enhancing the physical model with a synthetic statistical additive term, which allows appropriate parameter estimates. However, these statistical additive terms do not increase the predictive capability of the model because they are tuned for particular output observables and may even violate physical constraints. We introduce a framework in which model errors are captured by allowing variability in specific model components and parameterizations for the purpose of achieving meaningful predictions that are both consistent with the data spread and appropriately disambiguate model and data errors. Here we cast model parameters as random variables, embedding the calibration problem within a density estimation framework. Further, we calibrate for the parameters of the joint input density. The likelihood function for the associated inverse problem is degenerate, therefore we use Approximate Bayesian Computation (ABC) to build prediction-constraining likelihoods and illustrate the strengths of the method on synthetic cases. We also apply the ABC-enhanced density estimation to the TransCom 3 CO2 intercomparison study (Gurney, K. R., et al., Tellus, 55B, pp. 555-579, 2003) and calibrate 15 transport models for regional carbon sources and sinks given atmospheric CO2 concentration measurements.
Belashov, A V; Petrov, N V; Semenova, I V
2016-01-01
This paper explores the concept of image-plane holographic tomography applied to measurements of laser-induced thermal gradients in an aqueous solution of a photosensitizer, with respect to the reconstruction accuracy of three-dimensional variations of the refractive index. It uses a least-squares estimation algorithm to reconstruct refractive index variations in each holographic projection. Combined with a bitelecentric optical system, which transfers the focused projection to the sensor plane, this facilitates the elimination of diffraction artifacts and the suppression of noise. The work estimates the influence of typical random and systematic errors in experiments and concludes that random errors, such as accidental measurement errors or the presence of noise, can be significantly suppressed by increasing the number of recorded digital holograms. In contrast, even comparatively small systematic errors, such as a displacement of the rotation-axis projection in the course of the reconstruction procedure, can significantly distort the results. PMID:26835625
Systematic lossy forward error protection for error-resilient digital video broadcasting
NASA Astrophysics Data System (ADS)
Rane, Shantanu D.; Aaron, Anne; Girod, Bernd
2004-01-01
We present a novel scheme for error-resilient digital video broadcasting, using the Wyner-Ziv coding paradigm. We apply the general framework of systematic lossy source-channel coding to generate a supplementary bitstream that can correct transmission errors in the decoded video waveform up to a certain residual distortion. The systematic portion consists of a conventional MPEG-coded bitstream, which is transmitted over the error-prone channel without forward error correction. The supplementary bitstream is a low-rate representation of the transmitted video sequence generated using Wyner-Ziv encoding. We use the conventionally decoded, error-concealed MPEG video sequence as side information to decode the Wyner-Ziv bits. The decoder combines the error-prone side information and the Wyner-Ziv description to yield an improved decoded video signal. Our results indicate that, over a large range of channel error probabilities, this scheme yields superior video quality when compared with traditional forward error correction techniques employed in digital video broadcasting.
The Effect of Systematic Error in Forced Oscillation Testing
NASA Technical Reports Server (NTRS)
Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.
2012-01-01
One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used to represent aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or high angular rates, the conventional stability-derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed using a high-fidelity simulation of a forced-oscillation test rig and assessed using both Design of Experiments and Monte Carlo methods.
Weak gravitational lensing systematic errors in the dark energy survey
NASA Astrophysics Data System (ADS)
Plazas, Andres Alejandro
Dark energy is one of the most important unsolved problems in modern physics, and weak gravitational lensing (WL) by mass structures along the line of sight ("cosmic shear") is a promising technique to learn more about its nature. However, WL is subject to numerous systematic errors which induce biases in measured cosmological parameters and prevent the development of its full potential. In this thesis, we advance the understanding of WL systematics in the context of the Dark Energy Survey (DES). We develop a testing suite to assess the performance of the shapelet-based DES WL measurement pipeline. We determine that the measurement bias of the parameters of our Point Spread Function (PSF) model scales as (S/N)^-2, implying that a PSF S/N > 75 is needed to satisfy DES requirements. PSF anisotropy suppression also satisfies the requirements for source galaxies with S/N ≳ 45. For low-noise, marginally resolved exponential galaxies, the shear calibration errors are up to about 0.06% (for shear values ≲ 0.075). Galaxies with S/N ≳ 75 present about 1% errors, sufficient for first-year DES data. However, more work is needed to satisfy full-area DES requirements, especially in the high-noise regime. We then implement tests to validate the high accuracy of the map between pixel coordinates and sky coordinates (the astrometric solution), which is crucial to detect the required number of galaxies for WL in stacked images. We also study the effect of atmospheric dispersion on cosmic-shear experiments such as DES and the Large Synoptic Survey Telescope (LSST) in the four griz bands. For DES (LSST), we find systematics in the g and r (g, r, and i) bands that are larger than required. We find that a simple linear correction in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r (i) band for DES (LSST). More complex corrections will likely reduce the systematic cosmic-shear errors below statistical errors for the LSST r band.
However, g-band dispersion effects remain large enough for induced systematics to dominate the statistical error of both surveys, so cosmic-shear measurements should rely on the redder bands.
Spatial reasoning in the treatment of systematic sensor errors
Beckerman, M.; Jones, J.P.; Mann, R.C.; Farkas, L.A.; Johnston, S.E.
1988-01-01
In processing ultrasonic and visual sensor data acquired by mobile robots systematic errors can occur. The sonar errors include distortions in size and surface orientation due to the beam resolution, and false echoes. The vision errors include, among others, ambiguities in discriminating depth discontinuities from intensity gradients generated by variations in surface brightness. In this paper we present a methodology for the removal of systematic errors using data from the sonar sensor domain to guide the processing of information in the vision domain, and vice versa. During the sonar data processing some errors are removed from 2D navigation maps through pattern analyses and consistent-labelling conditions, using spatial reasoning about the sonar beam and object characteristics. Others are removed using visual information. In the vision data processing vertical edge segments are extracted using a Canny-like algorithm, and are labelled. Object edge features are then constructed from the segments using statistical and spatial analyses. A least-squares method is used during the statistical analysis, and sonar range data are used in the spatial analysis. 7 refs., 10 figs.
Ultraspectral Sounding Retrieval Error Budget and Estimation
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, L. Larrabee; Yang, Ping
2011-01-01
The ultraspectral infrared radiances obtained from satellite observations provide atmospheric, surface, and/or cloud information. The intent of measuring the thermodynamic state is the initialization of weather and climate models. Great effort has been devoted to retrieving and validating these atmospheric, surface, and/or cloud properties. The Error Consistency Analysis Scheme (ECAS), through fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of absolute and standard deviation of differences in both the spectral radiance and retrieved geophysical parameter domains. The retrieval error is assessed through ECAS without the assistance of other independent measurements such as radiosonde data. ECAS re-evaluates instrument random noise and establishes the link between radiometric accuracy and retrieved geophysical parameter accuracy. ECAS can be applied to measurements of any ultraspectral instrument and any retrieval scheme with an associated RTM. In this paper, ECAS is described and demonstrated with measurements from the METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI).
SYSTEMATIC CONTINUUM ERRORS IN THE Lyα FOREST AND THE MEASURED TEMPERATURE-DENSITY RELATION
Lee, Khee-Gan
2012-07-10
Continuum fitting uncertainties are a major source of error in estimates of the temperature-density relation (usually parameterized as a power law, T ∝ Δ^(γ-1)) of the intergalactic medium through the flux probability distribution function (PDF) of the Lyα forest. Using a simple order-of-magnitude calculation, we show that few-percent-level systematic errors in the placement of the quasar continuum due to, e.g., a uniform low-absorption Gunn-Peterson component could lead to errors in γ of the order of unity. This is quantified further using a simple semi-analytic model of the Lyα forest flux PDF. We find that underestimates (overestimates) in the continuum level can lead to a lower (higher) measured value of γ. By fitting models to mock data realizations generated with current observational errors, we find that continuum errors can cause a systematic bias in the estimated temperature-density relation of δ(γ) ≈ -0.1, while the error is increased to σ_γ ≈ 0.2, compared with σ_γ ≈ 0.1 in the absence of continuum errors.
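The basic rescaling effect of a misplaced continuum can be sketched with a toy flux sample: since the transmitted flux is the ratio of observed flux to the estimated continuum, a fractional continuum error divides out of every flux value and shifts the entire flux PDF. The Beta-distributed fluxes below are purely illustrative, not the paper's semi-analytic PDF model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy transmitted-flux sample in (0, 1); the real Lya forest flux PDF
# is more structured, this Beta shape is only for illustration
F_true = rng.beta(5.0, 2.0, 100_000)

# A continuum placed 2% too low divides out of the flux, shifting
# every measured flux (and hence the whole flux PDF) upward
cont_err = -0.02                  # fractional continuum placement error
F_meas = F_true / (1.0 + cont_err)
```

Fitting a temperature-density model to the shifted PDF rather than the true one is what produces the systematic bias in γ described in the abstract.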
Error concealment using multiresolution motion estimation
NASA Astrophysics Data System (ADS)
Tsai, Augustine; Wiener, Stephen M.; Wilder, Joseph
1995-10-01
An error concealment scheme for MPEG video networking is presented. Cell loss occurs in the presence of network congestion and buffer overflow. This phenomenon of cell loss transforms into lost image blocks in the decoding process, which can severely degrade the viewing quality. The new method differs from the conventional concealment by its exploitation of spatial and temporal redundancies in large scale. The motion estimation is carried out by registering images within a multiresolution pyramid. The global motion is estimated in the lowest resolution level, and is then used to update and refine the local motion. The local motion is further refined iteratively at higher resolution levels. An affine transform is used to extract translation, scaling and rotation parameters. In many applications where there is significant camera motion (e.g., remote surveillance), the new method performs better than the conventional concealment.
Factoring Algebraic Error for Relative Pose Estimation
Lindstrom, P; Duchaineau, M
2009-03-09
We address the problem of estimating the relative pose, i.e. translation and rotation, of two calibrated cameras from image point correspondences. Our approach is to factor the nonlinear algebraic pose error functional into translational and rotational components, and to optimize translation and rotation independently. This factorization admits subproblems that can be solved using direct methods with practical guarantees on global optimality. That is, for a given translation, the corresponding optimal rotation can directly be determined, and vice versa. We show that these subproblems are equivalent to computing the least eigenvector of second- and fourth-order symmetric tensors. When neither translation or rotation is known, alternating translation and rotation optimization leads to a simple, efficient, and robust algorithm for pose estimation that improves on the well-known 5- and 8-point methods.
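The least-eigenvector subproblem mentioned above can be sketched generically: for a symmetric matrix M (standing in here for the paper's second-order error tensor, whose construction from point correspondences is not reproduced), the unit vector minimizing the quadratic form x^T M x is the eigenvector of the smallest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary symmetric positive semi-definite matrix standing in for
# the error tensor built from image point correspondences
A = rng.standard_normal((4, 4))
M = A.T @ A

# np.linalg.eigh returns eigenvalues in ascending order, so the
# first eigenvector minimizes x^T M x over unit vectors x
vals, vecs = np.linalg.eigh(M)
x_opt = vecs[:, 0]
q_min = float(x_opt @ M @ x_opt)   # equals the smallest eigenvalue
```

In the alternating scheme described in the abstract, a solve of this form would be performed for the rotation given a fixed translation, and vice versa.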
Mattis, Ina; Tesche, Matthias; Grein, Matthias; Freudenthaler, Volker; Müller, Detlef
2009-05-10
Signals of many types of aerosol lidars can be affected by a significant systematic error if depolarizing scatterers are present in the atmosphere. That error is caused by a polarization-dependent receiver transmission. In this contribution we present an estimation of the magnitude of this systematic error. We show that lidar signals can be biased by more than 20% if linearly polarized laser light is emitted, if both polarization components of the backscattered light are measured with a single detection channel, and if the receiver transmissions for these two polarization components differ by more than 50%. This signal bias increases with increasing ratio between the two transmission values (transmission ratio) or with the volume depolarization ratio of the scatterers. The resulting error of the particle backscatter coefficient increases with decreasing backscatter ratio. If the particle backscatter coefficients are to have an accuracy better than 5%, the transmission ratio has to be in the range between 0.85 and 1.15. We present a method to correct the measured signals for this bias. We demonstrate an experimental method for the determination of the transmission ratio. We use collocated measurements of a lidar system strongly affected by this signal bias and an unbiased reference system to verify the applicability of the correction scheme. The errors in the case of no correction are illustrated with example measurements of fresh Saharan dust. PMID:19424398
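A minimal sketch of how such a polarization-dependent bias scales, assuming a simplified two-component model (this is not the paper's full derivation; the function name and parameterization are ours):

```python
def signal_bias(delta_v: float, k_t: float) -> float:
    """Ratio of measured to true signal when both polarization
    components of the backscattered light hit one detector with
    different transmissions.

    delta_v : volume depolarization ratio (cross/parallel backscatter)
    k_t     : receiver transmission ratio T_cross / T_parallel

    Simplified model: measured ~ T_par * (beta_par + k_t * beta_cross),
    true ~ beta_par + beta_cross (up to a common calibration constant).
    """
    return (1.0 + k_t * delta_v) / (1.0 + delta_v)
```

In this toy model the bias vanishes when the transmission ratio is unity or when the scatterers do not depolarize, and it grows as the transmission ratio departs from one for strongly depolarizing scatterers such as dust, consistent with the qualitative behavior described in the abstract.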
Patient safety strategies targeted at diagnostic errors: a systematic review.
McDonald, Kathryn M; Matesic, Brian; Contopoulos-Ioannidis, Despina G; Lonhart, Julia; Schmidt, Eric; Pineda, Noelle; Ioannidis, John P A
2013-03-01
Missed, delayed, or incorrect diagnosis can lead to inappropriate patient care, poor patient outcomes, and increased cost. This systematic review analyzed evaluations of interventions to prevent diagnostic errors. Searches used MEDLINE (1966 to October 2012), the Agency for Healthcare Research and Quality's Patient Safety Network, bibliographies, and prior systematic reviews. Studies that evaluated any intervention to decrease diagnostic errors in any clinical setting and with any study design were eligible, provided that they addressed a patient-related outcome. Two independent reviewers extracted study data and rated study quality. There were 109 studies that addressed 1 or more intervention categories: personnel changes (n = 6), educational interventions (n = 11), technique (n = 23), structured process changes (n = 27), technology-based systems interventions (n = 32), and review methods (n = 38). Of 14 randomized trials, which were rated as having mostly low to moderate risk of bias, 11 reported interventions that reduced diagnostic errors. Evidence seemed strongest for technology-based systems (for example, text message alerting) and specific techniques (for example, testing equipment adaptations). Studies provided no information on harms, cost, or contextual application of interventions. Overall, the review showed a growing field of diagnostic error research and categorized and identified promising interventions that warrant evaluation in large studies across diverse settings. PMID:23460094
An unbiased estimator of peculiar velocity with Gaussian distributed errors for precision cosmology
NASA Astrophysics Data System (ADS)
Watkins, Richard; Feldman, Hume A.
2015-06-01
We introduce a new estimator of the peculiar velocity of a galaxy or group of galaxies from redshift and distance estimates. This estimator results in peculiar velocity estimates which are statistically unbiased and have Gaussian distributed errors, thus complying with the assumptions of analyses that rely on individual peculiar velocities. We apply this estimator to the SFI++ and the Cosmicflows-2 catalogues of galaxy distances and, since peculiar velocity estimates of distant galaxies are error dominated, examine their error distributions. The adoption of the new estimator significantly improves the accuracy and validity of studies of the large-scale peculiar velocity field that assume Gaussian distributed velocity errors and eliminates potential systematic biases, thus helping to bring peculiar velocity analysis into the era of precision cosmology. In addition, our method of examining the distribution of velocity errors should provide a useful check of the statistics of large peculiar velocity catalogues, particularly those that are compiled out of data from multiple sources.
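The estimator has a logarithmic form, v = cz ln(cz / (H0 d)), which reduces to the familiar v ≈ cz - H0 d for small peculiar velocities but yields Gaussian-distributed velocity errors when the distance errors are log-normal. A minimal sketch (the H0 value is an arbitrary illustrative choice):

```python
import math

H0 = 70.0  # km/s/Mpc, illustrative value only

def peculiar_velocity(cz: float, d: float) -> float:
    """Log-form peculiar-velocity estimator v = cz * ln(cz / (H0 * d)).

    cz : observed recession velocity in km/s
    d  : estimated distance in Mpc
    """
    return cz * math.log(cz / (H0 * d))
```

For a galaxy at rest in the Hubble flow (cz = H0 d) the estimate is exactly zero, and for small offsets it agrees with the linear difference cz - H0 d.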
Systematic Approach for Decommissioning Planning and Estimating
Dam, A. S.
2002-02-26
Nuclear facility decommissioning, satisfactorily completed at the lowest cost, relies on a systematic approach to planning, estimating, and documenting the work. High-quality information is needed to properly perform the planning and estimating. A systematic approach to collecting and maintaining the needed information is recommended, using a knowledgebase system for information management. A systematic approach is also recommended to develop the decommissioning plan, cost estimate, and schedule. A probabilistic project cost and schedule risk analysis is included as part of the planning process. The entire effort is performed by an experienced team of decommissioning planners, cost estimators, schedulers, and facility-knowledgeable owner representatives. The plant data, work plans, cost, and schedule are entered into a knowledgebase. This systematic approach has been used successfully for decommissioning planning and cost estimating for a commercial nuclear power plant. Elements of this approach have been used for numerous cost estimates and estimate reviews. The plan and estimate in the knowledgebase should be a living document, updated periodically to support decommissioning fund provisioning, with the plan ready for use when the need arises.
ON THE ESTIMATION OF SYSTEMATIC UNCERTAINTIES OF STAR FORMATION HISTORIES
Dolphin, Andrew E.
2012-05-20
In most star formation history (SFH) measurements, the reported uncertainties are those due to effects whose sizes can be readily measured: Poisson noise, adopted distance and extinction, and binning choices in the solution itself. However, the largest source of error, systematics in the adopted isochrones, is usually ignored and very rarely explicitly incorporated into the uncertainties. I propose a process by which estimates of the uncertainties due to evolutionary models can be incorporated into the SFH uncertainties. This process relies on the application of shifts in temperature and luminosity, the sizes of which must be calibrated for the data being analyzed. While there are inherent limitations, the ability to estimate the effect of systematic errors and include them in the overall uncertainty is significant. The effects are most notable in the case of shallow photometry, for which SFH measurements rely on evolved stars.
Estimating IMU heading error from SAR images.
Doerry, Armin Walter
2009-03-01
Angular orientation errors of the real antenna for Synthetic Aperture Radar (SAR) will manifest as undesired illumination gradients in SAR images. These gradients can be measured and the pointing error calculated. This can be done for single images, but more robustly using multi-image methods; several such methods are provided in this report. The pointing error can then be fed back to the navigation Kalman filter to correct for problematic heading (yaw) error drift. This can mitigate the need for uncomfortable and undesired IMU alignment maneuvers such as S-turns.
Martin, D.L.
1992-01-01
Water-leaving radiances and phytoplankton pigment concentrations are calculated from Coastal Zone Color Scanner (CZCS) total radiance measurements by separating atmospheric Rayleigh and aerosol radiances from the total radiance signal measured at the satellite. Multiple scattering interactions between Rayleigh and aerosol components together with other meteorologically-moderated radiances cause systematic errors in calculated water-leaving radiances and produce errors in retrieved phytoplankton pigment concentrations. This thesis developed techniques which minimize the effects of these systematic errors in Level IIA CZCS imagery. Results of previous radiative transfer modeling by Gordon and Castano are extended to predict the pixel-specific magnitude of systematic errors caused by Rayleigh-aerosol multiple scattering interactions. CZCS orbital passes in which the ocean is viewed through a modeled, physically realistic atmosphere are simulated mathematically and radiance-retrieval errors are calculated for a range of aerosol optical depths. Pixels which exceed an error threshold in the simulated CZCS image are rejected in a corresponding actual image. Meteorological phenomena also cause artifactual errors in CZCS-derived phytoplankton pigment concentration imagery. Unless data contaminated with these effects are masked and excluded from analysis, they will be interpreted as containing valid biological information and will contribute significantly to erroneous estimates of phytoplankton temporal and spatial variability. A method is developed which minimizes these errors through a sequence of quality-control procedures including the calculation of variable cloud-threshold radiances, the computation of the extent of electronic overshoot from bright reflectors, and the imposition of a buffer zone around clouds to exclude contaminated data.
CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes
NASA Technical Reports Server (NTRS)
Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.
2012-01-01
Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43 ± 0.35 PgC/yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.
Inertial and Magnetic Sensor Data Compression Considering the Estimation Error
Suh, Young Soo
2009-01-01
This paper presents a compression method for inertial and magnetic sensor data, where the compressed data are used to estimate some states. When sensor data are bounded, the proposed compression method guarantees that the compression error is smaller than a prescribed bound. The manner in which this error bound affects the bit rate and the estimation error is investigated. Through the simulation, it is shown that the estimation error is improved by 18.81% over a test set of 12 cases compared with a filter that does not use the compression error bound. PMID:22454564
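A generic sketch of bounded-error compression via uniform quantization (not the paper's specific coder): with a quantization step of 2·ε, the reconstruction error is guaranteed to stay within ε, which is the kind of prescribed-bound guarantee the abstract describes.

```python
import numpy as np

def compress(x, err_bound):
    """Quantize onto a uniform grid with step 2*err_bound; the integer
    codes can then be entropy-coded for transmission."""
    return np.round(np.asarray(x) / (2.0 * err_bound)).astype(np.int64)

def decompress(codes, err_bound):
    """Reconstruct grid values; the error never exceeds err_bound."""
    return np.asarray(codes) * (2.0 * err_bound)
```

Tightening the bound lowers the reconstruction (and hence estimation) error at the cost of a higher bit rate, which is the trade-off the paper investigates.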
A Note on Confidence Interval Estimation and Margin of Error
ERIC Educational Resources Information Center
Gilliland, Dennis; Melfi, Vince
2010-01-01
Confidence interval estimation is a fundamental technique in statistical inference. Margin of error is used to delimit the error in estimation. Dispelling misinterpretations that teachers and students give to these terms is important. In this note, we give examples of the confusion that can arise in regard to confidence interval estimation and…
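For a sample mean, the margin of error at approximately 95% confidence is z·s/√n; a minimal sketch:

```python
import math

def margin_of_error(s: float, n: int, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for a
    mean: z * s / sqrt(n), where s is the sample standard deviation
    and n is the sample size."""
    return z * s / math.sqrt(n)
```

The interval is then (mean - margin, mean + margin); note that halving the margin requires quadrupling the sample size.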
The Effects of Computational Modeling Errors on the Estimation of Statistical Mechanical Variables.
Faver, John C; Yang, Wei; Merz, Kenneth M
2012-10-01
Computational models used in the estimation of thermodynamic quantities of large chemical systems often require approximate energy models that rely on parameterization and cancellation of errors to yield agreement with experimental measurements. In this work, we show how energy function errors propagate when computing statistical mechanics-derived thermodynamic quantities. Assuming that each microstate included in a statistical ensemble has a measurable amount of error in its calculated energy, we derive low-order expressions for the propagation of these errors in free energy, average energy, and entropy. Through gedanken experiments we show the expected behavior of these error propagation formulas on hypothetical energy surfaces. For very large microstate energy errors, these low-order formulas disagree with estimates from Monte Carlo simulations of error propagation. Hence, such simulations of error propagation may be required when using poor potential energy functions. Propagated systematic errors predicted by these methods can be removed from computed quantities, while propagated random errors yield uncertainty estimates. Importantly, we find that end-point free energy methods maximize random errors and that local sampling of potential energy wells decreases random error significantly. Hence, end-point methods should be avoided in energy computations and should be replaced by methods that incorporate local sampling. The techniques described herein will be used in future work involving the calculation of free energies of biomolecular processes, where error corrections are expected to yield improved agreement with experiment. PMID:23413365
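The first-order propagation of per-microstate energy errors into the free energy follows from F = -kT ln Σᵢ e^(-Eᵢ/kT): since ∂F/∂Eᵢ = pᵢ (the Boltzmann weight of state i), a small error εᵢ in each energy shifts F by approximately Σᵢ pᵢ εᵢ. A sketch with toy energies and errors (not the paper's systems):

```python
import numpy as np

rng = np.random.default_rng(4)
kT = 1.0
E = rng.uniform(0.0, 5.0, 50)          # toy microstate energies
p = np.exp(-E / kT)
p /= p.sum()                           # Boltzmann weights

def free_energy(energies):
    return -kT * np.log(np.sum(np.exp(-energies / kT)))

eps = rng.normal(0.0, 0.05, E.size)    # small per-state energy errors

# First-order propagation: dF/dE_i = p_i, so dF ~ sum_i p_i * eps_i
dF_linear = float(np.sum(p * eps))
dF_exact = float(free_energy(E + eps) - free_energy(E))
```

For small errors the linear estimate tracks the exact shift closely; for large microstate errors the low-order formula breaks down, which is where the abstract notes that Monte Carlo simulations of error propagation become necessary.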
Statistical and systematic errors in redshift-space distortion measurements from large surveys
NASA Astrophysics Data System (ADS)
Bianchi, D.; Guzzo, L.; Branchini, E.; Majerotto, E.; de la Torre, S.; Marulli, F.; Moscardini, L.; Angulo, R. E.
2012-12-01
We investigate the impact of statistical and systematic errors on measurements of linear redshift-space distortions (RSD) in future cosmological surveys by analysing large catalogues of dark matter haloes from the baryonic acoustic oscillation simulations at the Institute for Computational Cosmology. These allow us to estimate the dependence of errors on typical survey properties, such as the volume, galaxy density, and mass (i.e. bias factor) of the adopted tracer. We find that measures of the specific growth rate β = f/b using the Hamilton/Kaiser harmonic expansion of the redshift-space correlation function ξ(r_p, π) on scales larger than 3 h^-1 Mpc are typically underestimated by up to 10 per cent for galaxy-sized haloes. This is significantly larger than the corresponding statistical errors, which amount to a few per cent, indicating the importance of non-linear improvements to the Kaiser model to obtain accurate measurements of the growth rate. The systematic error shows a diminishing trend with increasing bias value (i.e. mass) of the haloes considered. We compare the amplitude and trends of statistical errors as a function of survey parameters to predictions obtained with the Fisher information matrix technique. This is what is usually adopted to produce RSD forecasts, based on the Feldman-Kaiser-Peacock prescription for the errors on the power spectrum. We show that this produces parameter errors fairly similar to the standard deviations from the halo catalogues, provided it is applied to strictly linear scales in Fourier space (k < 0.2 h Mpc^-1). Finally, we combine our measurements to define and calibrate an accurate scaling formula for the relative error on β as a function of the same parameters, which closely matches the simulation results in all explored regimes. This provides a handy and plausibly more realistic alternative to the Fisher matrix approach, to quickly and accurately predict statistical errors on RSD expected from future surveys.
Evaluation and suppression of systematic errors in optical subwavelength gratings
NASA Astrophysics Data System (ADS)
Schnabel, Bernd; Kley, Ernst-Bernhard
2000-10-01
Optical subwavelength gratings are of growing interest for the realization of special optical effects such as artificial birefringence or antireflection layers. The optical properties of such elements strongly depend on the accuracy of the fabrication technology and tools. Although e-beam lithography is known to be a high-accuracy fabrication method, even with this technology systematic grating errors may occur which affect the optical function. One example is the existence of grating ghosts (i.e., undesired propagating diffraction orders), which may occur even in the case of subwavelength grating periods. In this paper we describe how this effect is related to the address grid of the e-beam writer. Measurements of the diffraction spectrum of subwavelength gratings indicate the importance of this effect. The adaptation of grating period and address grid allows the fabrication of ghost-free subwavelength gratings.
TRAINING ERRORS AND RUNNING RELATED INJURIES: A SYSTEMATIC REVIEW
Buist, Ida; Sørensen, Henrik; Lind, Martin; Rasmussen, Sten
2012-01-01
Purpose: The purpose of this systematic review was to examine the link between training characteristics (volume, duration, frequency, and intensity) and running related injuries. Methods: A systematic search was performed in PubMed, Web of Science, Embase, and SportDiscus. Studies were included if they examined novice, recreational, or elite runners between the ages of 18 and 65. Exposure variables were training characteristics defined as volume, distance or mileage, time or duration, frequency, intensity, speed or pace, or similar terms. The outcome of interest was Running Related Injuries (RRI) in general or specific RRI in the lower extremity or lower back. Methodological quality was evaluated using quality assessment tools of 11 to 16 items. Results: After examining 4561 titles and abstracts, 63 articles were identified as potentially relevant. Finally, nine retrospective cohort studies, 13 prospective cohort studies, six case-control studies, and three randomized controlled trials were included. The mean quality score was 44.1%. Conflicting results were reported on the relationships between volume, duration, intensity, and frequency and RRI. Conclusion: It was not possible to identify which training errors were related to running related injuries. Still, well supported data on which training errors relate to or cause running related injuries is highly important for determining proper prevention strategies. If methodological limitations in measuring training variables can be resolved, more work can be conducted to define training and the interactions between different training variables, create several hypotheses, test the hypotheses in a large scale prospective study, and explore cause and effect relationships in randomized controlled trials. Level of evidence: 2a PMID:22389869
Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)
NASA Technical Reports Server (NTRS)
Adler, Robert; Gu, Guojun; Huffman, George
2012-01-01
A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
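The screening-and-spread calculation described above (accept products within ±50% of the base estimate, then take the sample standard deviation as the bias error) can be sketched in a few lines. The product values below are purely illustrative, not GPCP data:

```python
import numpy as np

# Hypothetical zonal-mean precipitation estimates (mm/day) from a base
# product and three alternative products; all numbers are made up.
base = np.array([2.0, 3.5, 5.0])         # base (GPCP-like) estimate per zone
products = np.array([
    [1.9, 3.6, 5.2],
    [2.2, 3.1, 4.7],
    [3.5, 3.4, 5.1],                     # first zone is >50% above base -> excluded there
])

def bias_error_estimate(base, products, tol=0.5):
    """Std dev s of products within +/-tol of the base, and relative error s/m."""
    mask = np.abs(products / base - 1.0) <= tol     # keep products within +/-50%
    masked = np.where(mask, products, np.nan)
    s = np.nanstd(masked, axis=0, ddof=1)           # estimated bias error
    return s, s / base                              # absolute and relative (s/m)

s, rel = bias_error_estimate(base, products)
```

The `tol` argument mirrors the ±50% acceptance window of the abstract; area averaging of the gridded `s` values would follow as a separate step.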
Study of the systematic errors in HAUP measurements
NASA Astrophysics Data System (ADS)
Folcia, C. L.; Ortega, J.; Etxebarria, J.
1999-09-01
A phenomenological model is proposed to account for the systematic errors characteristic of the HAUP (high-accuracy universal polarimeter) technique. The model is based on the assumption that the sample surface possesses effective dichroic properties due to polishing, inhomogeneities or differential Fresnel reflection. It is found that there is a sample contribution both to the parasitic ellipticities of the polarizers and to the so-called δY error. This contribution adds to those which are intrinsic to the optical device. Measurements are presented for three test materials: LiNbO3, Rb2ZnCl4 and SiO2. The results are interpreted in the light of the proposed model. For the first two materials the surface effects are visible in the form of spurious linear dichroism and residual ellipticity. For the third case, the problem of multiple reflections arises, since the sample had a high degree of plane parallelism. The influence of these additional contributions is eliminated experimentally. Finally, in view of all the results, some criteria are established for the sample optimization in HAUP measurements.
Using ridge regression in systematic pointing error corrections
NASA Technical Reports Server (NTRS)
Guiar, C. N.
1988-01-01
A pointing error model is used in the antenna calibration process. Data from spacecraft or radio star observations are used to determine the parameters in the model. However, the regression variables are not truly independent, displaying a condition known as multicollinearity. Ridge regression, a biased estimation technique, is used to combat the multicollinearity problem. Two data sets pertaining to Voyager 1 spacecraft tracking (days 105 and 106 of 1987) were analyzed using both linear least squares and ridge regression methods. The advantages and limitations of employing the technique are presented. The problem is not yet fully resolved.
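The contrast between ordinary least squares and ridge regression under multicollinearity can be illustrated with two nearly identical regressors standing in for the correlated pointing-model variables. All data here are synthetic, not Voyager tracking data:

```python
import numpy as np

# Two nearly collinear regressors: individual OLS coefficients become
# unstable, while their sum (the well-determined direction) stays accurate.
rng = np.random.default_rng(0)
x1 = np.linspace(0.0, 1.0, 50)
x2 = x1 + rng.normal(0.0, 1e-3, 50)          # nearly collinear with x1
X = np.column_stack([np.ones(50), x1, x2])
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(0.0, 0.05, 50)

def ridge(X, y, lam):
    """Biased ridge estimate (X'X + lam*I)^-1 X'y; lam = 0 gives OLS."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

beta_ols = ridge(X, y, 0.0)       # unstable individual coefficients
beta_ridge = ridge(X, y, 0.1)     # shrunk, stabilized coefficients
```

The ridge parameter `lam` trades a small bias for a large reduction in coefficient variance, which is exactly the trade the abstract describes.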
Semiclassical Dynamics with Exponentially Small Error Estimates
NASA Astrophysics Data System (ADS)
Hagedorn, George A.; Joye, Alain
We construct approximate solutions to the time-dependent Schrödinger equation
Analysis of Systematic Errors in the MuLan Muon Lifetime Experiment
NASA Astrophysics Data System (ADS)
McNabb, Ronald
2007-04-01
The MuLan experiment seeks to measure the muon lifetime to 1 ppm. To achieve this level of precision, a multitude of systematic errors must be investigated. Analysis of the 2004 data set has been completed, resulting in a total error of 11 ppm (10 ppm statistical, 5 ppm systematic). Data obtained in 2006 are currently being analyzed, with an expected statistical error of 1.3 ppm. This talk will discuss the methods used to study and reduce the systematic errors for the 2004 data set and the improvements for the 2006 data set, which should reduce the systematic errors even further.
Estimating errors in least-squares fitting
NASA Technical Reports Server (NTRS)
Richter, P. H.
1995-01-01
While least-squares fitting procedures are commonly used in data analysis and are extensively discussed in the literature devoted to this subject, the proper assessment of errors resulting from such fits has received relatively little attention. The present work considers statistical errors in the fitted parameters, as well as in the values of the fitted function itself, resulting from random errors in the data. Expressions are derived for the standard error of the fit, as a function of the independent variable, for the general nonlinear and linear fitting problems. Additionally, closed-form expressions are derived for some examples commonly encountered in the scientific and engineering fields, namely ordinary polynomial and Gaussian fitting functions. These results have direct application to the assessment of the antenna gain and system temperature characteristics, in addition to a broad range of problems in data analysis. The effects of the nature of the data and the choice of fitting function on the ability to accurately model the system under study are discussed, and some general rules are deduced to assist workers intent on maximizing the amount of information obtained from a given set of measurements.
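The standard error of the fitted function as a function of the independent variable can be computed for the linear case from the standard parameter covariance sigma^2 (X'X)^-1; the data below are synthetic and only illustrate the general recipe:

```python
import numpy as np

# Linear fit y = b0 + b1*x with i.i.d. Gaussian noise; the standard error
# of the fitted function at each x follows from the parameter covariance.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 40)
y = 0.5 + 2.0 * x + rng.normal(0.0, 0.1, x.size)

X = np.column_stack([np.ones_like(x), x])          # design matrix
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
s2 = res[0] / (x.size - X.shape[1])                # residual variance estimate
cov = s2 * np.linalg.inv(X.T @ X)                  # parameter covariance
# Std error of the fit at each x: sqrt(x_i' cov x_i), smallest at mean(x).
se_fit = np.sqrt(np.einsum('ij,jk,ik->i', X, cov, X))
```

For centered data the standard error of the fit is smallest at the center of the x range and grows toward the ends, which is the qualitative behavior the abstract's closed-form expressions describe.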
NASA Astrophysics Data System (ADS)
Zhang, L.; Xu, M.; Huang, M.; Yu, G.
2009-11-01
Modeling the ecosystem carbon cycle at regional and global scales is crucial to predicting future global atmospheric CO2 concentration, and thus global temperature, which features large uncertainties due mainly to the limitations of our knowledge and of the climate and ecosystem models. There is a growing body of research on parameter estimation against available carbon measurements to reduce model prediction uncertainty at regional and global scales. However, the systematic errors in the observation data have rarely been investigated in the optimization procedures in previous studies. In this study, we examined the feasibility of reducing the impact of systematic errors on parameter estimation using normalization methods, and evaluated the effectiveness of three normalization methods (i.e. maximum normalization, min-max normalization, and z-score normalization) for inversing key parameters, for example the maximum carboxylation rate (Vcmax,25) at a reference temperature of 25°C, in a process-based ecosystem model for deciduous needle-leaf forests in northern China constrained by leaf area index (LAI) data. The LAI data used for parameter estimation were composed of the model output LAI (truth) and various designated systematic errors and random errors. We found that the estimation of Vcmax,25 could be severely biased with the composite LAI if no normalization was taken. Compared with the maximum normalization and the min-max normalization methods, the z-score normalization method was the most robust in reducing the impact of systematic errors on parameter estimation. The most probable values of estimated Vcmax,25 inversed by the z-score normalized LAI data were consistent with the true parameter values as in the model inputs, though the estimation uncertainty increased with the magnitude of random errors in the observations. 
We concluded that the z-score normalization method should be applied to the observed or measured data to improve model parameter estimation, especially when the potential errors in the constraining (observation) datasets are unknown.
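Why z-score normalization removes the systematic component can be shown directly: an additive offset and a positive multiplicative scale both cancel exactly under z-scoring. The LAI-like signal and the bias values below are made up for illustration:

```python
import numpy as np

def z_score(x):
    """z-score normalization: removes additive offsets and positive scale
    factors, so only the shape of the signal is compared."""
    return (x - x.mean()) / x.std()

# Illustrative seasonal LAI-like signal plus observations carrying both a
# multiplicative (x1.3) and an additive (+0.5) systematic error.
t = np.linspace(0.0, 1.0, 100)
model_lai = 1.0 + 2.0 * np.sin(2 * np.pi * t) ** 2
obs_lai = 1.3 * model_lai + 0.5

raw_mismatch = np.mean((obs_lai - model_lai) ** 2)                    # dominated by bias
norm_mismatch = np.mean((z_score(obs_lai) - z_score(model_lai)) ** 2) # ~0
```

A cost function built on the normalized series therefore constrains parameters through the signal's shape rather than its biased absolute level.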
Deconvolution Estimation in Measurement Error Models: The R Package decon
Wang, Xiao-Feng; Wang, Bin
2011-01-01
Data from many scientific areas often come with measurement error. Density or distribution function estimation from contaminated data and nonparametric regression with errors-in-variables are two important topics in measurement error models. In this paper, we present a new software package decon for R, which contains a collection of functions that use the deconvolution kernel methods to deal with the measurement error problems. The functions allow the errors to be either homoscedastic or heteroscedastic. To make the deconvolution estimators computationally more efficient in R, we adapt the fast Fourier transform algorithm for density estimation with error-free data to the deconvolution kernel estimation. We discuss the practical selection of the smoothing parameter in deconvolution methods and illustrate the use of the package through both simulated and real examples. PMID:21614139
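A minimal sketch of the deconvolution kernel idea behind the package's density estimators (written in Python rather than R): divide the empirical characteristic function of the contaminated data by the error characteristic function, damped by a kernel whose characteristic function has compact support. The kernel choice, bandwidth, and data are illustrative assumptions, and homoscedastic Gaussian error with known standard deviation is assumed:

```python
import numpy as np

# Deconvolution kernel density estimate of the density of X from Y = X + eps,
# with eps ~ N(0, sig_err^2) known; X ~ N(0,1) here, all values made up.
rng = np.random.default_rng(2)
n, sig_err = 2000, 0.5
y = rng.normal(0.0, 1.0, n) + rng.normal(0.0, sig_err, n)   # contaminated data

h = 0.4                                          # bandwidth, hand-picked
t = np.linspace(-1.0 / h, 1.0 / h, 401)          # frequency grid
ecf = np.exp(1j * np.outer(t, y)).mean(axis=1)   # empirical char. fn of Y
phi_kernel = (1.0 - (t * h) ** 2) ** 3           # kernel char. fn (compact support)
phi_error = np.exp(-0.5 * (sig_err * t) ** 2)    # Gaussian error char. fn

x_grid = np.linspace(-5.0, 5.0, 201)
weights = phi_kernel * ecf / phi_error           # divide out the error
dt = t[1] - t[0]
fhat = (np.exp(-1j * np.outer(x_grid, t)) * weights).sum(axis=1).real * dt / (2 * np.pi)
```

The compact support of `phi_kernel` is what keeps the division by the rapidly decaying Gaussian `phi_error` from amplifying high-frequency noise; the package's FFT-based implementation accelerates exactly this inversion.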
Mean-square error bounds for reduced-error linear state estimators
NASA Technical Reports Server (NTRS)
Baram, Y.; Kalit, G.
1987-01-01
The mean-square error of reduced-order linear state estimators for continuous-time linear systems is investigated. Lower and upper bounds on the minimal mean-square error are presented. The bounds are readily computable at each time-point and at steady state from the solutions to the Riccati and the Lyapunov equations. The usefulness of the error bounds for the analysis and design of reduced-order estimators is illustrated by a practical numerical example.
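In the spirit of the abstract, readily computable quantities of this kind can be sketched with standard solvers: the full-order Kalman error covariance (from the Riccati equation) lower-bounds the MSE of any reduced-order estimator, while the open-loop state covariance (from the Lyapunov equation) serves as a crude upper bound. The system matrices are illustrative, and this is not the paper's specific bound construction:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A = np.array([[-1.0, 0.2], [0.0, -0.5]])   # stable system matrix
C = np.array([[1.0, 0.0]])                 # only the first state is measured
Q = np.eye(2)                              # process noise intensity
R = np.array([[0.1]])                      # measurement noise intensity

# Riccati: steady-state full-order (Kalman) estimation error covariance.
P_kalman = solve_continuous_are(A.T, C.T, Q, R)
# Lyapunov: steady-state state covariance with no estimator at all,
# solving A P + P A' + Q = 0.
P_open = solve_continuous_lyapunov(A, -Q)

mse_lower = np.trace(P_kalman)   # no estimator of any order can do better
mse_upper = np.trace(P_open)     # the trivial estimate x_hat = 0 does this well
```

Any reduced-order estimator's steady-state MSE must land between these two traces, which is the kind of sandwich the abstract's computable bounds make precise.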
Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities
Auflick, Jack L.
1999-04-21
Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each of HRA's methods has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or the improper application of techniques, can produce invalid HEP estimates, and such erroneous estimation of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.
Fisher classifier and its probability of error estimation
NASA Technical Reports Server (NTRS)
Chittineni, C. B.
1979-01-01
Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
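A hedged sketch of the two-class Fisher classifier with a brute-force leave-one-out error estimate (the paper's computationally efficient closed-form leave-one-out expressions and its multiclass generalization are not reproduced); the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
X1 = rng.normal([0.0, 0.0], 1.0, (60, 2))   # class 1 samples
X2 = rng.normal([3.0, 2.0], 1.0, (60, 2))   # class 2 samples

def fisher_direction(A, B):
    """Fisher's direction w = Sw^-1 (mean_A - mean_B)."""
    Sw = np.cov(A.T) * (len(A) - 1) + np.cov(B.T) * (len(B) - 1)
    return np.linalg.solve(Sw, A.mean(0) - B.mean(0))

def loo_error(A, B):
    """Leave-one-out: refit without each point, classify it at the midpoint
    threshold on the projected means (a simple threshold choice)."""
    errs = 0
    for i in range(len(A)):
        Ai = np.delete(A, i, 0)
        w = fisher_direction(Ai, B)
        thr = 0.5 * w @ (Ai.mean(0) + B.mean(0))
        errs += (A[i] @ w) < thr            # class-1 point misclassified
    for i in range(len(B)):
        Bi = np.delete(B, i, 0)
        w = fisher_direction(A, Bi)
        thr = 0.5 * w @ (A.mean(0) + Bi.mean(0))
        errs += (B[i] @ w) >= thr           # class-2 point misclassified
    return errs / (len(A) + len(B))

p_err = loo_error(X1, X2)   # leave-one-out probability-of-error estimate
```

The point of the paper is that this O(n) refitting loop can be replaced by closed-form updates; the loop above only defines what is being estimated.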
Minor Planet Observations to Identify Reference System Systematic Errors
NASA Astrophysics Data System (ADS)
Hemenway, Paul D.; Duncombe, R. L.; Castelaz, M. W.
2011-04-01
In the 1930's Brouwer proposed using minor planets to correct the Fundamental System of celestial coordinates. Since then, many projects have used or proposed to use visual, photographic, photo detector, and space based observations to that end. From 1978 to 1990, a project was undertaken at the University of Texas utilizing the long focus and attendant advantageous plate scale (c. 7.37"/mm) of the 2.1m Otto Struve reflector's Cassegrain focus. The project followed precepts given in 1979. The program had several potential advantages over previous programs including high inclination orbits to cover half the celestial sphere, and, following Kristensen, the use of crossing points to remove entirely systematic star position errors from some observations. More than 1000 plates were obtained of 34 minor planets as part of this project. In July 2010 McDonald Observatory donated the plates to the Pisgah Astronomical Research Institute (PARI) in North Carolina. PARI is in the process of renovating the Space Telescope Science Institute GAMMA II modified PDS microdensitometer to scan the plates in the archives. We plan to scan the minor planet plates, reduce the plates to the densified ICRS using the UCAC4 positions (or the best available positions at the time of the reductions), and then determine the utility of attempting to find significant systematic corrections. Here we report the current status of various aspects of the project. Support from the National Science Foundation in the last millennium is gratefully acknowledged, as is help from Judit Ries and Wayne Green in packing and transporting the plates.
Systematic Review of the Balance Error Scoring System
Bell, David R.; Guskiewicz, Kevin M.; Clark, Micheal A.; Padua, Darin A.
2011-01-01
Context: The Balance Error Scoring System (BESS) is commonly used by researchers and clinicians to evaluate balance. A growing number of studies are using the BESS as an outcome measure beyond the scope of its original purpose. Objective: To provide an objective systematic review of the reliability and validity of the BESS. Data Sources: PubMed and CINAHL were searched using "Balance Error Scoring System" from January 1999 through December 2010. Study Selection: Selection was based on establishment of the reliability and validity of the BESS. Research articles were selected if they established reliability or validity (criterion related or construct) of the BESS, were written in English, and used the BESS as an outcome measure. Abstracts were not considered. Results: Reliability of the total BESS score and individual stances ranged from poor to moderate to good, depending on the type of reliability assessed. The BESS has criterion-related validity with force plate measures; more difficult stances have higher agreement than do easier ones. The BESS is valid to detect balance deficits where large differences exist (concussion or fatigue). It may not be valid when differences are more subtle. Conclusions: Overall, the BESS has moderate to good reliability to assess static balance. Low levels of reliability have been reported by some authors. The BESS correlates with other measures of balance using testing devices. The BESS can detect balance deficits in participants with concussion and fatigue. BESS scores increase with age and with ankle instability and external ankle bracing. BESS scores improve after training. PMID:23016020
Systematic Errors in GNSS Radio Occultation Data - Part 2
NASA Astrophysics Data System (ADS)
Foelsche, Ulrich; Danzer, Julia; Scherllin-Pirscher, Barbara; Schwärz, Marc
2014-05-01
The Global Navigation Satellite System (GNSS) Radio Occultation (RO) technique has the potential to deliver climate benchmark measurements of the upper troposphere and lower stratosphere (UTLS), since RO data can be traced, in principle, to the international standard for the second. Climatologies derived from RO data from different satellites indeed show an amazing consistency (better than 0.1 K). The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We have analyzed different potential error sources and present results on two of them. (1) If temperature is calculated from observed refractivity with the assumption that water vapor is zero, the product is called "dry temperature", which is commonly used to study the Earth's atmosphere, e.g., when analyzing temperature trends due to global warming. Dry temperature is a useful quantity, since it does not need additional background information in its retrieval. Concurrent trends in water vapor could, however, introduce spurious trends in dry temperature. We analyzed this effect, and identified the regions in the atmosphere where it is safe to take dry temperature as a proxy for physical temperature. We found that the heights where specified values of differences between dry and physical temperature are encountered increase by about 150 m per decade, with little difference among the 38 climate models under investigation. (2) All current RO retrievals use a "classic" set of (measured) constants relating atmospheric microwave refractivity to temperature, pressure, and water vapor partial pressure. With the steadily increasing quality of RO climatologies, errors in these constants are no longer negligible. 
We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor). This approach also allows us to compute sensitivities to changes in atmospheric composition, where we found that the effect of the CO2 increase is currently almost exactly balanced by the counteracting effect of the concurrent O2 decrease.
Parameter estimation and error analysis in environmental modeling and computation
NASA Technical Reports Server (NTRS)
Kalmaz, E. E.
1986-01-01
A method for the estimation of parameters and error analysis in the development of nonlinear modeling for environmental impact assessment studies is presented. The modular computer program can interactively fit different nonlinear models to the same set of data, dynamically changing the error structure associated with observed values. Parameter estimation techniques and sequential estimation algorithms employed in parameter identification and model selection are first discussed. Then, least-square parameter estimation procedures are formulated, utilizing differential or integrated equations, and are used to define a model for association of error with experimentally observed data.
Empirical State Error Covariance Matrix for Batch Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joe
2015-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
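The idea of an empirical (rather than theoretical) state error covariance can be illustrated by Monte Carlo. This sketch uses repeated synthetic batches with a known true state, a simplification for illustration; the paper's method extracts the empirical matrix as a side computation of a single batch solution:

```python
import numpy as np

# Repeated weighted batch least-squares solutions on noisy measurements of
# the same true state; the empirical covariance of the estimation errors is
# compared against the textbook covariance sigma^2 (H'H)^-1.
rng = np.random.default_rng(4)
H = np.column_stack([np.ones(20), np.linspace(0.0, 1.0, 20)])  # observation model
W = np.eye(20)                                                 # weight matrix
x_true = np.array([1.0, -0.5])

errors = []
for _ in range(500):
    y = H @ x_true + rng.normal(0.0, 0.2, 20)
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)          # weighted batch LSQ
    errors.append(x_hat - x_true)
errors = np.array(errors)

P_empirical = errors.T @ errors / len(errors)    # empirical error covariance
P_theory = 0.2 ** 2 * np.linalg.inv(H.T @ H)     # theoretical counterpart
```

When the assumed noise model is correct the two matrices agree; the appeal of the empirical matrix is that it keeps tracking reality when unmodeled error sources are present.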
Systematics for checking geometric errors in CNC lathes
NASA Astrophysics Data System (ADS)
Araújo, R. P.; Rolim, T. L.
2015-10-01
Non-idealities present in machine tools directly compromise both the geometry and the dimensions of machined parts, generating deviations from the design. Given the competitive scenario among different companies, it is necessary to know the geometric behavior of these machines in order to establish their processing capability, avoiding waste of time and materials as well as satisfying customer requirements. Although geometric tests are important and necessary for the correct use of the machine, thereby preventing future damage, most users do not apply such tests to their machines, for lack of knowledge or of proper motivation, basically because of two factors: the long testing time and its high cost. This work proposes a systematic procedure for checking straightness and perpendicularity errors in CNC lathes that demands little time and cost while retaining high metrological reliability, to be used on the factory floors of small and medium-sized businesses to ensure the quality of their products and make them competitive.
Systematic Error in UAV-derived Topographic Models: The Importance of Control
NASA Astrophysics Data System (ADS)
James, M. R.; Robson, S.; d'Oleire-Oltmanns, S.
2014-12-01
UAVs equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs) for a wide variety of geoscience applications. Image processing and DEM-generation is being facilitated by parallel increases in the use of software based on 'structure from motion' algorithms. However, recent work [1] has demonstrated that image networks from UAVs, for which camera pointing directions are generally near-parallel, are susceptible to producing systematic error in the resulting topographic surfaces (a vertical 'doming'). This issue primarily reflects error in the camera lens distortion model, which is dominated by the radial K1 term. Common data processing scenarios, in which self-calibration is used to refine the camera model within the bundle adjustment, can inherently result in such systematic error via poor K1 estimates. Incorporating oblique imagery into such data sets can mitigate error by enabling more accurate calculation of camera parameters [1]. Here, using a combination of simulated image networks and real imagery collected from a fixed wing UAV, we explore the additional roles of external ground control and the precision of image measurements. We illustrate similarities and differences between a variety of structure from motion software, and underscore the importance of well distributed and suitably accurate control for projects where a demonstrated high accuracy is required. [1] James & Robson (2014) Earth Surf. Proc. Landforms, 39, 1413-1420, doi: 10.1002/esp.3609
NASA Astrophysics Data System (ADS)
Song, Ningfang; Li, Jiao; Li, Huipeng; Luo, Xinkai
2015-10-01
Periodic systematic error caused by erroneous reference phase adjustments and instabilities of the interferometer has a great influence on the precision of micro-profile measurement using white-light phase-stepping interferometry. This paper presents a five-frame algorithm that is insensitive to periodic systematic error. The algorithm attempts to eliminate the periodic systematic error when calculating the phase. Both theoretical and experimental results show that the proposed algorithm has good immunity to periodic systematic error and is able to accurately recover the 3D profile of a sample.
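As a generic illustration of five-frame phase stepping, the classic Hariharan-style formula is shown below; it is chosen here for its well-known reduced sensitivity to phase-step calibration error, and it is not the paper's own algorithm, whose periodic-error suppression is not reproduced:

```python
import numpy as np

def five_frame_phase(frames):
    """Hariharan-style five-frame formula for intensities recorded at
    nominal phase steps -pi, -pi/2, 0, pi/2, pi."""
    i1, i2, i3, i4, i5 = frames
    return np.arctan2(2.0 * (i2 - i4), 2.0 * i3 - i1 - i5)

phi_true = 0.7                                     # phase to recover (rad)
steps = np.array([-np.pi, -np.pi / 2, 0.0, np.pi / 2, np.pi])
eps = 0.05                                         # 5% phase-step miscalibration
frames = 1.0 + 0.8 * np.cos(phi_true + steps * (1 + eps))
phi_est = five_frame_phase(frames)                 # error stays ~1e-3 rad
```

With a plain four-frame formula, a 5% step error would couple into the phase at first order; the five-frame combination cancels it to leading order, the same design goal the abstract pursues for periodic systematic error.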
A study of systematic errors in the PMD CamBoard nano
NASA Astrophysics Data System (ADS)
Chow, Jacky C. K.; Lichti, Derek D.
2013-04-01
Time-of-flight-based three-dimensional cameras are the state-of-the-art imaging modality for acquiring rapid 3D position information. Unlike any other technology on the market, they can deliver 2D images co-located with distance information at every pixel location, without any shadows. Recent technological advancements have begun miniaturizing such technology to be more suitable for laptops and eventually cellphones. This paper explores the systematic errors inherent to the new PMD CamBoard nano camera. As the world's most compact 3D time-of-flight camera it has applications in a wide domain, such as gesture control and facial recognition. To model the systematic errors, a one-step point-based and plane-based bundle adjustment method is used. It simultaneously estimates all systematic errors and unknown parameters by minimizing the residuals of image measurements, distance measurements, and amplitude measurements in a least-squares sense. The presented self-calibration method only requires a standard checkerboard target on a flat plane, making it a suitable candidate for on-site calibration. In addition, because distances are only constrained to lie on a plane, the raw pixel-by-pixel distance observations can be used. This makes it possible to increase the number of distance observations in the adjustment with ease. The results from this paper indicate that amplitude-dependent range errors are the dominant error source for the nano under low-scattering imaging configurations. Post user self-calibration, the RMSE of the range observations reduced by almost 50%, delivering range measurements at a precision of approximately 2.5 cm within a 70 cm interval.
Systematic vertical error in UAV-derived topographic models: Origins and solutions
NASA Astrophysics Data System (ADS)
James, Mike R.; Robson, Stuart
2014-05-01
Unmanned aerial vehicles (UAVs) equipped with consumer cameras are increasingly being used to produce high resolution digital elevation models (DEMs). However, although such DEMs may achieve centimetric detail, they can also display broad-scale systematic deformation (usually a vertical 'doming') that restricts their wider use. This effect can be particularly apparent in DEMs derived by structure-from-motion (SfM) processing, especially when control point data have not been incorporated in the bundle adjustment process. We illustrate that doming error results from a combination of inaccurate description of radial lens distortion and the use of imagery captured in near-parallel viewing directions. With such imagery, enabling camera self-calibration within the processing inherently leads to erroneous radial distortion values and associated DEM error. Using a simulation approach, we illustrate how existing understanding of systematic DEM error in stereo-pairs (from unaccounted radial distortion) up-scales in typical multiple-image blocks of UAV surveys. For image sets with dominantly parallel viewing directions, self-calibrating bundle adjustment (as normally used with images taken using consumer cameras) will not be able to derive radial lens distortion accurately, and will give associated systematic 'doming' DEM deformation. In the presence of image measurement noise (at levels characteristic of SfM software), and in the absence of control measurements, our simulations display domed deformation with amplitude of ~2 m over horizontal distances of ~100 m. We illustrate the sensitivity of this effect to variations in camera angle and flight height. Deformation will be reduced if suitable control points can be included within the bundle adjustment, but residual systematic vertical error may remain, accommodated by the estimated precision of the control measurements. 
Doming bias can be minimised by the inclusion of inclined images within the image set, for example, images collected during gently banked turns of a fixed-wing UAV or, if camera inclination can be altered, by just a few more oblique images with a rotor-based UAV. We provide practical flight plan solutions that, in the absence of control points, demonstrate a reduction in systematic DEM error by more than two orders of magnitude. DEM generation is subject to this effect whether a traditional photogrammetry or newer structure-from-motion (SfM) processing approach is used, but errors will be typically more pronounced in SfM-based DEMs, for which use of control measurements is often more limited. Although focussed on UAV surveying, our results are also relevant to ground-based image capture for SfM-based modelling.
Approaches to relativistic positioning around Earth and error estimations
NASA Astrophysics Data System (ADS)
Puchades, Neus; Sáez, Diego
2016-01-01
In the context of relativistic positioning, the coordinates of a given user may be calculated by using suitable information broadcast by a 4-tuple of satellites. Our 4-tuples belong to the Galileo constellation. Recently, we estimated the positioning errors due to uncertainties in the satellite world lines (U-errors). A distribution of U-errors was obtained, at various times, in a set of points covering a large region surrounding Earth. Here, the positioning errors associated with the simplifying assumption that photons move in Minkowski space-time (S-errors) are estimated and compared with the U-errors. Both errors have been calculated for the same points and times to make comparisons possible. For a certain realistic modeling of the world line uncertainties, the estimated S-errors have proved to be smaller than the U-errors, which shows that the approach based on the assumption that the Earth's gravitational field produces negligible effects on photons may be used in a large region surrounding Earth. The applicability of this approach (which simplifies numerical calculations) to positioning problems, and the usefulness of our S-error maps, are pointed out. A better approach, based on the assumption that photons move in the Schwarzschild space-time governed by an idealized Earth, is also analyzed. More accurate descriptions of photon propagation involving non-symmetric space-time structures are not necessary for ordinary positioning and spacecraft navigation around Earth.
Errors in estimation of the input signal for integrate-and-fire neuronal models
NASA Astrophysics Data System (ADS)
Bibbona, Enrico; Lansky, Petr; Sacerdote, Laura; Sirovich, Roberta
2008-07-01
Estimation of the input parameters of stochastic (leaky) integrate-and-fire neuronal models is studied. It is shown that the presence of a firing threshold introduces a systematic error into the estimation procedure. Analytical formulas for the bias are given for two models, the randomized random walk and the perfect integrator. For the third model considered, the leaky integrate-and-fire model, the study is performed using Monte Carlo simulated trajectories. The bias is compared with other errors appearing during the estimation, and it is documented that the effect of the bias has to be taken into account in experimental studies.
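The threshold-induced bias for the perfect integrator can be illustrated with a short Monte Carlo sketch. All parameter values below are hypothetical, and the naive estimator (threshold divided by first-passage time) is one simple choice; Jensen's inequality pushes its mean above the true drift, which is the systematic error in question.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, S, dt = 1.0, 1.0, 5.0, 1e-3   # true drift, noise, firing threshold, time step
n_trials, max_steps = 500, 20000

est = np.empty(n_trials)
for i in range(n_trials):
    # Perfect integrator: accumulate drift + diffusion increments until threshold S
    steps = mu * dt + sigma * np.sqrt(dt) * rng.normal(size=max_steps)
    x = np.cumsum(steps)
    k = np.argmax(x >= S)            # index of the first threshold crossing
    est[i] = S / ((k + 1) * dt)      # naive drift estimate: threshold / first-passage time

# Systematically positive: E[S/T] > S/E[T] = mu by Jensen's inequality
bias = est.mean() - mu
```

With these values the first-passage time has mean S/mu = 5, so the bias is on the order of 20% of the true drift, dwarfing the Monte Carlo scatter of the mean.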
Estimates of Random Error in Satellite Rainfall Averages
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.
2003-01-01
Satellite rain estimates are most accurate when obtained with microwave instruments on low earth-orbiting satellites. Estimation of daily or monthly total areal rainfall, typically of interest to hydrologists and climate researchers, is made difficult, however, by the relatively poor coverage generally available from such satellites. Intermittent coverage by the satellites leads to random "sampling error" in the satellite products. The inexact information about hydrometeors inferred from microwave data also leads to random "retrieval errors" in the rain estimates. In this talk we will review approaches to quantitative estimation of the sampling error in area/time averages of satellite rain retrievals using ground-based observations, and methods of estimating rms random error, both sampling and retrieval, in averages using satellite measurements themselves.
Systematic biases in human heading estimation.
Cuturi, Luigi F; MacNeilage, Paul R
2013-01-01
Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion. PMID:23457631
Application of Bayesian Systematic Error Correction to Kepler Photometry
NASA Astrophysics Data System (ADS)
Van Cleve, Jeffrey E.; Jenkins, J. M.; Twicken, J. D.; Smith, J. C.; Fanelli, M. N.
2011-01-01
In a companion talk (Jenkins et al.), we present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data, in which a subset of intrinsically quiet and highly correlated stars is used to establish the range of "reasonable" robust fit parameters, and hence mitigate the loss of astrophysical signal and noise injection on transit time scales (<3d), which afflict Least Squares (LS) fitting. In this poster, we illustrate the concept in detail by applying MAP to publicly available Kepler data, and give an overview of its application to all Kepler data collected through June 2010. We define the correlation function between normalized, mean-removed light curves and select a subset of highly correlated stars. This ensemble of light curves can then be combined with ancillary engineering data and image motion polynomials to form a design matrix from which the principal components are extracted by reduced-rank SVD. MAP is then represented in the resulting orthonormal basis, and applied to the set of all light curves. We show that the correlation matrix after treatment is diagonal, and present diagnostics such as correlation coefficient histograms, singular value spectra, and principal component plots. We then show the benefits of MAP applied to variable stars with RR Lyrae, harmonic, chaotic, and eclipsing binary waveforms, and examine the impact of MAP on transit waveforms and detectability. After high-pass filtering the MAP output, we show that MAP does not increase noise on transit time scales, compared to LS. We conclude with a discussion of current work selecting input vectors for the design matrix, representing and numerically solving MAP for non-Gaussian probability distribution functions (PDFs), and suppressing high-frequency noise injection with Lagrange multipliers. Funding for this mission is provided by NASA, Science Mission Directorate.
NASA Astrophysics Data System (ADS)
Gourrion, J.; Guimbard, S.; Sabia, R.; Portabella, M.; Gonzalez, V.; Turiel, A.; Ballabrera, J.; Gabarro, C.; Perez, F.; Martinez, J.
2012-04-01
The Microwave Imaging Radiometer using Aperture Synthesis (MIRAS) instrument onboard the Soil Moisture and Ocean Salinity (SMOS) mission was launched on November 2nd, 2009 with the aim of providing, over the oceans, synoptic sea surface salinity (SSS) measurements with spatial and temporal coverage adequate for large-scale oceanographic studies. For each single satellite overpass, SSS is retrieved after collecting, at fixed ground locations, a series of brightness temperatures from successive scenes corresponding to various geometrical and polarization conditions. SSS is obtained by inversion, minimizing the difference between reconstructed and modeled brightness temperatures. To meet the challenging mission requirements, retrieved SSS must achieve an accuracy of 0.1 psu after averaging over a 10- or 30-day period and 2°x2° or 1°x1° spatial boxes, respectively. It is expected that, at such scales, the high radiometric noise can be reduced to a level such that remaining errors and inconsistencies in the retrieved salinity fields can essentially be related to (1) systematic brightness temperature errors in the antenna reference frame, (2) systematic errors in the Geophysical Model Function (GMF, used to model the observations and retrieve salinity) for specific environmental conditions and/or particular auxiliary parameter values, and (3) errors in the auxiliary datasets used as input to the GMF. The present communication primarily aims at addressing point 1 above, and possibly point 2, for the whole polarimetric information, i.e. from both co-polar and cross-polar measurements. Several factors may potentially produce systematic errors in the antenna reference frame: the unavoidable fact that the antennas are not perfectly identical, and the imperfect characterization of the instrument response, e.g.
antenna patterns, the accounting for receiver temperatures in the reconstruction, calibration using flat sky scenes, and the implementation of ripple reduction algorithms at sharp boundaries such as the Sky-Earth boundary. Data acquired over the ocean rather than over land are preferred for characterizing such errors because the variability of the emissivity sensed over the oceanic domain is an order of magnitude smaller than over land. Nevertheless, characterizing such errors over the ocean is not a trivial task. Although the natural variability is small, it is still larger than the errors to be characterized, and the characterization strategy must account for it; otherwise the estimated patterns will vary significantly with the selected dataset. The communication will present results on a systematic error characterization methodology allowing stable error pattern estimates. Particular focus will be given to the critical data selection strategy and the analysis of the X- and Y-pol patterns obtained over a wide range of SMOS subdatasets. The impact of some image reconstruction options will be evaluated. It will be shown how the methodology is also an interesting tool to diagnose specific error sources. The criticality of an accurate description of Faraday rotation effects will be demonstrated, and the latest results on the possibility of inferring such information from the full Stokes vector will be presented.
Nonparametric Item Response Curve Estimation with Correction for Measurement Error
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric or kernel regression estimation of item response curves (IRCs) is often used in item analysis in testing programs. These estimates are biased when the observed scores are used as the regressor because the observed scores are contaminated by measurement error. Accuracy of this estimation is a concern theoretically and operationally.…
Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada
NASA Astrophysics Data System (ADS)
Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.
2015-08-01
Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that the estimation errors are dominated by the transport model error, and that the different error sources can partially cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ from the target fluxes more than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target.
The systematic evaluation of the different components of the inversion model can help in understanding the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for the annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.
Using Doppler radar images to estimate aircraft navigational heading error
Doerry, Armin W.; Jordan, Jay D.; Kim, Theodore J.
2012-07-03
A yaw angle error of a motion measurement system carried on an aircraft for navigation is estimated from Doppler radar images captured using the aircraft. At least two radar pulses aimed at respectively different physical locations in a targeted area are transmitted from a radar antenna carried on the aircraft. At least two Doppler radar images that respectively correspond to the at least two transmitted radar pulses are produced. These images are used to produce an estimate of the yaw angle error.
Stress Recovery and Error Estimation for 3-D Shell Structures
NASA Technical Reports Server (NTRS)
Riggs, H. R.
2000-01-01
The discontinuous stress fields obtained from finite element analyses are in general lower-order accurate than the corresponding displacement fields. Much effort has focussed on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structure (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).
Multiple linear regression estimators with skew normal errors
NASA Astrophysics Data System (ADS)
Alhamide, A. A.; Ibrahim, K.; Alodat, M. T.
2015-09-01
The skew normal distribution is suitable for the analysis of skewed data. The purpose of this paper is to study the estimation of the regression parameters under extended multivariate skew normal errors. The estimators for the regression parameters based on the maximum likelihood method are derived. A simulation study is carried out to investigate the performance of the derived estimators, and the standard errors associated with the respective parameter estimates are found to be quite small.
PERIOD ERROR ESTIMATION FOR THE KEPLER ECLIPSING BINARY CATALOG
Mighell, Kenneth J.; Plavchan, Peter
2013-06-15
The Kepler Eclipsing Binary Catalog (KEBC) describes 2165 eclipsing binaries identified in the 115 deg² Kepler Field based on observations from Kepler quarters Q0, Q1, and Q2. The periods in the KEBC are given in units of days out to six decimal places, but no period errors are provided. We present the PEC (Period Error Calculator) algorithm, which can be used to estimate the period errors of strictly periodic variables observed by the Kepler Mission. The PEC algorithm is based on propagation of error theory and assumes that every light curve peak/minimum in a long time-series observation can be unambiguously identified. The PEC algorithm can be efficiently programmed using just a few lines of C computer language code. The PEC algorithm was used to develop a simple model that provides period error estimates for eclipsing binaries in the KEBC with periods less than 62.5 days: log σ_P ≈ -5.8908 + 1.4425(1 + log P), where P is the period of an eclipsing binary in the KEBC in units of days. KEBC systems with periods ≥62.5 days have KEBC period errors of ≈0.0144 days. Periods and period errors of seven eclipsing binary systems in the KEBC were measured using the NASA Exoplanet Archive Periodogram Service and compared to period errors estimated using the PEC algorithm.
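The quoted period-error model is simple enough to sketch directly. The function below is a minimal reading of the abstract's fit, assuming base-10 logarithms and days throughout, with the constant error quoted for periods of 62.5 days and longer:

```python
import math

def kebc_period_error(period_days: float) -> float:
    """Approximate KEBC period error (days) from the abstract's fitted model."""
    if period_days >= 62.5:
        return 0.0144  # constant error quoted for long-period systems
    # log sigma_P ~ -5.8908 + 1.4425 * (1 + log P), base-10 logs assumed
    log_sigma = -5.8908 + 1.4425 * (1.0 + math.log10(period_days))
    return 10.0 ** log_sigma
```

At P = 62.5 days the fitted branch gives roughly 0.014 days, so the two branches of the model join nearly smoothly.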
NASA Astrophysics Data System (ADS)
Schumacher, Maike; Kusche, Jürgen; Döll, Petra
2016-02-01
Recently, ensemble Kalman filters (EnKF) have found increasing application for merging hydrological models with total water storage anomaly (TWSA) fields from the Gravity Recovery And Climate Experiment (GRACE) satellite mission. Previous studies have disregarded the effect of spatially correlated errors of GRACE TWSA products in their investigations. Here, for the first time, we systematically assess the impact of the GRACE error correlation structure on EnKF data assimilation into a hydrological model, i.e. on estimated compartmental and total water storages and model parameter values. Our investigations include (1) assimilating gridded GRACE-derived TWSA into the WaterGAP Global Hydrology Model and, simultaneously, calibrating its parameters; (2) introducing GRACE observations on different spatial scales; (3) modelling observation errors as either spatially white or correlated in the assimilation procedure, and (4) replacing the standard EnKF algorithm by the square root analysis scheme or, alternatively, the singular evolutive interpolated Kalman filter. Results of a synthetic experiment designed for the Mississippi River Basin indicate that the hydrological parameters are sensitive to TWSA assimilation if spatial resolution of the observation data is sufficiently high. We find a significant influence of spatial error correlation on the adjusted water states and model parameters for all implemented filter variants, in particular for subbasins with a large discrepancy between observed and initially simulated TWSA and for north-south elongated sub-basins. Considering these correlated errors, however, does not generally improve results: while some metrics indicate that it is helpful to consider the full GRACE error covariance matrix, it appears to have an adverse effect on others. 
We conclude that considering the characteristics of GRACE error correlation is at least as important as the selection of the spatial discretisation of TWSA observations, while the choice of the filter method might rather be based on the computational simplicity and efficiency.
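The contrast between white and correlated observation-error treatments can be sketched with a toy stochastic ensemble Kalman filter update. Everything below is a hypothetical stand-in, not the WaterGAP/GRACE setup: the observation operator is the identity and the correlated covariance is an assumed exponential-decay structure in place of a real GRACE error covariance.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ens, n_obs = 100, 16
truth = np.ones(n_obs)

# Hypothetical observation-error covariances: spatially correlated vs. white
idx = np.arange(n_obs)
R_corr = 0.1 * np.exp(-np.abs(idx[:, None] - idx[None, :]) / 3.0)
R_white = 0.1 * np.eye(n_obs)

ens = truth + rng.normal(0.0, 0.5, (n_ens, n_obs))            # prior ensemble (H = I)
y = truth + rng.multivariate_normal(np.zeros(n_obs), R_corr)  # obs drawn with correlated errors

def enkf_update(ens, y, R):
    """Stochastic (perturbed-observations) EnKF analysis step."""
    X = ens - ens.mean(axis=0)
    P = X.T @ X / (n_ens - 1)                 # ensemble covariance
    K = P @ np.linalg.inv(P + R)              # Kalman gain
    pert = rng.multivariate_normal(np.zeros(len(y)), R, n_ens)
    return ens + (y + pert - ens) @ K.T

post_corr = enkf_update(ens, y, R_corr)    # errors treated as correlated
post_white = enkf_update(ens, y, R_white)  # errors treated as white
```

The two analyses weight the same innovations differently; which one is closer to the truth depends on the actual error structure, mirroring the mixed metrics reported in the abstract.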
An Empirical State Error Covariance Matrix for Batch State Estimation
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. 
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
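One simple way to read the residual-based idea is as scaling the theoretical covariance by the average weighted residual variance, so that mismodeled observation errors inflate the reported uncertainty. The toy linear problem below (all values hypothetical, and not the paper's triangulation example) shows the effect when the actual noise is three times the assumed noise:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 2
A = np.column_stack([np.ones(m), np.linspace(0.0, 1.0, m)])  # design matrix
x_true = np.array([1.0, -2.0])
sigma_assumed = 0.1   # noise level assumed when forming the weights
sigma_actual = 0.3    # actual noise level (mismodeled case)
y = A @ x_true + rng.normal(0.0, sigma_actual, m)

W = np.eye(m) / sigma_assumed**2
N = A.T @ W @ A                           # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)   # weighted least squares estimate

P_theory = np.linalg.inv(N)               # maps only the *assumed* errors into state space
r = y - A @ x_hat
# Average weighted residual variance; expectation ~ (sigma_actual/sigma_assumed)**2 = 9
chi2_red = (r @ W @ r) / (m - n)
P_emp = chi2_red * P_theory               # empirical covariance reflects the actual errors
```

Here P_theory understates the uncertainty by roughly an order of magnitude, while P_emp absorbs the mismodeling through the residuals.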
An analysis of the least-squares problem for the DSN systematic pointing error model
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1991-01-01
A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed along with the solution methods used to determine the model's parameters. Specifically studied are the rank degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
NASA Astrophysics Data System (ADS)
Jung, Jaehoon; Kim, Sangpil; Hong, Sungchul; Kim, Kyoungmin; Kim, Eunsook; Im, Jungho; Heo, Joon
2013-07-01
This paper suggested simulation approaches for quantifying and reducing the effects of National Forest Inventory (NFI) plot location error on aboveground forest biomass and carbon stock estimation using the k-Nearest Neighbor (kNN) algorithm. Additionally, the effects of plot location error in pre-GPS and GPS NFI plots were compared. Two South Korean cities, Sejong and Daejeon, were chosen as the study area, for which four Landsat TM images were collected together with two NFI datasets established in the pre-GPS and GPS eras. The effects of plot location error were investigated in two ways: systematic error simulation and random error simulation. Systematic error simulation was conducted to determine the effect of plot location error due to mis-registration. All of the NFI plots were successively shifted relative to the satellite image through a full 360° of directions, and the systematic error patterns were analyzed on the basis of the changes in the Root Mean Square Error (RMSE) of the kNN estimation. In the random error simulation, the inherent random location errors in the NFI plots were quantified by Monte Carlo simulation. After removal of both the estimated systematic and random location errors from the NFI plots, the RMSE% values were reduced by 11.7% and 17.7% for the two pre-GPS-era datasets, and by 5.5% and 8.0% for the two GPS-era datasets. The experimental results showed that the pre-GPS NFI plots were more subject to plot location error than were the GPS NFI plots. This study's findings demonstrate a potential remedy for reducing NFI plot location errors, which may improve the accuracy of carbon stock estimation in a practical manner, particularly in the case of pre-GPS NFI data.
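The systematic-error scan can be sketched in a few lines. This is a deliberately simplified stand-in for the paper's method: a linear lookup replaces the kNN estimator, the predictor raster is synthetic noise, and the 360° scan is reduced to the eight unit-pixel shift directions.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(size=(200, 200))        # synthetic predictor raster
n_plots = 100
rows = rng.integers(10, 190, n_plots)      # "true" plot locations (pixels)
cols = rng.integers(10, 190, n_plots)
# Hypothetical field biomass linearly related to the raster, plus noise
biomass = 50.0 + 20.0 * image[rows, cols] + rng.normal(0.0, 2.0, n_plots)

def rmse_at_shift(dr, dc):
    """RMSE when every plot is shifted systematically by (dr, dc) pixels."""
    pred = 50.0 + 20.0 * image[rows + dr, cols + dc]
    return float(np.sqrt(np.mean((pred - biomass) ** 2)))

baseline = rmse_at_shift(0, 0)
shifts = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
shifted = {s: rmse_at_shift(*s) for s in shifts}
# A registration offset would show up as an RMSE minimum at the compensating shift.
```

In this toy, the RMSE minimum sits at the zero shift because the plots are correctly registered; with a real mis-registered dataset, the minimum would move to the compensating offset, which is what the simulation exploits.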
Error decomposition and estimation of inherent optical properties.
Salama, Mhd Suhyb; Stein, Alfred
2009-09-10
We describe a methodology to quantify and separate the errors of inherent optical properties (IOPs) derived from ocean-color model inversion. Their total error is decomposed into three different sources, namely, model approximations and inversion, sensor noise, and atmospheric correction. Prior information on plausible ranges of observation, sensor noise, and inversion goodness-of-fit is employed to derive the posterior probability distribution of the IOPs. The relative contribution of each error component to the total error budget of the IOPs, all being of stochastic nature, is then quantified. The method is validated with the International Ocean Colour Coordinating Group (IOCCG) data set and the NASA bio-Optical Marine Algorithm Data set (NOMAD). The derived errors are close to the known values, with correlation coefficients of 60-90% and 67-90% for the IOCCG and NOMAD data sets, respectively. Model-induced errors inherent to the derived IOPs are between 10% and 57% of the total error, whereas atmospheric-induced errors are in general above 43% and up to 90% for both data sets. The proposed method is applied to synthesized and in situ measured populations of IOPs. The mean relative errors of the derived values are between 2% and 20%. A specific error table for the Medium Resolution Imaging Spectrometer (MERIS) sensor is constructed. It serves as a benchmark to evaluate the performance of the atmospheric correction method and to compute atmospheric-induced errors. Our method performs better than previously suggested methods and is more appropriate for estimating the actual errors of ocean-color derived products. Moreover, it is generic and can be applied to quantify the error of any derived biogeophysical parameter regardless of the derivation method used. PMID:19745859
Adaptive Error Estimation in Linearized Ocean General Circulation Models
NASA Technical Reports Server (NTRS)
Chechelnitsky, Michael Y.
1999-01-01
Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by applying innovation-based methods of adaptive error estimation, with low-dimensional models, to TOPEX/POSEIDON (T/P) sea level anomaly data in the North Pacific (5-60 deg N, 132-252 deg E), acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced-state linear model that describes large-scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al. (1999) with the method of Fu et al. (1993). Most of the model error is explained by the barotropic mode. However, we find that the impact of the change in the error statistics on the data assimilation estimates is very small.
This is explained by the large representation error, i.e. the dominance of the mesoscale eddies in the T/P signal, which are not resolved by the coarse-resolution GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exists a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
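The covariance matching idea can be sketched in a few lines: the sample covariance of the model-data residuals is matched, by least squares, to a linear combination of known structural matrices for model error and measurement noise. All matrices and coefficients below are synthetic placeholders, not the oceanographic covariances of the study.

```python
import numpy as np

rng = np.random.default_rng(4)
p = 6
B = rng.normal(size=(p, p))
S_model = B @ B.T / p    # assumed structure of the model-error covariance
S_noise = np.eye(p)      # assumed structure of the measurement-noise covariance
a_true, b_true = 2.0, 0.5
C_true = a_true * S_model + b_true * S_noise

# "Observed" model-data residuals and their sample covariance
resid = rng.multivariate_normal(np.zeros(p), C_true, size=5000)
C_hat = np.cov(resid.T)

# Match vec(C_hat) ~ a*vec(S_model) + b*vec(S_noise) in the least squares sense
G = np.column_stack([S_model.ravel(), S_noise.ravel()])
(a_est, b_est), *_ = np.linalg.lstsq(G, C_hat.ravel(), rcond=None)
```

With enough residual samples the two scalar error parameters are recovered well; as the abstract notes, the realistic situation is far more underdetermined, which is why only a small number of such parameters can be estimated in practice.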
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Einaudi, Franco (Technical Monitor)
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, with about 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, with about 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and to obtain a continuous time series, we first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and estimate this error. We find that correcting for it decreases the global temperature trend by about 0.07 K/decade. In addition there are systematic time-dependent errors present in the data that are introduced by the drift in the satellite orbital geometry: one arises from the diurnal cycle in temperature, and another is a drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The diurnal-cycle error can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The calibration-drift error is obvious only in the difference between the pm and am observations of Ch 2 over the ocean. We have followed two different paths to assess the impact of the errors on the global temperature trend.
In one path the entire error is placed in the am data while in the other it is placed in the pm data. Global temperature trend is increased or decreased by about 0.03 K/decade depending upon this placement. Taking into account all random errors and systematic errors our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0. 11 (+-) 0.04 K/decade during 1980 to 1998.
Global Warming Estimation from MSU: Correction for Drift and Calibration Errors
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.
2000-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz), made in the nadir direction from sequential, sun-synchronous, polar-orbiting NOAA morning satellites (NOAA 6, 10 and 12, with approximately 7am/7pm orbital geometry) and afternoon satellites (NOAA 7, 9, 11 and 14, with approximately 2am/2pm orbital geometry), are analyzed in this study to derive the global temperature trend from 1980 to 1998. In order to remove the discontinuities between the data of successive satellites and obtain a continuous time series, we have first used the shortest possible time record of each satellite. In this way we get a preliminary estimate of the global temperature trend of 0.21 K/decade. However, this estimate is affected by systematic time-dependent errors. One such error is the instrument calibration error eo. This error can be inferred whenever there are overlapping measurements made by two satellites over an extended period of time. From the available successive satellite data we have taken the longest possible time record of each satellite to form the time series during the period 1980 to 1998 and to estimate this error eo. We find eo can decrease the global temperature trend by approximately 0.07 K/decade. In addition, there are systematic time-dependent errors ed and ec present in the data that are introduced by the drift in the satellite orbital geometry: ed arises from the diurnal cycle in temperature, and ec is the drift-related change in the calibration of the MSU. In order to analyze the nature of these drift-related errors, the multi-satellite Ch 2 data set is partitioned into am and pm subsets to create two independent time series. The error ed can be assessed in the am and pm data of Ch 2 on land and can be eliminated. Observations made in the MSU Ch 1 (50.3 GHz) support this approach. The error ec is evident only in the difference between the pm and am observations of Ch 2 over the ocean.
We have followed two different paths to assess the impact of the error ec on the global temperature trend. In one path the entire error ec is placed in the am data, while in the other it is placed in the pm data. The global temperature trend is increased or decreased by approximately 0.03 K/decade depending upon this placement. Taking into account all random and systematic errors, our analysis of MSU observations leads us to conclude that a conservative estimate of the global warming is 0.11 ± 0.04 K/decade during 1980 to 1998.
MONTE CARLO ERROR ESTIMATION APPLIED TO NONDESTRUCTIVE ASSAY METHODS
Estep, R.; et al.
2000-06-01
Monte Carlo randomization of nuclear counting data into N replicate sets is the basis of a simple and effective method for estimating error propagation through complex analysis algorithms such as those using neural networks or tomographic image reconstructions. The error distributions of properly simulated replicate data sets mimic those of actual replicate measurements and can be used to estimate the std. dev. for an assay along with other statistical quantities. We have used this technique to estimate the standard deviation in radionuclide masses determined using the tomographic gamma scanner (TGS) and combined thermal/epithermal neutron (CTEN) methods. The effectiveness of this approach is demonstrated by a comparison of our Monte Carlo error estimates with the error distributions in actual replicate measurements and simulations of measurements. We found that the std. dev. estimated this way quickly converges to an accurate value on average and has a predictable error distribution similar to N actual repeat measurements. The main drawback of the Monte Carlo method is that N additional analyses of the data are required, which may be prohibitively time consuming with slow analysis algorithms.
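The replicate-randomization idea described above can be sketched in a few lines; this is an illustrative toy assuming Poisson counting statistics and a made-up one-line "assay" function, not the TGS or CTEN analysis itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def assay(counts, efficiencies):
    # Stand-in analysis: "mass" as an efficiency-corrected count sum.
    return float(np.sum(counts / efficiencies))

def monte_carlo_std(counts, efficiencies, n_replicates=100):
    """Randomize the measured counts into N replicate sets (Poisson
    statistics) and propagate each through the full analysis."""
    results = [assay(rng.poisson(counts), efficiencies)
               for _ in range(n_replicates)]
    return float(np.std(results, ddof=1))

counts = np.array([1200.0, 800.0, 430.0])   # measured counts per channel
eff = np.array([0.12, 0.10, 0.08])          # assumed detection efficiencies
print(assay(counts, eff), monte_carlo_std(counts, eff))
```

The same replicate machinery works unchanged when `assay` is replaced by an arbitrarily complex algorithm (neural network, tomographic reconstruction), which is the point of the method; the cost is N extra analyses.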
NASA Technical Reports Server (NTRS)
Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.
2003-01-01
We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.
Verification of unfold error estimates in the unfold operator code
NASA Astrophysics Data System (ADS)
Fehl, D. L.; Biggs, F.
1997-01-01
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
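The verification procedure described above (a built-in linear error propagation compared against Monte Carlo unfolding of randomized data sets) can be mimicked on a toy problem; the response matrix and spectrum below are hypothetical stand-ins for the UFO test case, and the unfold here is plain weighted least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical response matrix (channels x spectral bins) and spectrum.
R = np.array([[1.0, 0.5, 0.1],
              [0.3, 1.0, 0.4],
              [0.1, 0.6, 1.0]])
s_true = np.array([10.0, 5.0, 2.0])
d0 = R @ s_true
sigma = 0.05 * d0                        # 5% (one standard deviation)

# "Built-in" style estimate: linear error propagation through weighted LS.
W = np.diag(1.0 / sigma ** 2)
err_builtin = np.sqrt(np.diag(np.linalg.inv(R.T @ W @ R)))

# Monte Carlo: unfold 100 data sets randomized with Gaussian deviates.
sqrtW = np.sqrt(W)
unfolds = []
for _ in range(100):
    d = d0 + rng.normal(0.0, sigma)
    s, *_ = np.linalg.lstsq(sqrtW @ R, sqrtW @ d, rcond=None)
    unfolds.append(s)
err_mc = np.std(unfolds, axis=0, ddof=1)
print(err_builtin)
print(err_mc)   # agrees within the sampling resolution of 100 sets
```

With only 100 replicate sets the Monte Carlo standard deviations scatter by several percent about the propagated values, mirroring the "statistical resolution of this relatively small sample size" noted in the abstract.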
NASA Astrophysics Data System (ADS)
Gutierrez, Mauricio; Brown, Kenneth
2015-03-01
Classical simulations of noisy stabilizer circuits are often used to estimate the threshold of a quantum error-correcting code (QECC). It is common to model the noise as a depolarizing Pauli channel. However, it is not clear how sensitive a code's threshold is to the noise model, and whether or not a depolarizing channel is a good approximation for realistic errors. We have shown that, at the physical single-qubit level, efficient and more accurate approximations can be obtained. We now examine the feasibility of employing these approximations to obtain better estimates of a QECC's threshold. We calculate the level-1 pseudo-threshold for the Steane [[7,1,3]] code.
Analysis of possible systematic errors in the Oslo method
Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.
2011-03-15
In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.
Application of variance components estimation to calibrate geoid error models.
Guo, Dong-Mei; Xu, Hou-Ze
2015-01-01
The method of using Global Positioning System-leveling data to obtain orthometric heights has been well studied. A simple formulation for the weighted least squares problem was presented in earlier work. This formulation allows one to directly employ errors-in-variables models that completely describe the covariance matrices of the observables. However, the important question of what accuracy level can be achieved has not yet been satisfactorily answered by this traditional formulation. One of the main reasons for this is the incorrectness of the stochastic models used in the adjustment, which in turn leaves room for improving the stochastic models of the measurement noises. Therefore, determining the stochastic model of the observables in a combined adjustment with heterogeneous height types is the main focus of this paper. First, the well-known method of variance component estimation is employed to calibrate the errors of heterogeneous height data in a combined least squares adjustment of ellipsoidal, orthometric and gravimetric geoid heights. Specifically, iterative algorithms of minimum norm quadratic unbiased estimation are used to estimate the variance components for each type of heterogeneous observation. Second, two different statistical models are presented to illustrate the theory. The first method directly uses the errors-in-variables covariance matrices as a priori information, and the second method analyzes the biases of the variance components and then proposes bias-corrected variance component estimators. Several numerical test results show the capability and effectiveness of the variance component estimation procedure in the combined adjustment for calibrating geoid error models. PMID:26306296
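For illustration, a Helmert-style variance component iteration can be run on a deliberately simple one-parameter model: two observation groups measuring the same quantity with different, unknown precisions. The data and grouping below are hypothetical, and the MINQUE algorithms used in the paper are more general:

```python
import numpy as np

rng = np.random.default_rng(2)

# Two groups observing the same quantity with different (unknown) precisions.
x_true = 100.0
groups = [rng.normal(x_true, 2.0, size=50),    # noisier technique
          rng.normal(x_true, 0.5, size=50)]    # more precise technique

var = [1.0, 1.0]                               # initial variance components
for _ in range(50):                            # Helmert-style iteration
    w = [1.0 / v for v in var]
    W = sum(wi * len(g) for wi, g in zip(w, groups))
    x_hat = sum(wi * g.sum() for wi, g in zip(w, groups)) / W
    var = []
    for wi, g in zip(w, groups):
        v_res = g - x_hat                      # group residuals
        r = len(g) - len(g) * wi / W           # group redundancy number
        var.append(float((v_res ** 2).sum() / r))

print(x_hat, var)   # variance components settle near the true 4.0 and 0.25
```

The iteration re-weights each group by its estimated variance component until the residual quadratic forms are consistent with the assigned weights, which is the essence of calibrating the stochastic model.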
Estimating errors in IceBridge freeboard at ICESat Scales
NASA Astrophysics Data System (ADS)
Prado, D. W.; Xie, H.; Ackley, S. F.; Wang, X.
2014-12-01
The Airborne Topographic Mapping (ATM) system flown on NASA Operation IceBridge allows for estimation of sea ice thickness from surface elevations in the Bellingshausen-Amundsen Seas. The estimation of total freeboard depends on the accuracy of local sea level estimates and on the footprint size. We used the high density of ATM L1B (~1 m footprint) observations at varying spatial resolutions to assess the errors associated with averaging over larger footprints and with the deviation of local sea level from the WGS-84 geoid over longer segment lengths. The ATM data sets allow for a comparison between IceBridge (2009-2014) and ICESat (2003-2009) derived freeboards by using the ATM L2 (~70 m footprint) data, similar to the ICESat footprint. While the average freeboard estimates for the L2 data in 2009 underestimate total freeboard by only 5 cm at 5 km segment lengths, the error increases to 49 cm at the 50 km segment lengths typical of ICESat analyses. Since the error in freeboard estimation greatly increases at the segment lengths used for ICESat analyses, some caution may be required in comparing ICESat thickness estimates with later IceBridge estimates over the same region.
Error estimation for the linearized auto-localization algorithm.
Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando
2012-01-01
The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
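The first-order Taylor propagation underlying such a method can be sketched generically with a numerical Jacobian; the distance function and the 1 cm coordinate uncertainties below are hypothetical illustrations, not the LAL equations themselves:

```python
import numpy as np

def numerical_jacobian(f, x, h=1e-6):
    """Forward-difference Jacobian of f at x."""
    fx = np.atleast_1d(f(x))
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.atleast_1d(f(xp)) - fx) / h
    return J

def propagate_cov(f, x, cov_x):
    """First-order (Taylor) propagation: Cov_f ~= J Cov_x J^T."""
    J = numerical_jacobian(f, x)
    return J @ cov_x @ J.T

def beacon_distance(p):
    # Distance between two planar beacons at (p[0], p[1]) and (p[2], p[3]).
    return np.array([np.hypot(p[2] - p[0], p[3] - p[1])])

p = np.array([0.0, 0.0, 3.0, 4.0])          # metres
cov_p = np.diag([0.01 ** 2] * 4)            # independent 1 cm errors
sd = np.sqrt(propagate_cov(beacon_distance, p, cov_p)[0, 0])
print(sd)   # about 0.014 m for this geometry
```

Because the propagation is only first order, its reliability degrades when the function is strongly nonlinear over the error scale, which is why the paper attaches a confidence parameter to the estimate.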
Galaxy Cluster Shapes and Systematic Errors in H_0 as Determined by the Sunyaev-Zel'dovich Effect
NASA Technical Reports Server (NTRS)
Sulkanen, Martin E.; Patel, Sandeep K.
1998-01-01
Imaging of the Sunyaev-Zeldovich (SZ) effect in galaxy clusters combined with cluster plasma x-ray diagnostics promises to measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, H_0, because the true shape of the cluster is not known. In this paper we present a study of the systematic errors in the value of H_0, as determined by the x-ray and SZ properties of theoretical samples of triaxial isothermal "beta-model" clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. We calculate three estimates for H_0 for each cluster, based on their large and small apparent angular core radii, and their arithmetic mean. We average the estimates for H_0 for a sample of 25 clusters and find that the estimates have limited systematic error: the 99.7% confidence intervals for the mean estimated H_0, analyzing the clusters using either their large or mean angular core radius, are within 14% of the "true" (assumed) value of H_0 (and enclose it), for a triaxial beta-model cluster sample possessing a distribution of apparent x-ray cluster ellipticities consistent with that of observed x-ray clusters.
Real-Time Estimation Of Aiming Error Of Spinning Antenna
NASA Technical Reports Server (NTRS)
Dolinsky, Shlomo
1992-01-01
Spinning-spacecraft dynamics and amplitude variations in communications links studied from received-signal fluctuations. Mathematical model and associated analysis procedure provide real-time estimates of aiming error of remote rotating transmitting antenna radiating constant power in narrow, pencillike beam from spinning platform, and current amplitude of received signal. Estimates useful in analyzing and enhancing calibration of communication system, and in analyzing complicated dynamic effects in spinning platform and antenna-aiming mechanism.
Note: Statistical errors estimation for Thomson scattering diagnostics
Maslov, M.; Beurskens, M. N. A.; Flanagan, J.; Kempenaars, M.; Collaboration: JET-EFDA Contributors
2012-09-15
A practical way of estimating statistical errors of a Thomson scattering diagnostic measuring plasma electron temperature and density is described. Analytically derived expressions are successfully tested with Monte Carlo simulations and implemented in an automatic data processing code of the JET LIDAR diagnostic.
Error analysis for the Fourier domain offset estimation algorithm
NASA Astrophysics Data System (ADS)
Wei, Ling; He, Jieling; He, Yi; Yang, Jinsheng; Li, Xiqi; Shi, Guohua; Zhang, Yudong
2016-02-01
The offset estimation algorithm is crucial for the accuracy of the Shack-Hartmann wave-front sensor. Recently, the Fourier Domain Offset (FDO) algorithm has been proposed for offset estimation. Similar to other algorithms, the accuracy of FDO is affected by noise such as background noise, photon noise, and 'fake' spots. However, no adequate quantitative error analysis has been performed for FDO in previous studies, which is of great importance for practical applications of the FDO. In this study, we quantitatively analysed how the estimation error of FDO is affected by noise based on theoretical deduction, numerical simulation, and experiments. The results demonstrate that the standard deviation of the wobbling error is: (1) inversely proportional to the raw signal to noise ratio, and proportional to the square of the sub-aperture size in the presence of background noise; and (2) proportional to the square root of the intensity in the presence of photonic noise. Furthermore, the upper bound of the estimation error is proportional to the intensity of 'fake' spots and the sub-aperture size. The results of the simulation and experiments agreed with the theoretical analysis.
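The Fourier-domain offset idea can be illustrated with a minimal integer-pixel phase-correlation sketch on a synthetic spot image; the actual FDO algorithm for Shack-Hartmann sub-aperture spots, including sub-pixel refinement and noise handling, is more involved:

```python
import numpy as np

def fourier_offset(a, b):
    """Integer-pixel shift of b relative to a via phase correlation:
    the inverse FFT of the normalized cross-power spectrum peaks at
    the shift."""
    F = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.abs(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    wrap = lambda i, n: int(i - n) if i > n // 2 else int(i)
    return wrap(dy, a.shape[0]), wrap(dx, a.shape[1])

# Synthetic Gaussian spot and a copy circularly shifted by (3, -2) pixels.
y, x = np.mgrid[0:32, 0:32]
spot = np.exp(-((x - 16.0) ** 2 + (y - 16.0) ** 2) / 8.0)
shifted = np.roll(spot, (3, -2), axis=(0, 1))
print(fourier_offset(spot, shifted))   # (3, -2)
```

Adding background noise, photon noise, or a "fake" spot to `shifted` before estimation gives a direct numerical handle on the error scalings the paper derives.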
Concise Formulas for the Standard Errors of Component Loading Estimates.
ERIC Educational Resources Information Center
Ogasawara, Haruhiko
2002-01-01
Derived formulas for the asymptotic standard errors of component loading estimates to cover the cases of principal component analysis for unstandardized and standardized variables with orthogonal and oblique rotations. Used the formulas with a real correlation matrix of 355 subjects who took 12 psychological tests. (SLD)
Bootstrap Standard Error Estimates in Dynamic Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Browne, Michael W.
2010-01-01
Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…
Condition and Error Estimates in Numerical Matrix Computations
Konstantinov, M. M.; Petkov, P. H.
2008-10-30
This tutorial paper deals with sensitivity and error estimates in matrix computational processes. The main factors determining the accuracy of the result computed in floating-point machine arithmetic are considered. Special attention is paid to the perturbation analysis of matrix algebraic equations and unitary matrix decompositions.
Estimating Filtering Errors Using the Peano Kernel Theorem
Jerome Blair
2008-03-01
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise.
Error estimates for universal back-projection-based photoacoustic tomography
NASA Astrophysics Data System (ADS)
Pandey, Prabodh K.; Naik, Naren; Munshi, Prabhat; Pradhan, Asima
2015-07-01
Photo-acoustic tomography is a hybrid imaging modality that combines the advantages of optical as well as ultrasound imaging techniques to produce images with high resolution and good contrast at high penetration depths. The choice of reconstruction algorithm as well as experimental and computational parameters plays a major role in governing the accuracy of a tomographic technique; therefore, error estimates as these parameters vary are of great importance. Due to the finite support of the photo-acoustic source, the pressure signals are not band-limited, but in practice our detection system is. Hence the reconstructed image from ideal, noiseless band-limited forward data (hereafter called the band-limited reconstruction) is the best approximation that we have for the unknown object. In the present study, we report the error that arises in universal back-projection (UBP) based photo-acoustic reconstruction for planar detection geometry due to sampling and filtering of the forward data (pressure signals). Computational validation of the error estimates has been carried out for synthetic phantoms. Validation with noisy forward data has also been carried out to study the effect of noise on the error estimates derived in our work. Although we have derived the estimates for planar detection geometry, the derivations for spherical and cylindrical geometries follow accordingly.
NASA Technical Reports Server (NTRS)
Davis, J. L.; Herring, T. A.; Shapiro, I. I.; Rogers, A. E. E.; Elgered, G.
1985-01-01
Analysis of very long baseline interferometry data indicates that systematic errors in prior estimates of baseline length, of order 5 cm for approximately 8000-km baselines, were due primarily to mismodeling of the electrical path length of the troposphere and mesosphere ('atmospheric delay'). Here observational evidence for the existence of such errors in the previously used models for the atmospheric delay is discussed, and a new 'mapping' function for the elevation angle dependence of this delay is developed. The delay predicted by this new mapping function differs from ray trace results by less than approximately 5 mm, at all elevations down to 5 deg elevation, and introduces errors into the estimates of baseline length of less than about 1 cm, for the multistation intercontinental experiment analyzed here.
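For illustration, the elevation dependence of an atmospheric delay mapping function can be sketched as a continued fraction of the Marini type, compared with the plane-parallel 1/sin(elevation) baseline; the coefficients below are hypothetical placeholders, not the values derived in the paper:

```python
import math

def mapping_simple(elev_deg):
    # Plane-parallel atmosphere: delay grows as 1/sin(elevation).
    return 1.0 / math.sin(math.radians(elev_deg))

def mapping_cfrac(elev_deg, a=1.2e-3, b=3.0e-3, c=6.0e-2):
    """Continued-fraction (Marini-type) mapping function, normalized to
    unity at zenith. Coefficients a, b, c are hypothetical placeholders."""
    s = math.sin(math.radians(elev_deg))
    norm = 1.0 + a / (1.0 + b / (1.0 + c))
    return norm / (s + a / (s + b / (s + c)))

for e in (90.0, 30.0, 10.0, 5.0):
    print(e, mapping_simple(e), mapping_cfrac(e))
```

The continued-fraction form bends below 1/sin(e) at low elevations, which is the behaviour needed to track ray-trace results down to 5 degrees as described above.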
NASA Astrophysics Data System (ADS)
Houweling, S.; Krol, M.; Bergamaschi, P.; Frankenberg, C.; Dlugokencky, E. J.; Morino, I.; Notholt, J.; Sherlock, V.; Wunch, D.; Beck, V.; Gerbig, C.; Chen, H.; Kort, E. A.; Röckmann, T.; Aben, I.
2013-10-01
This study investigates the use of total column CH4 (XCH4) retrievals from the SCIAMACHY satellite instrument for quantifying large scale emissions of methane. A unique data set from SCIAMACHY is available spanning almost a decade of measurements, covering a period when the global CH4 growth rate showed a marked transition from stable to increasing mixing ratios. The TM5 4DVAR inverse modelling system has been used to infer CH4 emissions from a combination of satellite and surface measurements for the period 2003-2010. In contrast to earlier inverse modelling studies, the SCIAMACHY retrievals have been corrected for systematic errors using the TCCON network of ground based Fourier transform spectrometers. The aim is to further investigate the role of bias correction of satellite data in inversions. Methods for bias correction are discussed, and the sensitivity of the optimized emissions to alternative bias correction functions is quantified. It is found that the use of SCIAMACHY retrievals in TM5 4DVAR increases the estimated inter-annual variability of large-scale fluxes by 22% compared with the use of only surface observations. The difference in global methane emissions between two year periods before and after July 2006 is estimated at 27-35 Tg yr-1. The use of SCIAMACHY retrievals causes a shift in the emissions from the extra-tropics to the tropics of 50 ± 25 Tg yr-1. The large uncertainty in this value arises from the uncertainty in the bias correction functions. Using measurements from the HIPPO and BARCA aircraft campaigns, we show that systematic errors are a main factor limiting the performance of the inversions. To further constrain tropical emissions of methane using current and future satellite missions, extended validation capabilities in the tropics are of critical importance.
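A bias correction of the kind described (regressing satellite-minus-reference differences on a simple predictor and subtracting the fit) can be sketched with synthetic numbers; the latitude-dependent bias model and all values below are hypothetical, not the TCCON-based correction functions of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical co-located pairs: satellite XCH4 with a latitude-dependent
# bias against ground-based (TCCON-like) reference values, in ppb.
lat = rng.uniform(-60.0, 60.0, size=200)
truth = 1780.0 + 0.1 * lat
sat = truth + (5.0 + 0.08 * lat) + rng.normal(0.0, 4.0, size=200)

# Fit the bias as a low-order polynomial in latitude and remove it.
coef = np.polyfit(lat, sat - truth, deg=1)
sat_corrected = sat - np.polyval(coef, lat)

print(np.mean(sat - truth))            # raw bias, around 5 ppb here
print(np.mean(sat_corrected - truth))  # mean bias removed by the fit
```

The sensitivity of inferred emissions to the choice of bias correction function, emphasized in the abstract, corresponds here to the choice of predictor and polynomial degree.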
Error estimates and specification parameters for functional renormalization
Schnoerr, David; Boettcher, Igor; Pawlowski, Jan M. (ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt); Wetterich, Christof
2013-07-15
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximate solutions obtained by means of truncations depend not only on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation, the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency-independent cutoff function.
An Anisotropic A posteriori Error Estimator for CFD
NASA Astrophysics Data System (ADS)
Feijóo, Raúl A.; Padra, Claudio; Quintana, Fernando
In this article, a robust anisotropic adaptive algorithm is presented, to solve compressible-flow equations using a stabilized CFD solver and automatic mesh generators. The association includes a mesh generator, a flow solver, and an a posteriori error-estimator code. The estimator was selected among several choices available (Almeida et al. (2000). Comput. Methods Appl. Mech. Engng, 182, 379-400; Borges et al. (1998). "Computational mechanics: new trends and applications". Proceedings of the 4th World Congress on Computational Mechanics, Bs.As., Argentina) giving a powerful computational tool. The main aim is to capture solution discontinuities, in this case, shocks, using the least amount of computational resources, i.e. elements, compatible with a solution of good quality. This leads to high aspect-ratio elements (stretching). To achieve this, a directional error estimator was specifically selected. The numerical results show good behavior of the error estimator, resulting in strongly-adapted meshes in few steps, typically three or four iterations, enough to capture shocks using a moderate and well-distributed amount of elements.
Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics
NASA Technical Reports Server (NTRS)
Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about the reliability of a numerical simulation by estimating the numerical approximation error, computational model induced errors, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation so that its reliability can be improved.
Test models for improving filtering with model errors through stochastic parameter estimation
Gershgorin, B.; Harlim, J.; Majda, A. J.
2010-01-01
The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
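The core idea of stochastic parameter estimation, augmenting the filter state with the uncertain model parameters, can be illustrated with a minimal sketch. The example below is a toy scalar system with an unknown additive bias tracked by a Kalman filter; it is not the SPEKF algorithm itself (which uses exact mean/covariance formulas for a nonlinear test model), and all numbers are invented.

```python
import numpy as np

# Toy stochastic parameter estimation via state augmentation: a scalar signal
# with an unknown additive bias b is filtered by a Kalman filter whose state
# is augmented to [x, b]. This conveys the general idea behind SPEKF-type
# algorithms, not the paper's exact formulas.
rng = np.random.default_rng(0)
a, b_true = 0.9, 0.5          # dynamics coefficient and unknown additive bias
F = np.array([[a, 1.0],       # x_{k+1} = a x_k + b_k + w_k
              [0.0, 1.0]])    # b_{k+1} = b_k (bias treated as constant)
H = np.array([[1.0, 0.0]])    # only x is observed
Q = np.diag([0.01, 0.0])      # process noise (none on the bias)
R = np.array([[0.04]])        # observation noise variance

x_est = np.zeros(2)           # initial guess: x = 0, b = 0
P = np.eye(2)
x_true = 0.0
for _ in range(500):
    x_true = a * x_true + b_true + rng.normal(0.0, 0.1)
    y = x_true + rng.normal(0.0, 0.2)
    # predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (y - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P
# x_est[1] is the estimated bias, which approaches b_true
```

The augmented pair (x, b) is observable here, so the bias estimate converges even though b is never measured directly; this is the same mechanism that lets SPEKF correct additive model biases on the fly.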
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.
2013-01-01
Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).
Second-order systematic errors in Mueller matrix dual rotating compensator ellipsometry.
Broch, Laurent; En Naciri, Aotmane; Johann, Luc
2010-06-10
We investigate the systematic errors at the second order for a Mueller matrix ellipsometer in the dual rotating compensator configuration. Starting from a general formalism, we derive explicit second-order errors in the Mueller matrix coefficients of a given sample. We present the errors caused by the azimuthal inaccuracy of the optical components and their influence on the measurements. We demonstrate that methods based on four-zone or two-zone averaging measurements are effective in eliminating the errors due to the compensators. For the other elements, it is shown that the second-order systematic errors can be canceled only for some coefficients of the Mueller matrix. The calibration step for the analyzer and the polarizer is developed. This important step is necessary to avoid azimuthal inaccuracy in these elements. Numerical simulations and experimental measurements are presented and discussed. PMID:20539341
Adjustment of systematic errors in ALS data through surface matching
NASA Astrophysics Data System (ADS)
Kumari, Pravesh; Carter, William E.; Shrestha, Ramesh L.
2011-05-01
Surface matching is a well-researched topic in both computer vision (CV) and terrestrial laser scanning (TLS), or ground-based light detection and ranging (LiDAR), but the extent of the range images derived from these technologies is typically orders of magnitude smaller than those derived from airborne laser scanning (ALS), also known as airborne LiDAR. Iterative closest point (ICP) and its variants have been successfully used to align and register multiple overlapping views of the range images for CV and TLS applications. However, many challenges are encountered in applying the ICP approach to ALS data sets. In this paper, we address these issues, explore the possibility of automating the algorithm, and present a technique to adjust systematic discrepancies in overlapping strips, using geometrical attributes of a given terrain. In this method, the ALS point samples used in the algorithm are selected depending on their ability to constrain the relative movement between the overlapping laser strips. The points from overlapping strips are matched with a modified point-to-plane variant of the ICP method.
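A single linearized point-to-plane ICP step, the building block behind such strip adjustment, can be sketched briefly. The example below recovers a small rigid misalignment between two synthetic point sets; it is an illustrative sketch under invented data, not the paper's method (which additionally selects points by how well they constrain the solution).

```python
import numpy as np

# One linearized point-to-plane ICP step: solve for a small rotation omega and
# translation t minimizing sum_i (((p_i + omega x p_i + t) - q_i) . n_i)^2.
rng = np.random.default_rng(1)

def rodrigues(omega):
    """Rotation matrix for a rotation vector omega (Rodrigues' formula)."""
    theta = np.linalg.norm(omega)
    k = omega / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

q = rng.uniform(-1.0, 1.0, (200, 3))            # target points (one strip)
n = rng.normal(size=(200, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)   # unit surface normals at q
omega_true = np.array([0.01, -0.02, 0.015])     # small misalignment (rad)
t_true = np.array([0.05, -0.03, 0.02])
p = q @ rodrigues(omega_true).T + t_true        # misaligned overlapping strip

# Build the linear system A [omega; t] = b and solve by least squares:
# d(residual)/d(omega) = p x n, d(residual)/d(t) = n.
A = np.hstack([np.cross(p, n), n])
b = -np.einsum('ij,ij->i', p - q, n)            # -(p_i - q_i) . n_i
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
omega_est, t_est = sol[:3], sol[3:]
# To first order, the step recovers the inverse of the applied transform.
```

In practice this step is iterated with re-established correspondences; the linearization is accurate here because the misalignment is small, which is exactly the regime of residual systematic errors between overlapping ALS strips.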
Drug treatment of inborn errors of metabolism: a systematic review
Alfadhel, Majid; Al-Thihli, Khalid; Moubayed, Hiba; Eyaid, Wafaa; Al-Jeraisy, Majed
2013-01-01
Background: The treatment of inborn errors of metabolism (IEM) has seen significant advances over the last decade. Many medicines have been developed and the survival rates of some patients with IEM have improved. Dosages of drugs used for the treatment of various IEM can be obtained from a range of sources but tend to vary among these sources. Moreover, the published dosages are not usually supported by the level of existing evidence, and they are commonly based on personal experience. Methods: A literature search was conducted to identify key material published in English in relation to the dosages of medicines used for specific IEM. Textbooks, peer-reviewed articles, papers and other journal items were identified. The PubMed and Embase databases were searched for material published since 1947 and 1974, respectively. The medications found and their respective dosages were graded according to their level of evidence, using the grading system of the Oxford Centre for Evidence-Based Medicine. Results: 83 medicines used in various IEM were identified. The dosages of 17 medications (21%) had grade 1 level of evidence, 61 (74%) had grade 4, two medications had grades 2 and 3 (one each), and three had grade 5. Conclusions: To the best of our knowledge, this is the first review to address this matter and the authors hope that it will serve as a quickly accessible reference for medications used in this important clinical field. PMID:23532493
Divergent estimation error in portfolio optimization and in linear regression
NASA Astrophysics Data System (ADS)
Kondor, I.; Varga-Haszonits, I.
2008-08-01
The problem of estimation error in portfolio optimization is discussed in the limit where the portfolio size N and the sample size T go to infinity such that their ratio is fixed. The estimation error strongly depends on the ratio N/T and diverges at a critical value of this parameter. This divergence is the manifestation of an algorithmic phase transition; it is accompanied by a number of critical phenomena and displays universality. As the structure of a large number of multidimensional regression and modelling problems is very similar to portfolio optimization, the scope of these observations extends far beyond finance and covers a large number of problems in operations research, machine learning, bioinformatics, medical science, economics, and technology.
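The divergence of estimation error as N/T approaches its critical value can be seen in a small Monte Carlo experiment. The setup below is invented for illustration (i.i.d. assets with identity covariance, so the true minimum-variance portfolio is equal weights); it is not the paper's calculation, only a sketch of the phenomenon, whose asymptotic inflation factor is 1/(1 - N/T).

```python
import numpy as np

# Out-of-sample risk of the sample-covariance minimum-variance portfolio,
# relative to the true optimum. The inflation blows up as N -> T.
rng = np.random.default_rng(2)

def risk_inflation(N, T, trials=40):
    """Average out-of-sample risk relative to the true optimum (= 1/N)."""
    ratios = []
    for _ in range(trials):
        X = rng.normal(size=(T, N))            # T return observations
        S = X.T @ X / T                        # sample covariance
        w = np.linalg.solve(S, np.ones(N))     # sample min-variance weights
        w /= w.sum()
        ratios.append(N * w @ w)               # true risk / optimal risk
    return float(np.mean(ratios))

mild = risk_inflation(50, 200)     # N/T = 0.25: inflation near 1/(1 - 0.25)
severe = risk_inflation(180, 200)  # N/T = 0.9: inflation roughly tenfold
```

As N/T grows toward 1 the sample covariance becomes ill-conditioned and the estimated weights increasingly reflect noise, which is the algorithmic phase transition the abstract refers to.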
GPS/DR Error Estimation for Autonomous Vehicle Localization
Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In
2015-01-01
Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied in an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning achieved accuracy at the sub-meter level. PMID:26307997
A study for systematic errors of the GLA forecast model in tropical regions
NASA Technical Reports Server (NTRS)
Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin
1988-01-01
From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.
Analysis of systematic error in "bead method" measurements of meteorite bulk volume and density
NASA Astrophysics Data System (ADS)
Macke, Robert J., S.J.; Britt, Daniel T.; Consolmagno, Guy J., S.J.
2010-02-01
The Archimedean glass bead method for determining meteorite bulk density has become widely applied. We used well characterized, zero-porosity quartz and topaz samples to determine the systematic error in the glass bead method to support bulk density measurements of meteorites for our ongoing meteorite survey. Systematic error varies according to bead size, container size and settling method, but in all cases is less than 3%, and generally less than 2%. While measurements using larger containers (above 150 cm³) exhibit no discernible systematic error but much reduced precision, higher precision measurements with smaller containers do exhibit systematic error. For a 77 cm³ container using 40-80 μm diameter beads, the systematic error is effectively eliminated within measurement uncertainties when a "secured shake" settling method is employed, in which the container is held securely to the shake platform during a 5 s period of vigorous shaking. For larger 700-800 μm diameter beads using the same method, bulk volumes are uniformly overestimated by 2%. Other settling methods exhibit sample-volume-dependent biases. For all methods, reliability of measurement is severely reduced for samples below ~5 cm³ (10-15 g for typical meteorites), providing a lower-limit selection criterion for measurement of meteoritical samples.
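Removing a known fractional volume bias of the kind reported above is simple arithmetic; a minimal sketch follows. Only the 2% bias figure comes from the abstract; the mass and volume values are invented for illustration.

```python
# Correcting a bulk-density measurement for a fractional bulk-volume
# overestimate (e.g. the ~2% reported for 700-800 um beads).
def corrected_bulk_density(mass_g, measured_volume_cm3, volume_bias=0.02):
    """Bulk density after dividing out a fractional volume overestimate."""
    true_volume = measured_volume_cm3 / (1.0 + volume_bias)
    return mass_g / true_volume

# Hypothetical sample: 30 g, bead method reads 10.2 cm^3 with large beads.
rho = corrected_bulk_density(30.0, 10.2)
# corrected volume is 10.0 cm^3, so the bulk density is 3.0 g/cm^3
```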
Xiong, Kun; Jiang, Jie
2015-01-01
Compared with traditional star trackers, intensified high-accuracy star trackers equipped with an image intensifier exhibit overwhelmingly superior dynamic performance. However, the multiple-fiber-optic faceplate structure in the image intensifier complicates the optoelectronic detecting system of star trackers and may cause considerable systematic centroid errors and poor attitude accuracy. All the sources of systematic centroid errors related to fiber optic faceplates (FOFPs) throughout the detection process of the optoelectronic system were analyzed. Based on the general expression of the systematic centroid error deduced in the frequency domain and the FOFP modulation transfer function, an accurate expression that described the systematic centroid error of FOFPs was obtained. Furthermore, reduction of the systematic error between the optical lens and the input FOFP of the intensifier, the one among multiple FOFPs and the one between the output FOFP of the intensifier and the imaging chip of the detecting system were discussed. Two important parametric constraints were acquired from the analysis. The correctness of the analysis on the optoelectronic detecting system was demonstrated through simulation and experiment. PMID:26016920
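The generic notion of systematic centroid error, distinct from the paper's FOFP-specific frequency-domain model, can be illustrated with a simple sampled-spot example: center-of-mass centroiding of a Gaussian star spot over a finite pixel window produces an S-shaped systematic error versus the spot's subpixel position, because the window truncates the tails asymmetrically. All parameters below are invented.

```python
import numpy as np

# Systematic centroiding error of a Gaussian spot on a small pixel window.
def centroid_error(true_center, sigma=0.7, half_window=2):
    pixels = np.arange(-half_window, half_window + 1)
    intensity = np.exp(-((pixels - true_center) ** 2) / (2.0 * sigma**2))
    centroid = (pixels * intensity).sum() / intensity.sum()
    return centroid - true_center

shifts = np.linspace(-0.5, 0.5, 21)          # subpixel spot positions
errors = np.array([centroid_error(s) for s in shifts])
# zero error at the symmetric position, small nonzero error elsewhere
```

The error vanishes only when the spot is centered on a pixel; in a real detecting chain, the FOFP modulation transfer function reshapes the spot and hence this systematic curve, which is what the paper's analysis quantifies.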
Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons
Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit
2012-01-01
In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π. PMID:24027379
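Mean value coordinates themselves are short to implement, which helps make the properties underlying such error estimates concrete. The sketch below uses the standard tan(α/2) form for a point strictly inside a convex polygon with counter-clockwise vertices, and checks the partition-of-unity and linear-precision properties; it is a generic illustration, not code from the paper.

```python
import numpy as np

# Mean value coordinates on a convex polygon (CCW vertices, x strictly inside).
def mean_value_coordinates(vertices, x):
    v = vertices - x                          # vectors from x to each vertex
    r = np.linalg.norm(v, axis=1)
    m = len(vertices)
    t = np.empty(m)                           # t[i] = tan(alpha_i / 2)
    for i in range(m):
        j = (i + 1) % m
        cross = v[i, 0] * v[j, 1] - v[i, 1] * v[j, 0]
        t[i] = cross / (r[i] * r[j] + v[i] @ v[j])
    w = np.array([(t[i - 1] + t[i]) / r[i] for i in range(m)])
    return w / w.sum()

square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
lam = mean_value_coordinates(square, np.array([0.3, 0.6]))
# lam sums to 1 and reproduces x as a convex combination of the vertices
```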
SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors
Kathuria, K; Siebers, J
2014-06-01
Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (the same one-leaf offset per segment per beam), and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to the PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf pairs) being simultaneously offset in many (5) of the control points (total 10-18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2-3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery.
These two sources of error are “visually complementary” and uncorrelated (albeit not additive in the final error) and one can easily incorporate error resulting from machine delivery in an error model based purely on tumor motion.
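A toy version of the perturbation geometry helps show why a single offset leaf has little effect: the open aperture area changes only fractionally. The leaf positions and widths below are invented, and the TPS-specific file handling from the abstract is not reproduced; this is an illustrative sketch only.

```python
import numpy as np

# Apply a systematic 5 mm offset to one MLC leaf of a toy segment and
# compare the open aperture area before and after.
leaf_width_cm = 0.5
left = np.array([-3.0, -3.5, -4.0, -3.5, -3.0])    # left-bank positions (cm)
right = np.array([3.0, 3.5, 4.0, 3.5, 3.0])        # right-bank positions (cm)

def aperture_area(left_bank, right_bank):
    """Open area of the segment: leaf width times the sum of leaf gaps."""
    return leaf_width_cm * np.sum(right_bank - left_bank)

baseline = aperture_area(left, right)
offset = left.copy()
offset[2] += 0.5                                   # 5 mm offset on one leaf
perturbed = aperture_area(offset, right)
relative_change = abs(perturbed - baseline) / baseline
# one offset leaf perturbs the aperture by only a few percent at most
```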
The nature of the systematic radiometric error in the MGS TES spectra
NASA Astrophysics Data System (ADS)
Pankine, Alexey A.
2015-05-01
Several systematic radiometric errors are known to affect the data collected by the Thermal Emission Spectrometer (TES) onboard Mars Global Surveyor (MGS). The time-varying, wavenumber-dependent error that significantly increased in magnitude as the MGS mission progressed is discussed in detail. This error mostly affects spectra of cold (nighttime and polar cap) surfaces and atmospheric spectra in limb viewing geometry. It is proposed here that the source of the radiometric error is a periodic sampling error of the TES interferograms. A simple model of the error is developed that allows predicting its spectral shape for any viewing geometry based on the observed uncalibrated spectrum. Comparison of the radiometric errors observed in the TES space views and those predicted by the model shows excellent agreement. Spectral shapes of the errors for nadir and limb spectra are simulated based on representative TES spectra. In nighttime and limb spectra, and in spectra of cold polar regions, these radiometric errors can result in an error of ±3-5 K in the retrieved atmospheric and surface temperatures, and significant errors in retrieved opacities of atmospheric aerosols. The model of the TES radiometric error presented here can be used to improve the accuracy of the TES retrievals and increase scientific return from the MGS mission.
NASA Astrophysics Data System (ADS)
Wang, Jie; Liang, Xingdong; Chen, Longyong; Ding, Chibiao
2015-01-01
Orthogonal frequency division multiplexing (OFDM) chirp waveform, which is composed of two successive identical linear frequency modulated subpulses, is a newly proposed orthogonal waveform scheme for multiple-input multiple-output synthetic aperture radar (SAR) systems. However, according to the waveform model, radar systematic error, which introduces a phase or amplitude difference between the subpulses of the OFDM waveform, significantly degrades the orthogonality. The impact of radar systematic error on the waveform orthogonality is mainly caused by the systematic nonlinearity rather than the thermal noise or the frequency-dependent systematic error. Due to the influence of the causal filters, the first subpulse leaks into the second one. The leaked signal interacts with the second subpulse in the nonlinear components of the transmitter. This interaction renders a dramatic phase distortion at the beginning of the second subpulse. The resultant distortion, which leads to a phase difference between the subpulses, seriously damages the waveform's orthogonality. The impact of radar systematic error on the waveform orthogonality is addressed. Moreover, the impact of the systematic nonlinearity on the waveform is avoided by adding a standby interval between the subpulses. Theoretical analysis is validated by practical experiments based on a C-band SAR system.
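Why a phase difference between the subpulses breaks orthogonality can be seen in the frequency domain: an exactly repeated subpulse puts all of its energy in the even DFT bins, while a phase error φ on the second subpulse leaks a fraction sin²(φ/2) of the energy into the odd bins. The sketch below demonstrates this with an invented chirp rate and phase error; it is a generic illustration, not the paper's waveform parameters.

```python
import numpy as np

# Two identical LFM subpulses vs. a pair with a systematic phase error.
n = 256
tau = np.arange(n) / n
subpulse = np.exp(1j * np.pi * 50.0 * tau**2)   # one LFM (chirp) subpulse

def odd_bin_fraction(waveform):
    """Fraction of spectral energy in odd DFT bins."""
    power = np.abs(np.fft.fft(waveform)) ** 2
    return power[1::2].sum() / power.sum()

ideal = np.concatenate([subpulse, subpulse])            # OFDM chirp waveform
phi = 0.3                                               # phase error (rad)
distorted = np.concatenate([subpulse, subpulse * np.exp(1j * phi)])

leak_ideal = odd_bin_fraction(ideal)         # essentially zero
leak_distorted = odd_bin_fraction(distorted) # about sin(phi/2)**2 ~ 0.022
```

Because the two OFDM channels are multiplexed on the even and odd bins, this leaked energy appears directly as cross-channel interference, which is the orthogonality loss the abstract describes.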
SYSTEMATIC ERROR REDUCTION: NON-TILTED REFERENCE BEAM METHOD FOR LONG TRACE PROFILER.
QIAN,S.; QIAN, K.; HONG, Y.; SENG, L.; HO, T.; TAKACS, P.
2007-08-25
Systematic error in the Long Trace Profiler (LTP) has become the major error source as measurement accuracy enters the nanoradian and nanometer regime. Great efforts have been made to reduce the systematic error at a number of synchrotron radiation laboratories around the world. Generally, the LTP reference beam has to be tilted away from the optical axis in order to avoid fringe overlap between the sample and reference beams. However, a tilted reference beam will result in considerable systematic error due to optical system imperfections, which is difficult to correct. Six methods of implementing a non-tilted reference beam in the LTP are introduced: (1) application of an external precision angle device to measure and remove slide pitch error without a reference beam, (2) an independent slide pitch test by use of a non-tilted reference beam, (3) a non-tilted reference test combined with a tilted sample, (4) penta-prism scanning mode without a reference beam correction, (5) a non-tilted reference using a second optical head, and (6) alternate switching of data acquisition between the sample and reference beams. With a non-tilted reference method, the measurement accuracy can be improved significantly. Some measurement results are presented. Systematic error in the sample beam arm is not addressed in this paper and should be treated separately.
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
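The baseline the abstract compares against, Richardson extrapolation, is worth a minimal sketch: for a p-th order method, the discretization error of the fine-grid result is estimated as (I_fine - I_coarse)/(2^p - 1). The example below applies this to the second-order trapezoid rule; it illustrates the comparison baseline only, not MNP itself (which needs one grid but an additional numerical solution).

```python
import numpy as np

# Richardson-style discretization error estimate for the trapezoid rule.
def trapezoid(f, a, b, n):
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

exact = 1.0 - np.cos(1.0)                       # integral of sin on [0, 1]
coarse = trapezoid(np.sin, 0.0, 1.0, 16)
fine = trapezoid(np.sin, 0.0, 1.0, 32)
error_estimate = (fine - coarse) / (2**2 - 1)   # p = 2 for trapezoid
true_error = exact - fine
# the estimate matches the true fine-grid error to leading order in h
```

The systematic grid refinement shown here is precisely the extra cost that MNP/defect correction avoids by requiring only one additional solution on the same grid.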
Kassabian, Nazelie; Presti, Letizia Lo; Rispoli, Francesco
2014-01-01
Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field towards the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority to sustain the investments for modernizing the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DC, and the estimation error is compared to the noise added during simulation. The results show that for large enough correlation distance to Reference Station (RS) distance separation ratio values, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold. PMID:24922454
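The simulation setup described above can be sketched compactly: a zero-mean DC field with Gauss-Markov spatial correlation C(d) = σ² exp(-d/d_c) is observed noisily at several reference stations, and the LMMSE estimate at the user location is the conditional mean under that model. All station positions and noise levels below are invented; this is a minimal sketch, not the paper's scenario.

```python
import numpy as np

# LMMSE estimation of a differential correction with Gauss-Markov correlation.
rng = np.random.default_rng(3)
stations = np.array([0.0, 30.0, 60.0, 90.0])   # RS positions (km), invented
user = 45.0                                    # user location (km)
d_c, sigma2, noise_std = 100.0, 1.0, 0.5       # correlation distance, field var, noise

points = np.append(stations, user)
dist = np.abs(points[:, None] - points[None, :])
C = sigma2 * np.exp(-dist / d_c)               # joint field covariance
Cyy = C[:4, :4] + noise_std**2 * np.eye(4)     # measurement covariance
Cxy = C[4, :4]                                 # field/measurement cross-covariance
gain = np.linalg.solve(Cyy, Cxy)               # LMMSE weights
theory_mse = sigma2 - Cxy @ gain               # posterior error variance

L = np.linalg.cholesky(C + 1e-12 * np.eye(5))
errors = []
for _ in range(4000):
    field = L @ rng.normal(size=5)             # correlated DC field sample
    y = field[:4] + noise_std * rng.normal(size=4)
    errors.append(gain @ y - field[4])
empirical_mse = float(np.mean(np.square(errors)))
# empirical_mse matches theory_mse and beats the prior variance sigma2
```

The paper's sensitivity question corresponds to computing `gain` with a wrong d_c while the field is generated with the true one; when the correlation distance is small relative to the station spacing, the gain mostly amplifies noise, which is the degradation regime the abstract reports.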
The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation over Oceans
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Suarez, Max J.; Bacmeister, Julio T.; Chen, Baode; Takacs, Lawrence L.
2006-01-01
This study provides explanations for some of the experimental findings of Chao (2000) and Chao and Chen (2001) concerning the mechanisms responsible for the ITCZ in an aqua-planet model. These explanations are then applied to explain the origin of some of the systematic errors in the GCM simulation of ITCZ precipitation over oceans. The ITCZ systematic errors are highly sensitive to model physics and, by extension, model horizontal resolution. The findings in this study, along with those of Chao (2000) and Chao and Chen (2001, 2004), contribute to building a theoretical foundation for ITCZ study. A few possible methods of alleviating the systematic errors in the GCM simulation of the ITCZ are discussed. This study uses a recent version of the Goddard Modeling and Assimilation Office's Goddard Earth Observing System (GEOS-5) GCM.
CADNA: a library for estimating round-off error propagation
NASA Astrophysics Data System (ADS)
Jézéquel, Fabienne; Chesneaux, Jean-Marie
2008-06-01
The CADNA library enables one to estimate round-off error propagation using a probabilistic approach. With CADNA the numerical quality of any simulation program can be controlled. Furthermore, by detecting all the instabilities which may occur at run time, a numerical debugging of the user code can be performed. CADNA provides new numerical types on which round-off errors can be estimated. Slight modifications are required to control a code with CADNA, mainly changes in variable declarations, input and output. This paper describes the features of the CADNA library and shows how to interpret the information it provides concerning round-off error propagation in a code.
Program summary
Program title: CADNA
Catalogue identifier: AEAT_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAT_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 53 420
No. of bytes in distributed program, including test data, etc.: 566 495
Distribution format: tar.gz
Programming language: Fortran
Computer: PC running LINUX with an i686 or an ia64 processor, UNIX workstations including SUN, IBM
Operating system: LINUX, UNIX
Classification: 4.14, 6.5, 20
Nature of problem: A simulation program which uses floating-point arithmetic generates round-off errors, due to the rounding performed at each assignment and at each arithmetic operation. Round-off error propagation may invalidate the result of a program. The CADNA library enables one to estimate round-off error propagation in any simulation program and to detect all numerical instabilities that may occur at run time.
Solution method: The CADNA library [1] implements Discrete Stochastic Arithmetic [2-4], which is based on a probabilistic model of round-off errors.
The program is run several times with a random rounding mode, generating different results each time. From this set of results, CADNA estimates the number of exact significant digits in the result that would have been computed with standard floating-point arithmetic.
Restrictions: CADNA requires a Fortran 90 (or newer) compiler. In the program to be linked with the CADNA library, round-off errors on complex variables cannot be estimated. Furthermore, array functions such as product or sum must not be used. Only the arithmetic operators and the abs, min, max and sqrt functions can be used for arrays.
Running time: The version of a code which uses CADNA runs at least three times slower than its floating-point version. This cost depends on the computer architecture and can be higher if the detection of numerical instabilities is enabled. In this case, the cost may be related to the number of instabilities detected.
References:
[1] The CADNA library, URL address: http://www.lip6.fr/cadna.
[2] J.-M. Chesneaux, L'arithmétique stochastique et le logiciel CADNA, Habilitation à diriger des recherches, Université Pierre et Marie Curie, Paris, 1995.
[3] J. Vignes, A stochastic arithmetic for reliable scientific computation, Math. Comput. Simulation 35 (1993) 233-261.
[4] J. Vignes, Discrete stochastic arithmetic for validating results of numerical software, Numer. Algorithms 37 (2004) 377-390.
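The principle behind Discrete Stochastic Arithmetic can be imitated in a few lines: perturb each intermediate result at the level of the last bit, repeat the computation, and read the number of reliable digits off the spread of the samples. The sketch below is a toy imitation in Python, not the CADNA library itself, and the perturbation scheme is a deliberate simplification of per-operation random rounding.

```python
import numpy as np

# Toy imitation of Discrete Stochastic Arithmetic: catastrophic cancellation
# in (1 + 1e-12) - 1 is flagged with few reliable significant digits, while
# a stable computation keeps nearly full precision.
rng = np.random.default_rng(4)

def jitter(value):
    """Mimic a random rounding mode: perturb the last bit at random."""
    return value * (1.0 + 2.0**-52 * rng.integers(-1, 2))

def significant_digits(samples):
    """Estimate common significant digits from the spread of the samples."""
    samples = np.asarray(samples, dtype=float)
    spread = samples.std()
    if spread == 0.0:
        return 15.0
    return max(0.0, np.log10(abs(samples.mean()) / spread))

unstable = [jitter(jitter(1.0 + 1e-12) - 1.0) for _ in range(30)]
stable = [jitter(2.0 * 1e-12) for _ in range(30)]
digits_unstable = significant_digits(unstable)  # only a few digits survive
digits_stable = significant_digits(stable)      # nearly full double precision
```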
Local error estimates for discontinuous solutions of nonlinear hyperbolic equations
NASA Technical Reports Server (NTRS)
Tadmor, Eitan
1989-01-01
Let u(x,t) be the possibly discontinuous entropy solution of a nonlinear scalar conservation law with smooth initial data. Suppose u_epsilon(x,t) is the solution of an approximate viscosity regularization, where epsilon > 0 is the small viscosity amplitude. It is shown that by post-processing the small viscosity approximation u_epsilon, pointwise values of u and its derivatives can be recovered with an error as close to epsilon as desired. The analysis relies on the adjoint problem of the forward error equation, which in this case amounts to a backward linear transport equation with discontinuous coefficients. The novelty of this approach is to use a (generalized) E-condition of the forward problem in order to deduce a W^(1,infinity) energy estimate for the discontinuous backward transport equation; this, in turn, leads to an epsilon-uniform estimate on moments of the error u_epsilon - u. This approach does not follow the characteristics and, therefore, applies mutatis mutandis to other approximate solutions such as E-difference schemes.
Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao
2016-02-01
This paper develops a roundness measurement model for cylindrical components that accounts for multiple systematic errors: eccentricity, probe offset, the radius of the probe tip head, and tilt error. The effects of these systematic errors and of the component radius on the roundness measurement are analysed. The proposed method is built on an instrument with a high-precision rotating spindle, and its effectiveness is verified by experiment with a standard cylindrical component measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 µm using the proposed model for an object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius. PMID:26931894
Lamoreaux, Steve; Wong, Douglas
2015-06-01
The basic theory of temporal mechanical-fluctuation-induced systematic errors in Casimir force experiments is developed, and applications of this theory to several experiments are reviewed. This class of systematic error enters in a manner similar to the usual surface roughness correction, but unlike the treatment of surface roughness, for which an exact result requires an electromagnetic mode analysis, time-dependent fluctuations can be treated exactly, assuming the fluctuation times are much longer than the zero-point and thermal fluctuation correlation times of the electromagnetic field between the plates. An experimental method for measuring absolute distance with high bandwidth is also described and measurement data presented. PMID:25965319
The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.
2006-01-01
Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons why these systematic errors arise, and why these remedies work, have remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.
Wu Yan; Shannon, Mark A.
2006-04-15
The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in the scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and that it is common to all tip-sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and correct the ac-driving-amplitude-dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac-driving-amplitude-dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM, and CPD measurement results of two systems, platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon, are discussed.
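The proposed correction reduces to a straight-line fit of the measured CPD against 1/V_ac, with the intercept (the limit 1/V_ac → 0) taken as the true CPD. A minimal sketch with made-up readings constructed to follow the assumed CPD_true + b/V_ac model:

```python
import numpy as np

# Hypothetical SKPM readings: the measured CPD drifts with the ac drive
# amplitude V_ac through an assumed tracking-error term b / V_ac.
V_ac = np.array([0.5, 1.0, 2.0, 4.0])              # drive amplitudes (V)
cpd_measured = np.array([0.90, 0.70, 0.60, 0.55])  # 0.5 V true CPD + 0.2/V_ac

# regress CPD on 1/V_ac; the intercept is the drive-independent CPD
slope, intercept = np.polyfit(1.0 / V_ac, cpd_measured, 1)
print(f"true CPD estimate (1/V_ac -> 0): {intercept:.3f} V")
```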
Non-Systematic Errors of Monthly Oceanic Rainfall Derived From TMI
NASA Technical Reports Server (NTRS)
Chiu, Long S.; Chang, Alfred T.-C.
2000-01-01
A major objective of the Tropical Rainfall Measuring Mission (TRMM) is to produce a multi-year time series of monthly rainfall over 5° latitude by 5° longitude boxes with an uncertainty of 1 mm/day for low rain rates and 10% for high rain rates. Based on some simple assumptions about the error structure, we compute the non-systematic errors of monthly oceanic rainfall over the same space/time domain derived from data taken by the Special Sensor Microwave Imager (SSM/I) on board the Defense Meteorological Satellite Program (DMSP) satellites and the TRMM Microwave Imager (TMI). The mean rain rates over a two-year period (1998-1999) are calculated to be 3.0, 2.85, and 2.94 mm/day for SSM/I onboard the DMSP F-13, F-14, and TMI, respectively. Assuming that the non-systematic errors for each sensor are independent, the errors are calculated to be 22.2%, 22.4% and 19.7% for F-13, F-14 and TMI, respectively. The non-systematic error for the TMI is smaller than that for either F-13 or F-14 SSM/I at low rain rates but is comparable at rain rates higher than about 5 mm/day. The TRMM objective of 1 mm/day for non-systematic error is met by TMI for rain rates up to 5-6 mm/day. For higher rain rates, the non-systematic error is in the 15% range. The goal of a 10% error for high rain rates may be realized by a combination of sensor measurements from multiple satellites, such as that advocated by the Global Precipitation Mission (GPM).
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Improved Soundings and Error Estimates using AIRS/AMSU Data
NASA Technical Reports Server (NTRS)
Susskind, Joel
2006-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 80 percent effective cloud cover. The basic theory used to analyze AIRS/AMSU/HSB data in the presence of clouds, called the at-launch algorithm, and a post-launch algorithm which differed only in minor details from the at-launch algorithm, have been described previously. The post-launch algorithm, referred to as AIRS Version 4.0, has been used by the Goddard DAAC to analyze and distribute AIRS retrieval products. In this paper we show progress made toward the AIRS Version 5.0 algorithm which will be used by the Goddard DAAC starting late in 2006. A new methodology has been developed to provide accurate case by case error estimates for retrieved geophysical parameters and for the channel by channel cloud cleared radiances used to derive the geophysical parameters from the AIRS/AMSU observations. These error estimates are in turn used for quality control of the derived geophysical parameters and clear column radiances. Improvements made to the retrieval algorithm since Version 4.0 are described as well as results comparing Version 5.0 retrieval accuracy and spatial coverage with those obtained using Version 4.0.
Error Estimation of An Ensemble Statistical Seasonal Precipitation Prediction Model
NASA Technical Reports Server (NTRS)
Shen, Samuel S. P.; Lau, William K. M.; Kim, Kyu-Myong; Li, Gui-Long
2001-01-01
This NASA Technical Memorandum describes an optimal ensemble canonical correlation forecasting model for seasonal precipitation. Each individual forecast is based on canonical correlation analysis (CCA) in the spectral spaces whose bases are empirical orthogonal functions (EOF). The optimal weights in the ensemble forecasting crucially depend on the mean square error of each individual forecast. An estimate of the mean square error of a CCA prediction is also made using the spectral method. The error is decomposed onto EOFs of the predictand and decreases linearly according to the correlation between the predictor and predictand. Since the new CCA scheme is derived for continuous fields of predictor and predictand, an area-factor is automatically included. Thus our model is an improvement of the spectral CCA scheme of Barnett and Preisendorfer. The improvements include (1) the use of the area-factor, (2) the estimation of prediction error, and (3) the optimal ensemble of multiple forecasts. The new CCA model is applied to the seasonal forecasting of the United States (US) precipitation field. The predictor is the sea surface temperature (SST). The US Climate Prediction Center's reconstructed SST is used as the predictor's historical data. The US National Center for Environmental Prediction's optimally interpolated precipitation (1951-2000) is used as the predictand's historical data. Our forecast experiments show that the new ensemble canonical correlation scheme yields reasonable forecasting skill. For example, when using September-October-November SST to predict the next season's December-January-February precipitation, the spatial pattern correlations between the observed and predicted fields are positive in 46 of the 50 years of experiments. The positive correlations are close to or greater than 0.4 in 29 years, which indicates excellent performance of the forecasting model. The forecasting skill can be further enhanced when several predictors are used.
Effects of measurement error on estimating biological half-life
Caudill, S.P.; Pirkle, J.L.; Michalek, J.E.
1992-10-01
Direct computation of the observed biological half-life of a toxic compound in a person can lead to an undefined estimate when subsequent concentration measurements are greater than or equal to previous measurements. The likelihood of such an occurrence depends upon the length of time between measurements and the variance (intra-subject biological and inter-sample analytical) associated with the measurements. If the compound is lipophilic, the subject's percentage of body fat at the times of measurement can also affect this likelihood. We present formulas for computing a model-predicted half-life estimate and its variance, and we derive expressions for the effect of sample size, measurement error, time between measurements, and any relevant covariates on the variability in model-predicted half-life estimates. We also use statistical modeling to estimate the probability of obtaining an undefined half-life estimate and to compute the expected number of undefined half-life estimates for a sample from a study population. Finally, we illustrate our methods using data from a study of 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) exposure among 36 members of Operation Ranch Hand, the Air Force unit responsible for the aerial spraying of Agent Orange in Vietnam.
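The "undefined estimate" situation is easy to see in the two-measurement case: the direct estimate t_half = Δt · ln 2 / ln(c1/c2) breaks down whenever the second concentration is greater than or equal to the first. A small sketch (the function name and numbers are illustrative):

```python
import math

def observed_half_life(c1, c2, dt_years):
    """Direct half-life estimate from two concentration measurements.

    Returns None when the second measurement is >= the first: the
    implied decay rate is then zero or negative, so the half-life is
    undefined (the situation discussed in the abstract).
    """
    if c2 >= c1:
        return None
    decay_rate = math.log(c1 / c2) / dt_years  # first-order elimination
    return math.log(2) / decay_rate

print(observed_half_life(20.0, 10.0, 7.0))  # one half-life elapsed in 7 years
print(observed_half_life(10.0, 11.0, 7.0))  # concentration rose: undefined
```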
Verification of unfold error estimates in the UFO code
Fehl, D.L.; Biggs, F.
1996-07-01
Spectral unfolding is an inverse mathematical operation which attempts to obtain spectral source information from a set of tabulated response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the UFO (UnFold Operator) code. In addition to an unfolded spectrum, UFO also estimates the unfold uncertainty (error) induced by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5% (standard deviation), and 100 random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95% confidence level). A possible 10% bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums.
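The Monte Carlo error estimate described here can be mimicked on a toy linear unfold: perturb the data repeatedly with the prescribed Gaussian imprecision, redo the unfold each time, and take the spread of the results. Everything below (the response matrix, the least-squares "unfold", the spectrum) is a stand-in for the actual UFO setup, kept only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy response matrix (3 channels x 2 spectral bins) and "true" spectrum;
# illustrative stand-ins for the UFO response functions and the 10 keV
# blackbody used in the paper.
R = np.array([[1.0, 0.2],
              [0.5, 0.8],
              [0.1, 1.0]])
s_true = np.array([3.0, 2.0])
d_clean = R @ s_true

# Monte Carlo error estimate: redo the (least-squares) unfold on many
# data sets perturbed by the assumed 5% (1 sigma) imprecision.
n_trials = 1000
unfolds = np.empty((n_trials, 2))
for i in range(n_trials):
    d = d_clean * (1.0 + 0.05 * rng.standard_normal(3))
    unfolds[i], *_ = np.linalg.lstsq(R, d, rcond=None)

print("unfolded mean :", unfolds.mean(axis=0))
print("unfold sigma  :", unfolds.std(axis=0))
```

The per-bin standard deviations play the role of UFO's unfold uncertainty.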
Estimation of sequencing error rates in short reads
2012-01-01
Background: Short-read data from next-generation sequencing technologies are now being generated across a range of research projects. The fidelity of these data can be affected by several factors, and it is important to have simple and reliable approaches for monitoring it at the level of individual experiments.
Results: We developed a fast, scalable and accurate approach to estimating error rates in short reads, which has the added advantage of not requiring a reference genome. We build on the fundamental observation that there is a linear relationship between the copy number for a given read and the number of erroneous reads that differ from the read of interest by one or two bases. The slope of this relationship can be transformed to give an estimate of the error rate, both by read and by position. We present simulation studies as well as analyses of real data sets illustrating the precision and accuracy of this method, and we show that it is more accurate than alternatives that count the differences between the sample of interest and a reference genome. We show how this methodology led to the detection of mutations in the genome of the PhiX strain used for calibration of Illumina data. The proposed method is implemented in an R package, which can be downloaded from http://bcb.dfci.harvard.edu/~vwang/shadowRegression.html.
Conclusions: The proposed method can be used to monitor the quality of sequencing pipelines at the level of individual experiments without the use of reference genomes. Furthermore, having an estimate of the error rates gives one the opportunity to improve analyses and inferences in many applications of next-generation sequencing data. PMID:22846331
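The core observation admits a very short sketch: if "shadow" reads (one mismatch away) accumulate linearly with a read's copy number, the regression slope encodes the error rate. The first-order transformation slope ≈ L·e used below is a simplification of the published estimator, and the counts are invented:

```python
import numpy as np

# Hypothetical per-read counts: x = copy number of a read, y = number of
# shadow reads one mismatch away. For a small per-base error rate e and
# read length L, E[y] is roughly (L * e) * x, so the slope of y on x
# divided by L approximates e (first-order sketch, not the published
# shadow-regression estimator).
L = 100
x = np.array([50.0, 120.0, 300.0, 800.0, 2000.0])  # read copy numbers
e_true = 0.001                                      # 0.1% per base
y = (L * e_true) * x                                # noise-free shadows

slope = np.sum(x * y) / np.sum(x * x)  # least squares through the origin
print(f"estimated per-base error rate: {slope / L:.4f}")
```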
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte Carlo methods.
TECHNICAL DESIGN NOTE: Elimination of systematic errors in two-mode laser telemetry
NASA Astrophysics Data System (ADS)
Courde, C.; Lintz, M.; Brillet, A.
2009-12-01
We present a simple two-mode telemetry procedure which eliminates cyclic errors, to allow accurate absolute distance measurements. We show that phase drifts and cyclic errors are suppressed using a fast polarization switch that exchanges the roles of the reference and measurement paths. Preliminary measurements obtained using this novel design show a measurement stability better than 1 µm. Sources of residual noise and systematic errors are identified, and we expect that an improved but still simple version of the apparatus will allow accuracies in the nanometre range for absolute measurements of kilometre-scale distances.
Richardson Extrapolation Based Error Estimation for Stochastic Kinetic Plasma Simulations
NASA Astrophysics Data System (ADS)
Cartwright, Keigh
2014-10-01
To have a high degree of confidence in simulations one needs code verification, validation, solution verification and uncertainty quantification. This talk will focus on numerical error estimation for stochastic kinetic plasma simulations using the Particle-In-Cell (PIC) method and how it impacts code verification and validation. A technique is developed to determine the fully converged solution with error bounds from the stochastic output of a Particle-In-Cell code with multiple convergence parameters (e.g., Δt, Δx, and macro-particle weight). The core of this method is a multi-parameter regression based on a second-order error convergence model with arbitrary convergence rates. Stochastic uncertainties in the data set are propagated through the model using standard bootstrapping on redundant data sets, while a suite of nine regression models introduces uncertainties in the fitting process. These techniques are demonstrated on a Vlasov-Poisson Child-Langmuir diode, relaxation of an electron distribution to a Maxwellian due to collisions, and undriven sheaths and pre-sheaths. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under Contract DE-AC04-94AL85000.
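For a single convergence parameter with a known order, the regression described in the talk collapses to classic Richardson extrapolation, which yields both a converged value and an error bound. A minimal single-parameter sketch (the talk's method handles multiple parameters and fitted, arbitrary rates):

```python
def richardson(f_h, f_2h, p):
    """Extrapolate to zero step size assuming error ~ C * h**p; returns
    (extrapolated value, error estimate for the fine result f_h)."""
    err_h = (f_h - f_2h) / (2**p - 1)
    return f_h + err_h, abs(err_h)

# Toy data with a pure second-order error: f(h) = 1 + 0.5 * h**2
f_h = 1.0 + 0.5 * 0.1**2    # fine step h = 0.1
f_2h = 1.0 + 0.5 * 0.2**2   # coarse step 2h = 0.2
value, err = richardson(f_h, f_2h, p=2)
print(value, err)
```

For this toy error model the extrapolation recovers the exact limit 1.0, and the error estimate 0.005 matches the true error of the fine result.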
NASA Astrophysics Data System (ADS)
Martin, Peter R.; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2015-03-01
Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsy still has a substantial false negative rate. Therefore, we propose optimization of biopsy targeting to meet the clinician's desired tumor sampling probability, optimizing needle targets within each tumor and accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As a step toward this optimization, we obtained multiparametric MRI (mpMRI) and 3D TRUS images from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D surfaces that were registered to 3D TRUS. We estimated the probability, P, of obtaining a tumor sample with a single biopsy, and investigated the effects of systematic errors and anisotropy on P. Our experiments indicated that a biopsy system's lateral and elevational errors have a much greater effect on sampling probabilities, relative to its axial error. We have also determined that for a system with RMS error of 3.5 mm, tumors of volume 1.9 cm3 and smaller may require more than one biopsy core to ensure 95% probability of a sample with 50% core involvement, and tumors 1.0 cm3 and smaller may require more than two cores.
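The kind of sampling probability P discussed above can be estimated by Monte Carlo: draw needle-placement errors from the guidance-error distribution and count hits. The toy below assumes a spherical tumor and an isotropic per-axis Gaussian error, both simplifications of the paper's registered 3D tumor surfaces, anisotropic system errors, and core-involvement criterion:

```python
import numpy as np

rng = np.random.default_rng(42)

def hit_probability(tumor_radius_mm, sigma_mm, n=200_000):
    """Monte Carlo estimate of the probability that a single biopsy core
    hits a spherical tumor when the needle tip lands at the tumor centre
    plus an isotropic Gaussian error (sigma per axis). Toy geometry only."""
    errors = sigma_mm * rng.standard_normal((n, 3))
    dist = np.linalg.norm(errors, axis=1)  # miss distance of each trial
    return float(np.mean(dist <= tumor_radius_mm))

# A 1 cm^3 sphere has radius ~6.2 mm; assume a 3.5 mm error per axis.
r = (3.0 * 1000.0 / (4.0 * np.pi)) ** (1.0 / 3.0)
print(f"radius = {r:.1f} mm, P(single-core hit) = {hit_probability(r, 3.5):.2f}")
```

Even in this idealized geometry a single core falls well short of a 95% sampling probability, which is the motivation for optimizing multi-core targeting.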
Real-Time Parameter Estimation Using Output Error
NASA Technical Reports Server (NTRS)
Grauer, Jared A.
2014-01-01
Output-error parameter estimation, normally a post-flight batch technique, was applied to real-time dynamic modeling problems. Variations on the traditional algorithm were investigated with the goal of making the method suitable for operation in real time. Implementation recommendations are given that are dependent on the modeling problem of interest. Application to flight test data showed that accurate parameter estimates and uncertainties for the short-period dynamics model were available every 2 s using time domain data, or every 3 s using frequency domain data. The data compatibility problem was also solved in real time, providing corrected sensor measurements every 4 s. If uncertainty corrections for colored residuals are omitted, this rate can be increased to every 0.5 s.
Analysis of systematic errors of the ASM/RXTE monitor and GT-48 γ-ray telescope
NASA Astrophysics Data System (ADS)
Fidelis, V. V.
2011-06-01
The observational data concerning variations of the light curves of supernova remnants (the Crab Nebula, Cassiopeia A, Tycho Brahe) and the pulsar Vela over a 14-day scale, which may be attributed to systematic errors of the ASM/RXTE monitor, are presented. The experimental systematic errors of the GT-48 γ-ray telescope in the mono mode of operation were also determined. For this, the observational data of TeV J2032+4130 (Cyg γ-2, according to the Crimean version) were used; the stationary nature of its γ-ray emission has been confirmed by long-term observations performed with HEGRA and MAGIC. The results of this research allow us to draw the following conclusions: (1) light curves of supernova remnants averaged over long observing periods have false statistically significant flux variations, (2) the level of systematic errors is proportional to the registered flux and decreases with increasing temporal scale of averaging, (3) the light curves of sources may be modulated by the year period, and (4) the systematic errors of the GT-48 γ-ray telescope, in total, caused by observations in the mono mode and data processing with the stereo algorithm, come to 0.12 min⁻¹.
NASA Astrophysics Data System (ADS)
Archibald, Rick
2013-10-01
We have developed a fast method that can give high-order error estimates of piecewise smooth functions in high dimensions at low computational cost. The method uses polynomial annihilation to estimate the smoothness of local regions of arbitrary samples in stochastic simulations. We compare the error estimation of this method to Gaussian process error estimation techniques.
Dede, Adam J.O.; Squire, Larry R.; Wixted, John T.
2014-01-01
For more than a decade, the high threshold dual process (HTDP) model has served as a guide for studying the functional neuroanatomy of recognition memory. The HTDP model's utility has been that it provides quantitative estimates of recollection and familiarity, two processes thought to support recognition ability. Important support for the model has been the observation that it fits experimental data well. The continuous dual process (CDP) model also fits experimental data well. However, this model does not provide quantitative estimates of recollection and familiarity, making it less immediately useful for illuminating the functional neuroanatomy of recognition memory. These two models are incompatible and cannot both be correct, and an alternative method of model comparison is needed. We tested for systematic errors in each model's ability to fit recognition memory data from four independent data sets from three different laboratories. Across participants and across data sets, the HTDP model (but not the CDP model) exhibited systematic error. In addition, the pattern of errors exhibited by the HTDP model was predicted by the CDP model. The findings were the same at both the group and individual levels of analysis. We conclude that the CDP model provides a better account of recognition memory than the HTDP model. PMID:24184486
Models and error analyses in urban air quality estimation
NASA Technical Reports Server (NTRS)
Englar, T., Jr.; Diamante, J. M.; Jazwinski, A. H.
1976-01-01
Estimation theory has been applied to a wide range of aerospace problems. Application of this expertise outside the aerospace field has been extremely limited, however. This paper describes the use of covariance error analysis techniques in evaluating the accuracy of pollution estimates obtained from a variety of concentration measuring devices. It is shown how existing software developed for aerospace applications can be applied to the estimation of pollution through the processing of measurement types involving a range of spatial and temporal responses. The modeling of pollutant concentration by meandering Gaussian plumes is described in some detail. Time averaged measurements are associated with a model of the average plume, using some of the same state parameters and thus avoiding the problem of state memory. The covariance analysis has been implemented using existing batch estimation software. This usually involves problems in handling dynamic noise; however, the white dynamic noise has been replaced by a band-limited process which can be easily accommodated by the software.
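The meandering-plume modeling described above builds on the standard Gaussian plume formula; for reference, here is a small implementation of the textbook ground-reflection form (all numbers in the example are illustrative, and the dispersion widths are taken as given rather than computed from downwind distance and stability class):

```python
import math

def plume_concentration(y, z, Q, u, sigma_y, sigma_z, H):
    """Textbook steady-state Gaussian plume with ground reflection:
    source strength Q (g/s), wind speed u (m/s), lateral/vertical
    dispersion widths sigma_y, sigma_z (m) at the downwind distance of
    interest, effective stack height H (m). Illustrates the kind of
    concentration model the paper averages, not its meandering-plume
    state model."""
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - H)**2 / (2 * sigma_z**2))
                + math.exp(-(z + H)**2 / (2 * sigma_z**2)))  # image source
    return Q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Ground-level centreline concentration for an illustrative 50 m stack.
print(plume_concentration(y=0.0, z=0.0, Q=100.0, u=5.0,
                          sigma_y=80.0, sigma_z=40.0, H=50.0))
```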
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity.
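The adaptive strategy (estimate the local error over a step, accept or reject, and rescale the timestep to stay under tolerance) can be illustrated generically with step doubling on a scalar model problem. This is a sketch of the control loop only, not of DFSP or the RDME:

```python
def split_step(y, dt):
    # stand-in for one operator-splitting step of an RDME solver;
    # toy model problem dy/dt = -y, advanced by forward Euler
    return y + dt * (-y)

def adaptive_integrate(y0, t_end, tol=1e-4, dt=0.5):
    """Step-doubling error control: compare one dt step against two dt/2
    steps; the difference estimates the local error of a first-order
    method. Accept when below tol, then rescale dt. Generic sketch of
    the adaptive strategy, not DFSP itself."""
    t, y = 0.0, y0
    while t < t_end:
        dt = min(dt, t_end - t)
        coarse = split_step(y, dt)
        fine = split_step(split_step(y, dt / 2), dt / 2)
        err = abs(fine - coarse)
        if err <= tol:                 # accept the step
            t, y = t + dt, fine
        # rescale dt toward the tolerance (safety factor 0.9, clipped)
        dt *= 0.9 * min(4.0, max(0.1, (tol / max(err, 1e-15)) ** 0.5))
    return y

print(adaptive_integrate(1.0, 1.0))  # exact answer is exp(-1) ~ 0.3679
```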
Practical Aspects of the Equation-Error Method for Aircraft Parameter Estimation
NASA Technical Reports Server (NTRS)
Morelli, Eugene a.
2006-01-01
Various practical aspects of the equation-error approach to aircraft parameter estimation were examined. The analysis was based on simulated flight data from an F-16 nonlinear simulation, with realistic noise sequences added to the computed aircraft responses. This approach exposes issues related to the parameter estimation techniques and results, because the true parameter values are known for simulation data. The issues studied include differentiating noisy time series, maximum likelihood parameter estimation, biases in equation-error parameter estimates, accurate computation of estimated parameter error bounds, comparisons of equation-error parameter estimates with output-error parameter estimates, analyzing data from multiple maneuvers, data collinearity, and frequency-domain methods.
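At its core, the equation-error approach is linear least squares on the state equation: regress a measured state derivative on the measured states and controls. A toy version with a simulated short-period-like pitch equation (the derivative is taken as measured directly, sidestepping the noisy-differentiation issue the paper studies; all numerical values are illustrative, not F-16 derivatives):

```python
import numpy as np

rng = np.random.default_rng(7)

# Equation-error setup on a toy pitch-rate equation,
#   qdot = M_alpha * alpha + M_q * q + M_de * de,
# with illustrative values for the dimensional derivatives.
n = 500
alpha = rng.uniform(-0.1, 0.1, n)        # angle of attack (rad)
q = rng.uniform(-0.2, 0.2, n)            # pitch rate (rad/s)
de = rng.uniform(-0.05, 0.05, n)         # elevator deflection (rad)
theta_true = np.array([-4.0, -1.5, -6.0])

X = np.column_stack([alpha, q, de])
qdot = X @ theta_true + 0.01 * rng.standard_normal(n)  # noisy "measurement"

# the equation-error estimate is ordinary least squares on the regressors
theta_hat, *_ = np.linalg.lstsq(X, qdot, rcond=None)
print("estimated [M_alpha, M_q, M_de]:", theta_hat)
```

Because the true parameters are known here, as with the simulated F-16 data in the paper, the estimation error can be measured directly.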
Variance estimation for systematic designs in spatial surveys.
Fewster, R M
2011-12-01
In spatial surveys for estimating the density of objects in a survey region, systematic designs will generally yield lower variance than random designs. However, estimating the systematic variance is well known to be a difficult problem. Existing methods tend to overestimate the variance, so although the variance is genuinely reduced, it is over-reported, and the gain from the more efficient design is lost. The current approaches to estimating a systematic variance for spatial surveys are to approximate the systematic design by a random design, or approximate it by a stratified design. Previous work has shown that approximation by a random design can perform very poorly, while approximation by a stratified design is an improvement but can still be severely biased in some situations. We develop a new estimator based on modeling the encounter process over space. The new "striplet" estimator has negligible bias and excellent precision in a wide range of simulation scenarios, including strip-sampling, distance-sampling, and quadrat-sampling surveys, and including populations that are highly trended or have strong aggregation of objects. We apply the new estimator to survey data for the spotted hyena (Crocuta crocuta) in the Serengeti National Park, Tanzania, and find that the reported coefficient of variation for estimated density is 20% using approximation by a random design, 17% using approximation by a stratified design, and 11% using the new striplet estimator. This large reduction in reported variance is verified by simulation. PMID:21534940
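The basic premise, that systematic designs yield lower variance than random ones on trended populations, is easy to demonstrate by simulation (the population and quadrat layout below are invented for illustration; the striplet estimator itself is not implemented here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Trended population: object density at 100 quadrat sites along a strip.
density = np.linspace(2.0, 18.0, 100)
true_mean = density.mean()

n, reps = 10, 5000
# random design: n sites drawn without replacement
random_est = [density[rng.choice(100, n, replace=False)].mean()
              for _ in range(reps)]
# systematic design: every 10th site from a random start
systematic_est = [density[np.arange(s, 100, 10)].mean()
                  for s in rng.integers(0, 10, reps)]

print("random design variance    :", np.var(random_est))
print("systematic design variance:", np.var(systematic_est))
```

Both designs are unbiased for the mean density, but on this trended population the systematic design's variance is roughly an order of magnitude smaller, which is exactly the gain that over-reported variance estimates would squander.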
Zhu, Fangqiang; Hummer, Gerhard
2012-02-01
The weighted histogram analysis method (WHAM) has become the standard technique for the analysis of umbrella sampling simulations. In this article, we address the challenges (1) of obtaining fast and accurate solutions of the coupled nonlinear WHAM equations, (2) of quantifying the statistical errors of the resulting free energies, (3) of diagnosing possible systematic errors, and (4) of optimally allocating the computational resources. Traditionally, the WHAM equations are solved by a fixed-point direct iteration method, despite poor convergence and possible numerical inaccuracies in the solutions. Here, we instead solve the mathematically equivalent problem of maximizing a target likelihood function, by using superlinear numerical optimization algorithms with a significantly faster convergence rate. To estimate the statistical errors in one-dimensional free energy profiles obtained from WHAM, we note that for densely spaced umbrella windows with harmonic biasing potentials, the WHAM free energy profile can be approximated by a coarse-grained free energy obtained by integrating the mean restraining forces. The statistical errors of the coarse-grained free energies can be estimated straightforwardly and then used for the WHAM results. A generalization to multidimensional WHAM is described. We also propose two simple statistical criteria to test the consistency between the histograms of adjacent umbrella windows, which help identify inadequate sampling and hysteresis in the degrees of freedom orthogonal to the reaction coordinate. Together, the estimates of the statistical errors and the diagnostics of inconsistencies in the potentials of mean force provide a basis for the efficient allocation of computational resources in free energy simulations. PMID:22109354
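The error-estimation idea for densely spaced windows can be sketched directly: integrate the mean restraining forces to get a coarse-grained profile and propagate the per-window standard errors through the quadrature. The numbers below are invented, the sign convention of the profile is chosen for illustration, and treating the windows as independent in the propagation is an approximation (adjacent trapezoids share a window):

```python
import numpy as np

# Hypothetical umbrella-sampling summary: window centres along the
# reaction coordinate, mean restraining force in each window, and the
# standard error of each mean force (arbitrary units).
centres = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
mean_force = np.array([0.0, 1.0, 0.0, -1.0, 0.0])
sem_force = np.array([0.05, 0.05, 0.05, 0.05, 0.05])

# Coarse-grained profile: F(x) = integral of the mean force (trapezoid
# rule), with per-window error propagation through the quadrature weights.
dx = np.diff(centres)
increments = 0.5 * (mean_force[:-1] + mean_force[1:]) * dx
free_energy = np.concatenate(([0.0], np.cumsum(increments)))
var_inc = (0.5 * dx) ** 2 * (sem_force[:-1] ** 2 + sem_force[1:] ** 2)
err = np.sqrt(np.concatenate(([0.0], np.cumsum(var_inc))))

print("profile :", free_energy)
print("std err :", err)
```

The error bars grow monotonically away from the (arbitrary) zero of the profile, which is the qualitative behaviour the paper exploits when attaching uncertainties to WHAM profiles.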
Error estimation for CFD aeroheating prediction under rarefied flow condition
NASA Astrophysics Data System (ADS)
Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian
2014-12-01
Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit of a much lower computational cost than DSMC simulation. Correspondingly, an increasing error arises in the aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of the heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ? is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression of the Burnett equation. A DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ?, compared with two other parameters, Kn? and Ma?Kn?.
Fix, MK; Volken, W; Frei, D; Terribilini, D; Dal Pra, A; Schmuecking, M; Manser, P
2014-06-15
Purpose: Treatment plan evaluations in radiotherapy currently ignore the dosimetric impact of setup uncertainties. Determining the robustness of a plan against systematic errors is, however, computationally intensive. This work investigates interpolation schemes to quantify the robustness of treatment plans against systematic errors in terms of efficiency and accuracy. Methods: The impact of systematic errors on dose distributions for patient treatment plans is determined by using the Swiss Monte Carlo Plan (SMCP). Errors in all translational directions are considered, ranging from -3 to +3 mm in 1 mm steps. For each systematic error a full MC dose calculation is performed, leading to 343 dose calculations, which are used as benchmarks. The interpolation uses only a subset of the 343 calculations, namely 9, 15 or 27, and determines all dose distributions by trilinear interpolation. This procedure is applied to a prostate and a head and neck case using Volumetric Modulated Arc Therapy with 2 arcs. The relative differences of the dose volume histograms (DVHs) of the target and the organs at risk are compared. Finally, the interpolation schemes are used to compare the robustness of 4- versus 2-arc head and neck treatment plans. Results: Relative local differences of the DVHs increase with decreasing number of dose calculations used in the interpolation. The mean deviations are <1%, 3.5% and 6.5% for subsets of 27, 15 and 9 dose calculations, respectively. Thereby the dose computation times are reduced by factors of 13, 25 and 43, respectively. The comparison of the 4- versus 2-arc plan shows a decrease in robustness; however, this is outweighed by the dosimetric improvements. Conclusion: The results of this study suggest that the use of trilinear interpolation to determine the robustness of treatment plans can remarkably reduce the number of dose calculations. This work was supported by Varian Medical Systems.
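The interpolation scheme described above is straightforward to prototype. The sketch below is an illustration, not the SMCP implementation: it replaces the Monte Carlo dose engine with a hypothetical smooth dose metric, evaluates it at a 3×3×3 subset of setup shifts, and trilinearly interpolates to the full 7×7×7 benchmark grid to check the relative error.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical smooth surrogate for "dose metric vs. setup shift" (shifts in mm).
def dose_metric(dx, dy, dz):
    return 100.0 - 0.5 * dx**2 - 0.8 * dy**2 - 0.3 * dz**2 + 0.1 * dx * dy

coarse = np.array([-3.0, 0.0, 3.0])            # 3^3 = 27 "full MC" calculations
D = np.array([[[dose_metric(x, y, z) for z in coarse]
               for y in coarse] for x in coarse])
interp = RegularGridInterpolator((coarse, coarse, coarse), D)  # trilinear by default

fine = np.arange(-3.0, 3.1, 1.0)               # the 7^3 = 343 benchmark shifts
pts = np.array(np.meshgrid(fine, fine, fine, indexing="ij")).reshape(3, -1).T
est = interp(pts)                              # interpolated dose metric
ref = np.array([dose_metric(*p) for p in pts]) # stand-in for the 343 benchmarks
rel_err = np.abs(est - ref) / np.abs(ref)
```

For this smooth surrogate the worst-case interpolation error stays at the few-percent level while only 27 of the 343 evaluations are needed, qualitatively matching the accuracy/cost trade-off the abstract reports.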
On the correspondence between short- and long-timescale systematic errors in the TAMIP and AMIP
NASA Astrophysics Data System (ADS)
Ma, H.; Xie, S.; Boyle, J. S.; Klein, S. A.
2012-12-01
The correspondence between short- and long-term systematic errors in climate models from the transpose-AMIP (TAMIP, short-term hindcasts) and AMIP (long-term free-running) archives is systematically examined with a focus on precipitation, clouds and radiation. The data from TAMIP are based on 16 5-day hindcast ensembles from the tamip200907 experiment during YOTC, and the data from AMIP are based on the July-August mean of 1979-2008. Our results suggest that most systematic errors apparent in the long-term climate runs, particularly those associated with moist processes, also appear in the hindcasts in all the climate models (CAM4, CAM5, CNRM5, HadGEM2-A, IPSL, and MIROC5). The errors, especially in CAM4/5 and MIROC5, grow with the hindcast lead time and typically saturate after a few days of hindcasts with amplitudes comparable to the climate errors. Examples are excessive precipitation in much of the tropics and overestimation of net absorbed shortwave radiation in the stratocumulus cloud decks over the eastern subtropical oceans and the Southern Ocean at about 60°S. This suggests that these systematic errors likely result from model parameterizations, since the large-scale flows remain close to observations in the first few days of the hindcasts. We will also discuss possible issues of initial spin-up and ensemble members for hindcast experiments in this presentation. (This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.)
NASA Astrophysics Data System (ADS)
Birk, Manfred; Wagner, Georg
2016-02-01
The Voigt profile commonly used in radiative transfer modeling of Earth's and planets' atmospheres for remote sensing/climate modeling produces systematic errors that have so far not been accounted for. Saturated lines are systematically too narrow when calculated from pressure broadening parameters based on the analysis of laboratory data with the Voigt profile. This is caused by line narrowing effects, which lead to systematically too small fitted broadening parameters when the Voigt profile is applied. These effective values are still valid for modeling non-saturated lines with sufficient accuracy. Saturated lines, dominated by the wings of the line profile, are sufficiently accurately modeled with a Voigt profile with the correct broadening parameters and are thus systematically too narrow when calculated with the effective values. The systematic error was quantified by mid-infrared laboratory spectroscopy of the water ν2 fundamental. Correct Voigt-profile-based pressure broadening parameters for saturated lines were 3-4% larger than the effective ones in the spectroscopic database. Impacts on remote sensing and climate modeling are expected. Combining saturated and non-saturated lines in the spectroscopic analysis will quantify line narrowing with unprecedented precision.
On GPS Water Vapour estimation and related errors
NASA Astrophysics Data System (ADS)
Antonini, Andrea; Ortolani, Alberto; Rovai, Luca; Benedetti, Riccardo; Melani, Samantha
2010-05-01
Water vapour (WV) is one of the most important constituents of the atmosphere: it plays a crucial role in the Earth's radiation budget through absorption of both the incoming shortwave and the outgoing longwave radiation, and it is one of the main greenhouse gases of the atmosphere, by far the one with the highest concentration. In addition, moisture and latent heat are transported through the WV phase, which is one of the driving factors of weather dynamics, feeding the evolution of cloud systems. Accurate, dense and frequent sampling of WV at different scales is consequently of great importance for climatology and meteorology research as well as for operational weather forecasting. Since the development of satellite positioning systems, it has been clear that the troposphere and its WV content are a source of delay in the positioning signal, in other words a source of error in the positioning process, or in turn a source of information in meteorology. The use of the GPS (Global Positioning System) signal for WV estimation has increased in recent years, starting from measurements collected by ground-fixed dual-frequency geodetic GPS stations. This technique for processing the GPS data is based on measuring the signal travel time along the satellite-receiver path and then processing the signal to filter out all delay contributions except the tropospheric one. Once the tropospheric delay is computed, the wet and dry parts are decoupled under some hypotheses on the tropospheric structure and/or through ancillary information on pressure and temperature. The processing chain normally aims at producing a vertical Integrated Water Vapour (IWV) value. The other, non-tropospheric delays are due to ionospheric free electrons, relativistic effects, multipath effects, transmitter and receiver instrumental biases, and signal bending. The total effect is a delay in the signal travel time with respect to the geometrical straight path.
The GPS signal has the advantage of being nearly costless and practically continuous (a measurement every second) with respect to atmospheric dynamics. The spatial resolution depends on the number and spacing (i.e., density) of the ground-fixed stations and in principle can be very high (and it is certainly increasing). Problems can arise from the errors made in decoupling the various delay components and from the approximations assumed in computing the IWV from the wet delay component. Such errors are often "masked" by the use of the available software packages for GPS data processing; as a consequence, the errors associated with the final WV products are more often obtained from a posteriori validation than derived from rigorous error propagation analyses. In this work we present a technique to compute the different components necessary to retrieve WV measurements from the GPS signal, with a critical analysis of all approximations and errors made in the processing procedure, also in view of the opportunities that the European GALILEO system will bring to this field.
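The final step of the processing chain, converting the zenith wet delay (ZWD) into IWV, can be written compactly. The sketch below uses the standard Bevis-style conversion with a water-vapour weighted mean column temperature Tm; the refractivity constants are typical literature values and should be checked against whichever source a given processing chain actually adopts.

```python
# Refractivity constants (Bevis et al.-style values; exact numbers vary by source)
RV = 461.5          # specific gas constant of water vapour, J kg^-1 K^-1
K2P = 0.221         # k2' refractivity constant, K Pa^-1  (22.1 K hPa^-1)
K3 = 3.739e3        # k3 refractivity constant, K^2 Pa^-1 (3.739e5 K^2 hPa^-1)

def iwv_from_zwd(zwd_m: float, tm_kelvin: float) -> float:
    """Convert a zenith wet delay (m) to integrated water vapour (kg m^-2),
    given the water-vapour weighted mean temperature Tm of the column."""
    kappa = 1e-6 * RV * (K2P + K3 / tm_kelvin)   # metres of delay per kg m^-2
    return zwd_m / kappa
```

As a sanity check, a 10 cm zenith wet delay with Tm ≈ 270 K maps to roughly 15 kg m⁻² of IWV, i.e. about 15 mm of precipitable water, consistent with the rule-of-thumb conversion factor of ~0.15.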
Model Error Estimation for the CPTEC Eta Model
NASA Technical Reports Server (NTRS)
Tippett, Michael K.; daSilva, Arlindo
1999-01-01
Statistical data assimilation systems require the specification of forecast and observation error statistics. Forecast error is due to model imperfections and differences between the initial condition and the actual state of the atmosphere. Practical four-dimensional variational (4D-Var) methods try to fit the forecast state to the observations and assume that the model error is negligible. Here, with a number of simplifying assumptions, a framework is developed for isolating the model error given the forecast error at two lead times. Two definitions are proposed for the Talagrand ratio tau, the fraction of the forecast error due to model error rather than initial condition error. Data from the CPTEC Eta Model running operationally over South America are used to calculate forecast error statistics and lower bounds for tau.
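The two-lead-time idea can be illustrated with a deliberately simple error-growth model. The function below is a hypothetical sketch, not the paper's framework: it assumes forecast error variance grows as var(t) = var_ic + q·t, i.e. a constant initial-condition contribution plus linearly accumulating model error, so that error variances at two lead times suffice to separate the two terms and form a Talagrand-style ratio.

```python
def model_error_fraction(var_t1, var_t2, t1, t2, t):
    """Illustrative two-lead-time decomposition (a strong simplification):
    assume var(t) = var_ic + q * t, where var_ic is the initial-condition
    error variance and q is the model-error growth rate.  Given forecast
    error variances at lead times t1 and t2, return the Talagrand-style
    ratio tau(t) = q * t / var(t), the fraction of forecast error variance
    attributable to model error at lead time t."""
    q = (var_t2 - var_t1) / (t2 - t1)   # model-error growth rate
    var_ic = var_t1 - q * t1            # implied initial-condition variance
    return (q * t) / (var_ic + q * t)
```

Under this toy model tau grows monotonically with lead time and tends to 1, reflecting model error eventually dominating a fixed initial-condition error.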
Adaptive error covariances estimation methods for ensemble Kalman filters
NASA Astrophysics Data System (ADS)
Zhen, Yicun; Harlim, John
2015-08-01
This paper presents a computationally fast algorithm for estimating both the system and observation noise covariances of nonlinear dynamics that can be used in an ensemble Kalman filtering framework. The new method is a modification of Belanger's recursive method that avoids the expensive inversion of the error covariance matrices of products of innovation processes at different lags when the number of observations becomes large. When we use only products of innovation processes up to one lag, the computational cost is indeed comparable to that of the recently proposed method of Berry and Sauer. However, our method is more flexible since it allows for using information from products of innovation processes of more than one lag. Extensive numerical comparisons between the proposed method and both the original Belanger scheme and the Berry-Sauer scheme are shown in various examples, ranging from low-dimensional linear and nonlinear systems of SDEs to the 40-dimensional stochastically forced Lorenz-96 model. Our numerical results suggest that the proposed scheme is as accurate as the original Belanger scheme on low-dimensional problems and has a wider range of accurate estimates than the Berry-Sauer method on the L-96 example.
Detecting Positioning Errors and Estimating Correct Positions by Moving Window.
Song, Ha Yoon; Lee, Jun Seok
2015-01-01
In recent times, improvements in smart mobile devices have led to new functionalities related to their embedded positioning abilities. Many related applications that use positioning data have been introduced and are widely being used. However, the positioning data acquired by such devices are prone to erroneous values caused by environmental factors. In this research, a detection algorithm is implemented to detect erroneous data over a continuous positioning data set with several options. Our algorithm is based on a moving window for speed values derived by consecutive positioning data. Both the moving average of the speed and standard deviation in a moving window compose a moving significant interval at a given time, which is utilized to detect erroneous positioning data along with other parameters by checking the newly obtained speed value. In order to fulfill the designated operation, we need to examine the physical parameters and also determine the parameters for the moving windows. Along with the detection of erroneous speed data, estimations of correct positioning are presented. The proposed algorithm first estimates the speed, and then the correct positions. In addition, it removes the effect of errors on the moving window statistics in order to maintain accuracy. Experimental verifications based on our algorithm are presented in various ways. We hope that our approach can help other researchers with regard to positioning applications and human mobility research. PMID:26624282
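The moving-window logic described above can be sketched directly. The code below is a minimal illustration of the idea, not the paper's algorithm: the window length, the k-sigma width of the "significant interval", the standard-deviation floor, and the hard physical speed cap are all illustrative parameter choices, and flagged fixes are kept out of the window statistics so that errors do not corrupt subsequent decisions.

```python
import math
from collections import deque

def detect_errors(points, window=10, k=3.0, min_std=1.0, vmax=83.0):
    """Flag positioning fixes whose implied speed falls outside the moving
    significant interval mean +/- k*std, or above a hard cap vmax (~300 km/h).
    points: list of (t_seconds, x_m, y_m) with strictly increasing timestamps.
    Returns the list of flagged indices."""
    speeds = deque(maxlen=window)   # speeds of recently accepted fixes
    flagged = []
    last = points[0]                # last accepted fix is the reference
    for i, (t, x, y) in enumerate(points[1:], start=1):
        v = math.hypot(x - last[1], y - last[2]) / (t - last[0])
        if speeds:
            mu = sum(speeds) / len(speeds)
            sd = max(min_std,
                     (sum((s - mu) ** 2 for s in speeds) / len(speeds)) ** 0.5)
            ok = abs(v - mu) <= k * sd and v <= vmax
        else:
            ok = v <= vmax          # no statistics yet: physical cap only
        if ok:
            speeds.append(v)
            last = (t, x, y)        # accepted fix becomes the new reference
        else:
            flagged.append(i)       # erroneous fix: excluded from the window
    return flagged

# Demo: a 1 m/s walk with one wildly wrong fix injected at index 10.
pts = [(float(t), float(t), 0.0) for t in range(20)]
pts[10] = (10.0, 1000.0, 0.0)
flagged = detect_errors(pts)
```

Estimating a corrected position for a flagged fix (e.g. by propagating the last accepted position with the window-mean speed) would be the natural next step, as the abstract describes.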
Errors of Remapping of Radar Estimates onto Cartesian Coordinates
NASA Astrophysics Data System (ADS)
Sharif, H. O.; Ogden, F. L.
2014-12-01
Recent upgrades to operational radar rainfall products in terms of quality and resolution call for re-examination of the factors that contribute to the uncertainty of radar rainfall estimation. Remapping or gridding of radar polar observations onto Cartesian coordinates is implemented using various methods, and is often applied when radar estimates are compared against rain gauge observations, in hydrologic applications, or for merging data from different radars. However, assuming perfect radar observations, many of the widely used remapping methodologies do not conserve mass for the rainfall rate field. Research has suggested that optimal remapping should select all polar bins falling within or intersecting a Cartesian grid and assign them weights based on the proportion of each individual bin's area falling within the grid. However, to reduce computational demand practitioners use a variety of approximate remapping approaches. The most popular approximate approaches used are those based on extracting information from radar bins whose centers fall within a certain distance from the center of the Cartesian grid. This paper introduces a mass-conserving method for remapping, which we call "precise remapping", and evaluates it by comparing against two other commonly used remapping methods based on areal weighting and distance. Results show that the choice of the remapping method can lead to large errors in grid-averaged rainfall accumulations.
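The mass-conservation point can be demonstrated numerically. The sketch below stands in for exact polygon intersection with Monte Carlo sub-sampling of each polar bin (an approximation chosen for brevity, not the paper's "precise remapping"): because every sub-sample carries an equal share of its bin's water mass, the remapped Cartesian field conserves the domain total by construction, which a nearest-center assignment generally would not.

```python
import numpy as np

rng = np.random.default_rng(1)
# Polar radar bins: 20 range gates x 36 azimuths (1 km x 10 deg), random rates (mm/h)
r_edges = np.linspace(1.0, 21.0, 21)
az_edges = np.deg2rad(np.linspace(0.0, 360.0, 37))
rate = rng.uniform(0.0, 10.0, (20, 36))

cell = 2.0                      # Cartesian target grid: 2 km cells over the domain
nx = ny = 22
x0 = y0 = -22.0

def cell_index(x, y):
    return ((x - x0) // cell).astype(int), ((y - y0) // cell).astype(int)

grid = np.zeros((nx, ny))       # accumulated water mass per cell (rate * km^2)
total_in = 0.0
nsub = 100                      # sub-samples per bin approximate the bin area
for a in range(20):
    for b in range(36):
        area = 0.5 * (r_edges[a + 1] ** 2 - r_edges[a] ** 2) * (az_edges[b + 1] - az_edges[b])
        total_in += rate[a, b] * area
        # Draw points uniform in area within the polar bin
        rr = np.sqrt(rng.uniform(r_edges[a] ** 2, r_edges[a + 1] ** 2, nsub))
        th = rng.uniform(az_edges[b], az_edges[b + 1], nsub)
        i, j = cell_index(rr * np.cos(th), rr * np.sin(th))
        np.add.at(grid, (i, j), rate[a, b] * area / nsub)
```

Increasing `nsub` drives this scheme toward the exact area-weighted remapping; distance-based schemes that pick bins by center location have no such conservation guarantee.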
Kriging regression of PIV data using a local error estimate
NASA Astrophysics Data System (ADS)
de Baar, Jouke H. S.; Percin, Mustafa; Dwight, Richard P.; van Oudheusden, Bas W.; Bijl, Hester
2014-01-01
The objective of the method described in this work is to provide an improved reconstruction of an original flow field from experimental velocity data obtained with particle image velocimetry (PIV) technique, by incorporating the local accuracy of the PIV data. The postprocessing method we propose is Kriging regression using a local error estimate (Kriging LE). In Kriging LE, each velocity vector must be accompanied by an estimated measurement uncertainty. The performance of Kriging LE is first tested on synthetically generated PIV images of a two-dimensional flow of four counter-rotating vortices with various seeding and illumination conditions. Kriging LE is found to increase the accuracy of interpolation to a finer grid dramatically at severe reflection and low seeding conditions. We subsequently apply Kriging LE for spatial regression of stereo-PIV data to reconstruct the three-dimensional wake of a flapping-wing micro air vehicle. By qualitatively comparing the large-scale vortical structures, we show that Kriging LE performs better than cubic spline interpolation. By quantitatively comparing the interpolated vorticity to unused measurement data at intermediate planes, we show that Kriging LE outperforms conventional Kriging as well as cubic spline interpolation.
Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.; Polpitiya, Ashoka D.; Purvine, Samuel O.; Anderson, Gordon A.; Camp, David G.; Smith, Richard D.
2009-12-16
Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations for parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, then fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
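The correction idea, fit a smooth model of the systematic parent-ion mass error from confident identifications and subtract it, can be sketched as below. The linear-in-m/z error model and the synthetic data are illustrative simplifications; DtaRefinery itself supports richer error models than this.

```python
import numpy as np

def fit_systematic_ppm(obs_mz, theo_mz):
    """Least-squares fit of the systematic mass error (ppm) as a linear
    function of m/z, using confidently matched identifications."""
    ppm = (obs_mz - theo_mz) / theo_mz * 1e6
    A = np.vstack([np.ones_like(obs_mz), obs_mz]).T
    coef, *_ = np.linalg.lstsq(A, ppm, rcond=None)
    return coef                     # (intercept_ppm, slope_ppm_per_mz)

def apply_correction(mz, coef):
    """Remove the fitted systematic error from observed parent masses."""
    ppm = coef[0] + coef[1] * mz
    return mz / (1.0 + ppm * 1e-6)

# Synthetic demo: theoretical masses with an injected 3-7 ppm linear drift
theo = np.linspace(400.0, 2000.0, 50)
obs = theo * (1.0 + (3.0 + 0.002 * theo) * 1e-6)
coef = fit_systematic_ppm(obs, theo)
corrected = apply_correction(obs, coef)
```

After correction the residual error is far below the narrow parent-mass tolerance windows the abstract refers to, which is exactly what makes those windows usable for filtering false identifications.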
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. 
However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
Beckerman, M.; Oblow, E.M.
1988-04-01
A methodology has been developed for the treatment of systematic errors which arise in the processing of sparse sensor data. We present a detailed application of this methodology to the construction from wide-angle sonar sensor data of navigation maps for use in autonomous robotic navigation. In the methodology we introduce a four-valued labelling scheme and a simple logic for label combination. The four labels, conflict, occupied, empty and unknown, are used to mark the cells of the navigation maps; the logic allows for the rapid updating of these maps as new information is acquired. The systematic errors are treated by relabelling conflicting pixel assignments. Most of the new labels are obtained from analyses of the characteristic patterns of conflict which arise during the information processing. The remaining labels are determined by imposing an elementary consistent-labelling condition. 26 refs., 9 figs.
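The four-valued labelling scheme lends itself to a tiny combination table. The sketch below encodes one plausible reading of the logic described above (UNKNOWN as identity, agreement preserved, disagreement marked CONFLICT, and CONFLICT absorbing until resolved); the subsequent relabelling of conflicts from characteristic conflict patterns is beyond this sketch.

```python
from enum import Enum

class Label(Enum):
    UNKNOWN = 0
    EMPTY = 1
    OCCUPIED = 2
    CONFLICT = 3

def combine(a: Label, b: Label) -> Label:
    """Combine an existing cell label with new sensor evidence."""
    if a is Label.UNKNOWN:
        return b                 # no prior information: adopt the new label
    if b is Label.UNKNOWN:
        return a                 # no new information: keep the old label
    if a is b:
        return a                 # agreement (including CONFLICT) is preserved
    return Label.CONFLICT        # any disagreement is marked for relabelling
```

Because the rule is symmetric and CONFLICT is absorbing, map cells can be updated rapidly and in any order as new sonar readings arrive, consistent with the fast-update property claimed in the abstract.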
The effect of horizontal resolution on systematic errors of the GLA forecast model
NASA Technical Reports Server (NTRS)
Chen, Tsing-Chang; Chen, Jau-Ming; Pfaendtner, James
1990-01-01
Systematic prediction errors of the Goddard Laboratory for Atmospheres (GLA) forecast system are reduced when the higher-resolution (2 x 2.5 deg) model version is used. Based on a budget analysis of the 200-mb eddy streamfunction, the improvement of stationary eddy forecasting is seen to be caused by the following mechanism: by increasing the horizontal spatial resolution of the forecast model, atmospheric diabatic heating over the three tropical continents is changed in a way that intensifies the planetary-scale divergent circulations associated with the three pairs of divergent-convergent centers over these continents. The intensified divergent circulation results in an enhancement of vorticity sources in the Northern Hemisphere. The additional vorticity is advected eastward by a stationary wave train along 30 deg N, thereby reducing systematic errors in the lower-resolution (4 x 5 deg) GLA model.
NASA Technical Reports Server (NTRS)
Larson, T. J.; Ehernberger, L. J.
1985-01-01
The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.
NASA Astrophysics Data System (ADS)
Acquaviva, Viviana; Raichoor, Anand; Gawiser, Eric
2015-05-01
We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties in the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the multi-dimensional probability distribution function in SED fitting + z parameter space, including all correlations. While the performance of joint SED fitting and photo-z estimation might be hindered by template incompleteness, we demonstrate that the latter is "flagged" by a large fraction of outliers in redshift, and that significant improvements can be achieved by using flexible stellar populations synthesis models and more realistic star formation histories. In all cases, we find that the median stellar age is better recovered than the time elapsed from the onset of star formation. Finally, we show that using a photometric redshift code such as EAZY to obtain redshift probability distributions that are then used as priors for SED fitting codes leads to only a modest bias in the SED fitting parameters and is thus a viable alternative to the simultaneous estimation of SED parameters and photometric redshifts.
The effect of systematic errors on the hybridization of optical critical dimension measurements
NASA Astrophysics Data System (ADS)
Henn, Mark-Alexander; Barnes, Bryan M.; Zhang, Nien Fan; Zhou, Hui; Silver, Richard M.
2015-06-01
In hybrid metrology two or more measurements of the same measurand are combined to provide a more reliable result that ideally incorporates the individual strengths of each of the measurement methods. While these multiple measurements may come from dissimilar metrology methods such as optical critical dimension microscopy (OCD) and scanning electron microscopy (SEM), we investigated the hybridization of similar OCD methods featuring a focus-resolved simulation study of systematic errors performed at orthogonal polarizations. Specifically, errors due to line edge and line width roughness (LER, LWR) and their superposition (LEWR) are known to contribute a systematic bias with inherent correlated errors. In order to investigate the sensitivity of the measurement to LEWR, we follow a modeling approach proposed by Kato et al. who studied the effect of LEWR on extreme ultraviolet (EUV) and deep ultraviolet (DUV) scatterometry. Similar to their findings, we have observed that LEWR leads to a systematic bias in the simulated data. Since the critical dimensions (CDs) are determined by fitting the respective model data to the measurement data by minimizing a difference measure or chi-square function, a proper description of the systematic bias is crucial to obtaining reliable results and to successful hybridization. In scatterometry, an analytical expression for the influence of LEWR on the measured orders can be derived, and accounting for this effect leads to a modification of the model function that not only depends on the critical dimensions but also on the magnitude of the roughness. For finite arrayed structures however, such an analytical expression cannot be derived. We demonstrate how to account for the systematic bias and that, if certain conditions are met, a significant improvement of the reliability of hybrid metrology for combining both dissimilar and similar measurement tools can be achieved.
Landrigan, Matthew D; Roeder, Ryan K
2009-06-19
Accumulation of fatigue microdamage in cortical bone specimens is commonly measured by a modulus or stiffness degradation after normalizing tissue heterogeneity by the initial modulus or stiffness of each specimen measured during a preloading step. In the first experiment, the initial specimen modulus defined using linear elastic beam theory (LEBT) was shown to be nonlinearly dependent on the preload level, which subsequently caused systematic error in the amount and rate of damage accumulation measured by the LEBT modulus degradation. Therefore, the secant modulus is recommended for measurements of the initial specimen modulus during preloading. In the second experiment, different measures of mechanical degradation were directly compared and shown to result in widely varying estimates of damage accumulation during fatigue. After loading to 400,000 cycles, the normalized LEBT modulus decreased by 26% and the creep strain ratio decreased by 58%, but the normalized secant modulus experienced no degradation and histology revealed no significant differences in microcrack density. The LEBT modulus was shown to include the combined effect of both elastic (recovered) and creep (accumulated) strain. Therefore, at minimum, both the secant modulus and creep should be measured throughout a test to most accurately indicate damage accumulation and account for different damage mechanisms. Histology revealed indentation of tissue adjacent to roller supports, with significant sub-surface damage beneath large indentations, accounting for 22% of the creep strain on average. The indentation of roller supports resulted in inflated measures of the LEBT modulus degradation and creep. The results of this study suggest that investigations of fatigue microdamage in cortical bone should avoid the use of four-point bending unless no other option is possible. PMID:19394019
NASA Technical Reports Server (NTRS)
Mulrooney, Dr. Mark K.; Matney, Dr. Mark J.
2007-01-01
Orbital object data acquired via optical telescopes can play a crucial role in accurately defining the space environment. Radar systems probe the characteristics of small debris by measuring the reflected electromagnetic energy from an object of the same order of size as the wavelength of the radiation. This signal is affected by electrical conductivity of the bulk of the debris object, as well as its shape and orientation. Optical measurements use reflected solar radiation with wavelengths much smaller than the size of the objects. Just as with radar, the shape and orientation of an object are important, but we only need to consider the surface electrical properties of the debris material (i.e., the surface albedo), not the bulk electromagnetic properties. As a result, these two methods are complementary in that they measure somewhat independent physical properties to estimate the same thing, debris size. Short arc optical observations such as are typical of NASA's Liquid Mirror Telescope (LMT) give enough information to estimate an Assumed Circular Orbit (ACO) and an associated range. This information, combined with the apparent magnitude, can be used to estimate an "absolute" brightness (scaled to a fixed range and phase angle). This absolute magnitude is what is used to estimate debris size. However, the shape and surface albedo effects make the size estimates subject to systematic and random errors, such that it is impossible to ascertain the size of an individual object with any certainty. However, as has been shown with radar debris measurements, that does not preclude the ability to estimate the size distribution of a number of objects statistically. After systematic errors have been eliminated (range errors, phase function assumptions, photometry) there remains a random geometric albedo distribution that relates object size to absolute magnitude. 
Measurements by the LMT of a subset of tracked debris objects with sizes estimated from their radar cross sections indicate that the random variations in the albedo follow a log-normal distribution quite well. In addition, this distribution appears to be independent of object size over a considerable range in size. Note that this relation appears to hold for debris only, where the shapes and other properties are not primarily the result of human manufacture, but of random processes. With this information in hand, it now becomes possible to estimate the actual size distribution we are sampling from. We have identified two characteristics of the space debris population that make this process tractable and by extension have developed a methodology for performing the transformation.
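The statistical recovery of a size distribution from magnitudes scattered by a log-normal albedo can be sketched with a toy Monte Carlo. All constants below (reference magnitude, albedo parameters, size distribution) are illustrative assumptions, not LMT calibration values; the point is only that a log-normal albedo makes each individual size estimate wrong by a multiplicative factor of median one, so population statistics remain recoverable.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy forward model (all constants illustrative, not LMT calibration values):
# received flux ~ albedo * d**2, so absolute magnitude M = M0 - 2.5*log10(albedo * d**2).
M0 = 30.0
d_true = rng.pareto(1.7, 100_000) + 0.05               # debris sizes in m, power-law-like
albedo = rng.lognormal(np.log(0.1), 0.5, d_true.size)  # log-normal geometric albedo
M = M0 - 2.5 * np.log10(albedo * d_true**2)

# Size estimated for each object assuming the median albedo of 0.1:
d_est = np.sqrt(10 ** (0.4 * (M0 - M)) / 0.1)

# Individual estimates are scattered by a multiplicative log-normal factor,
# but the factor has median 1, so the population size distribution survives.
ratio = d_est / d_true
```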
Local and Global Views of Systematic Errors of Atmosphere-Ocean General Circulation Models
NASA Astrophysics Data System (ADS)
Mechoso, C. Roberto; Wang, Chunzai; Lee, Sang-Ki; Zhang, Liping; Wu, Lixin
2014-05-01
Coupled Atmosphere-Ocean General Circulation Models (CGCMs) have serious systematic errors that challenge the reliability of climate predictions. One major reason for such biases is the misrepresentations of physical processes, which can be amplified by feedbacks among climate components especially in the tropics. Much effort, therefore, is dedicated to the better representation of physical processes in coordination with intense process studies. The present paper starts with a presentation of these systematic CGCM errors with an emphasis on the sea surface temperature (SST) in simulations by 22 participants in the Coupled Model Intercomparison Project phase 5 (CMIP5). Different regions are considered for discussion of model errors, including the one around the equator, the one covered by the stratocumulus decks off Peru and Namibia, and the confluence between the Angola and Benguela currents. Hypotheses on the reasons for the errors are reviewed, with particular attention on the parameterization of low-level marine clouds, model difficulties in the simulation of the ocean heat budget under the stratocumulus decks, and location of strong SST gradients. Next the presentation turns to a global perspective of the errors and their causes. It is shown that a simulated weak Atlantic Meridional Overturning Circulation (AMOC) tends to be associated with cold biases in the entire Northern Hemisphere with an atmospheric pattern that resembles the Northern Hemisphere annular mode. The AMOC weakening is also associated with a strengthening of Antarctic bottom water formation and warm SST biases in the Southern Ocean. It is also shown that cold biases in the tropical North Atlantic and West African/Indian monsoon regions during the warm season in the Northern Hemisphere have interhemispheric links with warm SST biases in the tropical southeastern Pacific and Atlantic, respectively. 
The results suggest that improving the simulation of regional processes may not suffice for a more successful CGCM performance, as the effects of remote biases may override them. Therefore, efforts to reduce CGCM errors cannot be narrowly focused on particular regions.
A Posteriori Error Estimation for a Nodal Method in Neutron Transport Calculations
Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.
1999-11-03
An a posteriori error analysis of the spatial approximation is developed for the one-dimensional Arbitrarily High Order Transport-Nodal method. The error estimator preserves the order of convergence of the method when the mesh size tends to zero with respect to the L² norm. It is based on the difference between two discrete solutions that are available from the analysis. The proposed estimator is decomposed into error indicators to allow the quantification of local errors. Some test problems with isotropic scattering are solved to compare the behavior of the true error to that of the estimated error.
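The core idea, estimating the error from the difference between two available discrete solutions, can be sketched outside the transport setting. The example below is an illustrative stand-in (a 1-D finite-difference Poisson solve on meshes h and h/2), not the Arbitrarily High Order Transport-Nodal estimator itself:

```python
import numpy as np

# Illustrative difference-based a posteriori estimator: solve -u'' = sin(pi x),
# u(0) = u(1) = 0, on meshes h and h/2, and use the difference at the shared
# nodes as local error indicators.
def solve(n):
    h = 1.0 / (n + 1)
    x = h * np.arange(1, n + 1)
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    return np.linalg.solve(A, np.sin(np.pi * x)), x, h

u_h, x, h = solve(16)          # coarse mesh
u_h2, _, _ = solve(33)         # half mesh width; shares the 16 coarse nodes
indicators = np.abs(u_h - u_h2[1::2])          # local error indicators
estimate = np.sqrt(h * np.sum(indicators**2))  # discrete L2 error estimate

# For reference: the true discrete L2 error against the exact solution
u_exact = np.sin(np.pi * x) / np.pi**2
true_err = np.sqrt(h * np.sum((u_h - u_exact) ** 2))
```

For a second-order scheme the two errors satisfy e(h/2) ≈ e(h)/4, so the estimator recovers about three-quarters of the true error while preserving the order of convergence.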
Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown
ERIC Educational Resources Information Center
Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi
2014-01-01
When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…
Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report
2016-01-01
This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.
NASA Technical Reports Server (NTRS)
Parrott, T. L.; Smith, C. D.
1977-01-01
The effect of random and systematic errors associated with the measurement of normal incidence acoustic impedance in a zero-mean-flow environment was investigated by the transmission line method. The influence of random measurement errors in the reflection coefficients and pressure minima positions was investigated by computing fractional standard deviations of the normalized impedance. Both the standard techniques of random process theory and a simplified technique were used. Over a wavelength range of 68 to 10 cm random measurement errors in the reflection coefficients and pressure minima positions could be described adequately by normal probability distributions with standard deviations of 0.001 and 0.0098 cm, respectively. An error propagation technique based on the observed concentration of the probability density functions was found to give essentially the same results but with a computation time of about 1 percent of that required for the standard technique. The results suggest that careful experimental design reduces the effect of random measurement errors to insignificant levels for moderate ranges of test specimen impedance component magnitudes. Most of the observed random scatter can be attributed to lack of control by the mounting arrangement over mechanical boundary conditions of the test sample.
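A Monte Carlo version of this error propagation is straightforward to sketch. The standing-wave relation z = (1 + Γ)/(1 − Γ) is the standard transmission-line formula; the nominal reflection magnitude, minimum position, and wavelength below are illustrative, while the error standard deviations (0.001 and 0.0098 cm) are those quoted above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo propagation of the quoted measurement errors (sigma_|R| = 0.001,
# sigma_xmin = 0.0098 cm) into the normalized impedance. Nominal reflection
# magnitude, minimum position, and wavelength are illustrative, not from the paper.
lam = 30.0                        # wavelength, cm
R0, xmin0 = 0.5, 7.0              # nominal |R| and first pressure-minimum position, cm
n = 100_000

R = R0 + rng.normal(0.0, 0.001, n)
xmin = xmin0 + rng.normal(0.0, 0.0098, n)
phase = 4.0 * np.pi * xmin / lam - np.pi        # reflection phase from the minimum position
gamma = R * np.exp(1j * phase)
z = (1.0 + gamma) / (1.0 - gamma)               # normalized impedance, transmission-line relation

frac_sd_re = np.std(z.real) / abs(np.mean(z.real))   # fractional standard deviations
frac_sd_im = np.std(z.imag) / abs(np.mean(z.imag))
```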
Evaluating concentration estimation errors in ELISA microarray experiments
Daly, Don S.; White, Amanda M.; Varnum, Susan M.; Anderson, Kevin K.; Zangar, Richard C.
2005-01-26
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
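As a sketch of propagation of error through a standard curve, the example below fits a straight line in log-log space (a stand-in for the paper's standard-curve model, with made-up calibration data) and propagates the fitted-parameter covariance into a predicted concentration via the delta method; replicate noise in the unknown sample would add a further term:

```python
import numpy as np

rng = np.random.default_rng(7)

# Made-up calibration standards and signals (the power law and 5% noise are illustrative).
conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])   # known standards, ng/ml
signal = 50.0 * conc**0.8 * rng.lognormal(0.0, 0.05, conc.size)

# Fit log(signal) = a*log(conc) + b, keeping the parameter covariance.
(a, b), cov = np.polyfit(np.log(conc), np.log(signal), 1, cov=True)

def predict_conc(y):
    """Invert the curve and propagate parameter covariance (delta method)."""
    x = (np.log(y) - b) / a                  # log concentration
    g = np.array([-x / a, -1.0 / a])         # gradient d x / d(a, b)
    var_x = g @ cov @ g
    c = np.exp(x)
    return c, c * np.sqrt(var_x)             # prediction and approximate s.e.

c_hat, c_se = predict_conc(1000.0)
```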
NASA Astrophysics Data System (ADS)
de la Torre, Sylvain; Guzzo, Luigi
2012-11-01
We investigate the ability of state-of-the-art redshift-space distortion models for the galaxy anisotropic two-point correlation function, ξ(r⊥, r∥), to recover precise and unbiased estimates of the linear growth rate of structure f, when applied to catalogues of galaxies characterized by a realistic bias relation. To this aim, we make use of a set of simulated catalogues at z = 0.1 and 1 with different luminosity thresholds, obtained by populating dark matter haloes from a large N-body simulation using halo occupation prescriptions. We examine the most recent developments in redshift-space distortion modelling, which account for non-linearities on both small and intermediate scales produced, respectively, by randomized motions in virialized structures and non-linear coupling between the density and velocity fields. We consider the possibility of including the linear component of galaxy bias as a free parameter and directly estimate the growth rate of structure f. Results are compared to those obtained using the standard dispersion model, over different ranges of scales. We find that the model of Taruya et al., the most sophisticated one considered in this analysis, provides in general the most unbiased estimates of the growth rate of structure, with systematic errors within ±4 per cent over a wide range of galaxy populations spanning luminosities between L > L* and L > 3L*. The scale dependence of galaxy bias plays a role in recovering unbiased estimates of f when fitting quasi-non-linear scales. Its effect is particularly severe for the most luminous galaxies, for which systematic effects in the modelling might be more difficult to mitigate and have to be further investigated. Finally, we also test the impact of neglecting the presence of non-negligible velocity bias with respect to mass in the galaxy catalogues. 
This can produce an additional systematic error of the order of 1-3 per cent depending on the redshift, comparable to the statistical errors we aim to achieve with future high-precision surveys such as Euclid.
The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors
Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter
2010-07-15
Purpose: Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. Methods: 9 head and neck (H and N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (±1 mm in two banks, ±0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. Results: The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H and N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. Conclusions: None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.
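The gamma index used above combines a dose-difference and a distance-to-agreement criterion. A minimal 1-D sketch with toy dose profiles (global normalization; the Gaussian profile and 1 mm shift are invented to mimic an MLC-offset-like error):

```python
import numpy as np

# Minimal 1-D gamma-index sketch (global dose normalization); positions in mm.
def gamma_index(x_eval, d_eval, x_ref, d_ref, dose_tol=0.03, dta_mm=3.0):
    dmax = d_ref.max()
    gam = np.empty_like(d_eval)
    for i, (x, d) in enumerate(zip(x_eval, d_eval)):
        dist2 = ((x - x_ref) / dta_mm) ** 2            # distance term vs every reference point
        dose2 = ((d - d_ref) / (dose_tol * dmax)) ** 2  # dose-difference term
        gam[i] = np.sqrt(np.min(dist2 + dose2))
    return gam

x = np.linspace(0.0, 100.0, 201)                 # 0.5 mm grid
ref = np.exp(-((x - 50.0) / 20.0) ** 2)          # toy reference dose profile
meas = np.exp(-((x - 51.0) / 20.0) ** 2)         # same profile shifted 1 mm
gam = gamma_index(x, meas, x, ref)
passing = np.mean(gam <= 1.0)                    # fraction of points passing 3%/3 mm
```

A pure 1 mm shift passes 3%/3 mm everywhere, which illustrates the paper's conclusion that small systematic offsets are hard to detect with these criteria.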
Mao, J.; Robock, A.
1998-07-01
Thirty surface air temperature simulations for 1979-88 by 29 atmospheric general circulation models are analyzed and compared with the observations over land. These models were run as part of the Atmospheric Model Intercomparison Project (AMIP). Several simulations showed serious systematic errors, up to 4-5 °C, in globally averaged land air temperature. The 16 best simulations gave rather realistic reproductions of the mean climate and seasonal cycle of global land air temperature, with an average error of -0.9 °C for the 10-yr period. The general coldness of the model simulations is consistent with previous intercomparison studies. The regional systematic errors showed very large cold biases in areas with topography and permanent ice, which implies a common deficiency in the representation of snow-ice albedo in the diverse models. The SST and sea ice specification of climatology rather than observations at high latitudes for the first three years (1979-81) caused a noticeable drift in the neighboring land air temperature simulations, compared to the rest of the years (1982-88). Unsuccessful simulation of the extreme warm (1981) and cold (1984-85) periods implies that some variations are chaotic or unpredictable, produced by internal atmospheric dynamics and not forced by global SST patterns.
Castle, R.O.; Brown, B.W., Jr.; Gilmore, T.D.; Mark, R.K.; Wilson, R.C.
1983-01-01
Appraisals of the two levelings that formed the southern California field test for the accumulation of the atmospheric refraction error indicate that random error and systematic error unrelated to refraction competed with the systematic refraction error and severely complicate any analysis of the test results. If the fewer than one-third of the sections that met less than second-order, class I standards are dropped, the divergence virtually disappears between the presumably more refraction contaminated long-sight-length survey and the less contaminated short-sight-length survey. -Authors
NASA Astrophysics Data System (ADS)
Smith, Jeffrey C.; Stumpe, M. C.; Van Cleve, J.; Jenkins, J. M.; Barclay, T. S.; Fanelli, M. N.; Girouard, F.; Kolodziejczak, J.; McCauliff, S.; Morris, R. L.; Twicken, J. D.
2012-05-01
We present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data where a subset of highly correlated and quiet stars is used to generate a cotrending basis vector set which is, in turn, used to establish a range of "reasonable" robust fit parameters. These robust fit parameters are then used to generate a "Bayesian Prior" and a "Bayesian Posterior" PDF (Probability Distribution Function). When maximized, the posterior PDF finds the best fit that simultaneously removes systematic effects while reducing the signal distortion and noise injection which commonly afflicts simple Least Squares (LS) fitting. A numerical and empirical approach is taken where the Bayesian Prior PDFs are generated from fits to the light curve distributions themselves versus an analytical approach, which uses a Gaussian fit to the Priors. Recent improvements to the algorithm are presented including entropy cleaning of basis vectors, better light curve normalization methods, application to short cadence data and a goodness metric which can be used to numerically evaluate the performance of the cotrending. The goodness metric can then be introduced into the merit function as a Lagrange multiplier and the fit iterated to improve performance. Funding for the Kepler Discovery Mission is provided by NASA's Science Mission Directorate.
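In its simplest Gaussian form, the MAP fit reduces to a regularized least-squares problem. The sketch below is a toy version with assumed basis vectors and an assumed Gaussian prior; the actual pipeline builds its priors empirically from the coefficient distributions rather than assuming Gaussianity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy light curve = systematic trends (cotrending basis vectors) + noise.
n, k = 500, 3
t = np.linspace(0.0, 1.0, n)
basis = np.vstack([t, np.sin(6.0 * t), np.cos(11.0 * t)])   # assumed basis vectors
coef_true = np.array([2.0, 0.8, -0.5])
flux = coef_true @ basis + 0.05 * rng.normal(size=n)

# Gaussian prior on coefficients, standing in for the empirically built prior PDF.
prior_mean = np.array([1.5, 1.0, -0.4])     # e.g. from robust fits to quiet stars
prior_var = 1.0
sigma2 = 0.05**2

# MAP estimate: minimize ||flux - c @ basis||^2/sigma2 + ||c - prior_mean||^2/prior_var
A = basis @ basis.T / sigma2 + np.eye(k) / prior_var
b = basis @ flux / sigma2 + prior_mean / prior_var
coef_map = np.linalg.solve(A, b)
corrected = flux - coef_map @ basis          # light curve with systematics removed
```

The prior keeps the fit within the "reasonable" coefficient range, so a quiet star is not over-fitted; with informative data the solution stays close to the least-squares answer.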
NASA Astrophysics Data System (ADS)
Hacker, Joshua; Lee, Jared; Lei, Lili
2014-05-01
Numerical weather prediction (NWP) models have deficiencies in surface and boundary layer parameterizations, which may be particularly acute over complex terrain. Structural and physical model deficiencies are often poorly understood, and can be difficult to identify. Uncertain model parameters can lead to one class of model deficiencies when they are mis-specified. By augmenting the model state variables with parameters, data assimilation can be used to estimate the parameter distributions as long as the forecasts for observed variables are linearly dependent on the parameters. Reduced forecast (background) error shows that the parameter is accounting for some component of model error. Ensemble data assimilation has the favorable characteristic of providing ensemble-mean parameter estimates, eliminating some noise in the estimates when additional constraints on the error dynamics are unknown. This study focuses on coupling the Weather Research and Forecasting (WRF) NWP model with the Data Assimilation Research Testbed (DART) to estimate the Zilitinkevich parameter (CZIL). CZIL controls the thermal 'roughness length' for a given momentum roughness, thereby controlling heat and moisture fluxes through the surface layer by specifying the (unobservable) aerodynamic surface temperature. Month-long data assimilation experiments with 96 ensemble members, and grid spacing down to 3.3 km, provide a data set for interpreting parametric model errors in complex terrain. Experiments are during fall 2012 over the western U.S., and radiosonde, aircraft, satellite wind, surface, and mesonet observations are assimilated every 3 hours. One ensemble has a globally constant value of CZIL=0.1 (the WRF default value), while a second ensemble allows CZIL to vary over the range [0.01, 0.99], with distributions updated via the assimilation. Results show that the CZIL estimates do vary in time and space. 
Most often, forecasts are more skillful with the updated parameter values, compared to the fixed default values, suggesting that the parameters account for some systematic errors. Because the parameters can account for multiple sources of errors, the importance of terrain in determining surface-layer errors can be deduced from parameter estimates in complex terrain; parameter estimates with spatial scales similar to the terrain indicate that terrain is responsible for surface-layer model errors. We will also comment on whether residual errors in the state estimates and predictions appear to suggest further parametric model error, or some other source of error that may arise from incorrect similarity functions in the surface-layer schemes.
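State augmentation with an ensemble update can be illustrated with a toy scalar model; the relaxation dynamics, observation error, and all numbers below are illustrative stand-ins for WRF/DART and CZIL:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy model: the state relaxes toward 20 at a rate set by parameter c.
def step(x, c):
    return x + 0.1 * c * (20.0 - x)

c_true, n_ens = 0.8, 96
truth = 10.0
x = rng.normal(10.0, 1.0, n_ens)          # state ensemble
c = rng.uniform(0.01, 0.99, n_ens)        # augmented parameter ensemble (CZIL-like range)
c0_mean = c.mean()

for _ in range(50):                       # assimilation cycles
    truth = step(truth, c_true)
    x = step(x, c)                        # parameters carried alongside the state
    y = truth + rng.normal(0.0, 0.2)      # noisy observation of the state
    r = 0.2**2
    vx = np.var(x)
    cxc = np.mean((c - c.mean()) * (x - x.mean()))   # parameter-state covariance
    innov = y - (x + rng.normal(0.0, 0.2, n_ens))    # perturbed-observation EnKF
    x = x + (vx / (vx + r)) * innov
    c = c + (cxc / (vx + r)) * innov      # parameters updated through their covariance
```

Because members with larger c track the truth more closely, the ensemble parameter-state covariance pulls the parameter distribution toward the value that best explains the observations.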
Mitigating systematic errors in angular correlation function measurements from wide field surveys
NASA Astrophysics Data System (ADS)
Morrison, C. B.; Hildebrandt, H.
2015-12-01
We present an investigation into the effects of survey systematics such as varying depth, point spread function size, and extinction on the galaxy selection and correlation in photometric, multi-epoch, wide area surveys. We take the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) as an example. Variations in galaxy selection due to systematics are found to cause density fluctuations of up to 10 per cent for some small fraction of the area for most galaxy redshift slices and as much as 50 per cent for some extreme cases of faint high-redshift samples. This results in correlations of galaxies against survey systematics of order ~1 per cent when averaged over the survey area. We present an empirical method for mitigating these systematic correlations from measurements of angular correlation functions using weighted random points. These weighted random catalogues are estimated from the observed galaxy overdensities by mapping these to survey parameters. We are able to model and mitigate the effect of systematic correlations allowing for non-linear dependences of density on systematics. Applied to CFHTLenS, we find that the method reduces spurious correlations in the data by a factor of 2 for most galaxy samples and as much as an order of magnitude in others. Such a treatment is particularly important for an unbiased estimation of very small correlation signals, as e.g. from weak gravitational lensing magnification bias. We impose a criterion for using a galaxy sample in a magnification measurement: the majority of the systematic correlations must show improvement and be less than 10 per cent of the expected magnification signal when combined in the galaxy cross-correlation. After correction the galaxy samples in CFHTLenS satisfy this criterion for zphot < 0.9 and will be used in a future analysis of magnification.
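The weighted-randoms construction can be sketched as follows: measure the mean galaxy density as a function of a survey systematic, then weight random points by that relation so the randoms inherit the same spurious modulation. The mock "depth" map and the 20 per cent per magnitude dependence below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# Mock survey: per-pixel "depth" systematic and galaxy counts whose mean depends
# on depth (20%/mag, invented for illustration).
npix = 10_000
depth = rng.normal(24.5, 0.3, npix)
galaxies = rng.poisson(np.clip(50.0 * (1.0 + 0.2 * (depth - 24.5)), 0.0, None))
raw = np.corrcoef(galaxies, depth)[0, 1]     # spurious galaxy-systematic correlation

# Empirical density-vs-systematic relation in quantile bins of depth
edges = np.quantile(depth, np.linspace(0, 1, 11))
idx = np.clip(np.digitize(depth, edges) - 1, 0, 9)
rel = np.array([galaxies[idx == b].mean() for b in range(10)])
rel /= rel.mean()

# Random points in a pixel get that pixel's expected modulation as a weight, so
# normalizing the galaxy counts by it removes the spurious correlation.
weight = rel[idx]
residual = np.corrcoef(galaxies / weight, depth)[0, 1]
```

Binning the relation (rather than fitting a line) is what allows the non-linear density-systematic dependences mentioned above.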
An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method
NASA Technical Reports Server (NTRS)
Frisbee, Joseph H., Jr.
2011-01-01
State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple two-observer, measurement-error-only problem.
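The flavor of the comparison, a formal weighted-least-squares covariance versus one built empirically from realized estimation errors, can be sketched with a Monte Carlo stand-in (the paper's construction works from a single solution's equations, which this toy does not reproduce):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two-parameter linear model y = H x + noise; W is the usual inverse-variance weight.
H = np.column_stack([np.ones(20), np.linspace(0.0, 1.0, 20)])
sigma = 0.1
W = np.eye(20) / sigma**2
formal = np.linalg.inv(H.T @ W @ H)       # theoretical WLS state error covariance

x_true = np.array([1.0, 2.0])
errs = []
for _ in range(2000):
    y = H @ x_true + rng.normal(0.0, sigma, 20)
    x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
    errs.append(x_hat - x_true)
empirical = np.cov(np.array(errs).T)      # covariance of realized estimation errors
```

When the noise model is correct the two matrices agree; unmodeled error sources would show up in the empirical matrix but not the formal one.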
Systematic errors on curved microstructures caused by aberrations in confocal surface metrology.
Rahlves, Maik; Roth, Bernhard; Reithmeier, Eduard
2015-04-20
Optical aberrations of microscope lenses are known as a source of systematic errors in confocal surface metrology, which has become one of the most popular methods to measure the surface topography of microstructures. We demonstrate that these errors are not constant over the entire field of view but also depend on the local slope angle of the microstructure and lead to significant deviations between the measured and the actual surface. It is shown by means of a full vectorial high NA numerical model that a change in the slope angle alters the shape of the intensity depth response of the microscope and leads to a shift of the intensity peak of up to several hundred nanometers. Comparative experimental data are presented which support the theoretical results. Our studies allow for correction of optical aberrations and, thus, increase the accuracy in profilometric measurements. PMID:25969000
NASA Technical Reports Server (NTRS)
Flamant, Cyrille N.; Schwemmer, Geary K.; Korb, C. Laurence; Evans, Keith D.; Palm, Stephen P.
1999-01-01
Remote airborne measurements of the vertical and horizontal structure of the atmospheric pressure field in the lower troposphere are made with an oxygen differential absorption lidar (DIAL). A detailed analysis of this measurement technique is provided which includes corrections for imprecise knowledge of the detector background level, the oxygen absorption line parameters, and variations in the laser output energy. In addition, we analyze other possible sources of systematic errors including spectral effects related to aerosol and molecular scattering, interference by rotational Raman scattering, and interference by isotopic oxygen lines.
NASA Astrophysics Data System (ADS)
Zellem, Robert Thomas
2015-03-01
The > 1500 confirmed exoplanets span a wide range of planetary masses (~1 M_Earth-20 M_Jupiter), radii (~0.3 R_Earth-2 R_Jupiter), semi-major axes (~0.005-100 AU), orbital periods (~0.3-1 x 10^5 days), and host star spectral types. The effects of a widely-varying parameter space on a planetary atmosphere's chemistry and dynamics can be determined through transiting exoplanet observations. An exoplanet's atmospheric signal, either in absorption or emission, is on the order of 0.1%, which is dwarfed by telescope-specific systematic error sources of up to 60%. This thesis explores some of the major sources of error and their removal from space- and ground-based observations, specifically Spitzer/IRAC single-object photometry, IRTF/SpeX and Palomar/TripleSpec low-resolution single-slit near-infrared spectroscopy, and Kuiper/Mont4k multi-object photometry. The errors include pointing-induced uncertainties, airmass variations, seeing-induced signal loss, telescope jitter, and system variability. They are treated with detector efficiency pixel-mapping, normalization routines, a principal component analysis, binning with the geometric mean in Fourier-space, characterization by a comparison star, repeatability, and stellar monitoring to get to within a few times the photon noise limit. As a result, these observations provide strong measurements of an exoplanet's dynamical day-to-night heat transport, constrain its CH4 abundance, investigate emission mechanisms, and develop an observing strategy with smaller telescopes. The reduction methods presented here can also be applied to other existing and future platforms to identify and remove systematic errors. Until such sources of uncertainty are characterized with bright systems with large planetary signals for platforms such as the James Webb Space Telescope, for example, one cannot resolve smaller objects with more subtle spectral features, as expected of exo-Earths.
Field evaluation of distance-estimation error during wetland-dependent bird surveys
Nadeau, Christopher P.; Conway, Courtney J.
2012-01-01
Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (x̄_error = -9 m, s.d._error = 47 m) and when estimating distances to real birds during field trials (x̄_error = 39 m, s.d._error = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates, including the measured distance from the bird to the surveyor, the volume of the call and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. Conclusions: Our results suggest that distance-estimation error is prevalent, but surveyor training may be the easiest way to reduce distance-estimation error. 
Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.
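The bias and precision figures quoted above (x̄_error, s.d._error) are simply the mean and standard deviation of the signed errors; with invented trial data:

```python
import numpy as np

# Invented trial data: true (measured) and surveyor-estimated distances, in metres.
true_m = np.array([25.0, 50.0, 75.0, 100.0, 150.0, 200.0])
est_m = np.array([40.0, 45.0, 90.0, 95.0, 170.0, 180.0])

err = est_m - true_m               # signed distance-estimation errors
bias = err.mean()                  # x-bar_error
precision = err.std(ddof=1)        # s.d._error
```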
Ju, Lili; Tian, Li; Wang, Desheng
2009-01-01
In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection-diffusion-reaction equations defined on surfaces in R^3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.
NASA Technical Reports Server (NTRS)
Lang, Christapher G.; Bey, Kim S. (Technical Monitor)
2002-01-01
This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
Gas hydrate estimation error associated with uncertainties of measurements and parameters
Lee, Myung W.; Collett, Timothy S.
2001-01-01
Downhole log measurements such as acoustic or electrical resistivity logs are often used to estimate in situ gas hydrate concentrations in sediment pore space. Estimation errors owing to uncertainties associated with downhole measurements and the parameters for estimation equations (weight in the acoustic method and Archie's parameters in the resistivity method) are analyzed in order to assess the accuracy of estimation of gas hydrate concentration. Accurate downhole measurements are essential for accurate estimation of the gas hydrate concentrations in sediments, particularly at low gas hydrate concentrations and when using acoustic data. Estimation errors owing to measurement errors, except the slowness error, decrease as the gas hydrate concentration increases and as porosity increases. Estimation errors owing to uncertainty in the input parameters are small in the acoustic method and may be significant in the resistivity method at low gas hydrate concentrations.
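Uncertainty propagation through the resistivity method can be sketched by Monte Carlo sampling of Archie's equation, S_w = (a R_w / (φ^m R_t))^(1/n) with hydrate saturation S_h = 1 − S_w; all nominal values and uncertainties below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

# Monte Carlo propagation of parameter/measurement uncertainty through Archie's
# equation (all nominal values and sigmas illustrative).
n_mc = 100_000
a  = rng.normal(1.0, 0.1, n_mc)       # Archie coefficient
m  = rng.normal(2.0, 0.1, n_mc)       # cementation exponent
nn = rng.normal(2.0, 0.1, n_mc)       # saturation exponent
Rt = rng.normal(5.0, 0.25, n_mc)      # measured resistivity, ohm-m (5% error)
Rw, phi = 0.25, 0.4                   # pore-water resistivity, porosity

Sw = (a * Rw / (phi**m * Rt)) ** (1.0 / nn)   # water saturation
Sh = np.clip(1.0 - Sw, 0.0, 1.0)              # gas hydrate saturation
```

The spread of Sh quantifies the estimation error; repeating the exercise at lower hydrate concentrations (Rt closer to the water-saturated value) shows the larger relative errors noted above.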
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan
2013-01-01
Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS and VLBI baselines and in LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada, published in 2000, on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
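The practical consequence of the multiplicative model can be sketched numerically. The example below is an invented one-parameter fit, not the paper's three LS adjustments: because Var(y_i) = (a t_i sigma)^2 grows with the signal, a weighted LS with weights 1/Var(y_i) is the natural estimator.

```python
import numpy as np

rng = np.random.default_rng(42)

# y_i = a * t_i * (1 + e_i), e_i ~ N(0, sigma^2): a multiplicative model,
# so Var(y_i) = (a * t_i * sigma)^2 grows with the signal (invented values)
a_true, sigma = 2.5, 0.02
t = np.linspace(1.0, 100.0, 50)
y = a_true * t * (1.0 + sigma * rng.standard_normal(t.size))

# Ordinary LS ignores the signal-dependent variance
a_ols = (t @ y) / (t @ t)

# Weighted LS with weights 1/Var(y_i), one reweighting step using a_ols
# in place of the unknown true values
w = 1.0 / (a_ols * t * sigma) ** 2
a_wls = (w * t * y).sum() / (w * t * t).sum()

print(a_ols, a_wls)   # both near 2.5; the WLS estimate has smaller variance
```

Both estimates are unbiased here; the point of the weighting is the smaller variance and a meaningful variance-of-unit-weight estimate.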
Offline parameter estimation using EnKF and maximum-likelihood error covariance estimates
NASA Astrophysics Data System (ADS)
Tandeo, Pierre; Pulido, Manuel
2013-04-01
Parameterizations of physical processes represent an important source of uncertainty in climate models. These processes are governed by physical parameters, most of which are unknown and generally tuned manually. This subjective approach is excessively time-consuming and gives suboptimal results because of the flow dependency of the parameters and potential correlations between them. Moreover, whenever the horizontal resolution or the parameterization scheme changes, the physical parameters need to be completely re-evaluated. To overcome these limitations, recent works have proposed estimating the physical parameters objectively using filtering and inverse techniques. In this presentation, we follow this line and propose a novel offline parameter estimation approach. More precisely, we build a nonlinear state-space model solved within an EnKF (Ensemble Kalman Filter) framework where (i) the state of the system corresponds to the unknown physical parameters, (ii) the state evolution is modeled as a Gaussian random walk, (iii) the observation operator is the physical process and (iv) the observations are perturbed realizations of this physical process for a given set of physical parameters. Then, we use an iterative maximum-likelihood estimation of the error covariance matrices and of the first guess or background state of the EnKF. Among the error covariance matrices, we estimate those of the state equation (Q) and of the observation equation (R), respectively, to take into account correlations between the physical parameters and their flow dependency. Properly estimating these covariances, instead of prescribing them arbitrarily and estimating inflation factors, ensures convergence to the optimal physical parameters. The proposed technique is implemented and used to estimate parameters of the subgrid-scale orography scheme implemented in the ECMWF (European Centre for Medium-Range Weather Forecasts) and LMDZ (Laboratoire de Météorologie Dynamique Zoom) models.
Using a twin experiment, we demonstrate that our parameter estimation technique is relevant and outperforms the classical EnKF implementation. Moreover, the technique is flexible and could be used for online physical parameter estimation.
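A minimal numerical sketch of the scheme described above, with an invented scalar "physical process" h(theta) standing in for the parameterization, and assumed values for Q, R and the first-guess ensemble:

```python
import numpy as np

rng = np.random.default_rng(1)

def h(theta):
    # Toy observation operator standing in for the physical process
    return np.exp(0.5 * theta)

theta_true = 1.2
R = 0.05 ** 2                  # observation-error variance (assumed)
Q = 0.01 ** 2                  # random-walk variance of the parameter (assumed)
y_obs = h(theta_true) + np.sqrt(R) * rng.standard_normal(100)

# EnKF with the parameter as the state and a Gaussian random-walk evolution
n_ens = 50
theta = rng.normal(0.0, 1.0, n_ens)          # first-guess ensemble
for y in y_obs:
    theta = theta + np.sqrt(Q) * rng.standard_normal(n_ens)   # forecast step
    hx = h(theta)
    K = np.cov(theta, hx)[0, 1] / (np.var(hx, ddof=1) + R)    # Kalman gain
    # Stochastic (perturbed-observation) analysis update
    theta = theta + K * (y + np.sqrt(R) * rng.standard_normal(n_ens) - hx)

print(theta.mean())   # ensemble mean approaches theta_true = 1.2
```

In the actual application, h would be the parameterization scheme and Q, R would themselves be estimated by the iterative maximum-likelihood step rather than prescribed.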
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances.
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. The standard deviation of the measured distance, commonly between 1 and 2 mm, is the usual accuracy parameter. This parameter is often discussed in conjunction with determining the real accuracy of measurements at very short distances (5-50 m), because it is generally known that this accuracy cannot be increased simply by repeating the measurement: a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) were tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments' results, calibration procedures were designed, including a special correction function for each instrument, whose application reduces the standard deviation of the distance measurement by at least 50%. PMID:26258777
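The calibration idea can be illustrated with a toy bench test: a simulated distance meter with an invented 0.8 mm cyclic systematic error plus 0.3 mm random noise, and a per-instrument correction function fitted to the residuals at known reference distances. The form of the correction (offset, scale, one cyclic harmonic with an assumed period) is a common choice for phase-based meters, not the paper's specific function.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated bench test: reference distances 5-50 m, an invented 0.8 mm
# cyclic systematic error (period U) plus 0.3 mm random noise
d_ref = np.linspace(5.0, 50.0, 200)
U = 1.5                                          # assumed cyclic period [m]
sys_err = 0.0008 * np.sin(2 * np.pi * d_ref / U)
d_meas = d_ref + sys_err + 0.0003 * rng.standard_normal(d_ref.size)

# Per-instrument correction function: offset + scale + one cyclic harmonic
A = np.column_stack([np.ones_like(d_ref), d_ref,
                     np.sin(2 * np.pi * d_ref / U),
                     np.cos(2 * np.pi * d_ref / U)])
coef, *_ = np.linalg.lstsq(A, d_meas - d_ref, rcond=None)
d_corr = d_meas - A @ coef                       # apply the correction

sd_before = np.std(d_meas - d_ref)
sd_after = np.std(d_corr - d_ref)
print(sd_before, sd_after)   # the correction roughly halves the spread
```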
The mathematical origins of the kinetic compensation effect: 2. The effect of systematic errors.
Barrie, Patrick J
2012-01-01
The kinetic compensation effect states that there is a linear relationship between Arrhenius parameters ln A and E for a family of related processes. It is a widely observed phenomenon in many areas of science, notably heterogeneous catalysis. This paper explores mathematical, rather than physicochemical, explanations for the compensation effect in certain situations. Three different topics are covered theoretically and illustrated by examples. Firstly, the effect of systematic errors in experimental kinetic data is explored, and it is shown that these create apparent compensation effects. Secondly, analysis of kinetic data when the Arrhenius parameters depend on another parameter is examined. In the case of temperature programmed desorption (TPD) experiments when the activation energy depends on surface coverage, it is shown that a common analysis method induces a systematic error, causing an apparent compensation effect. Thirdly, the effect of analysing the temperature dependence of an overall rate of reaction, rather than a rate constant, is investigated. It is shown that this can create an apparent compensation effect, but only under some conditions. This result is illustrated by a case study for a unimolecular reaction on a catalyst surface. Overall, the work highlights the fact that, whenever a kinetic compensation effect is observed experimentally, the possibility of it having a mathematical origin should be carefully considered before any physicochemical conclusions are drawn. PMID:22080227
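The first mechanism (systematic experimental errors creating an apparent compensation effect) is easy to reproduce numerically. Below, one process with fixed Arrhenius parameters is "measured" under a range of systematic temperature offsets and fitted against the nominal temperatures; the fitted ln A and E then line up almost perfectly. All values are invented for illustration.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

T = np.linspace(500.0, 550.0, 10)     # nominal measurement temperatures [K]
lnA_true, E_true = 20.0, 120e3        # one true process (assumed values)

lnA_fit, E_fit = [], []
for dT in np.linspace(-10.0, 10.0, 21):      # systematic temperature error [K]
    # Data are generated at the (wrong) actual temperatures T + dT ...
    lnk = lnA_true - E_true / (R * (T + dT))
    # ... but fitted assuming the nominal temperatures T
    slope, intercept = np.polyfit(1.0 / T, lnk, 1)
    E_fit.append(-slope * R)
    lnA_fit.append(intercept)

# Apparent compensation: ln A and E fall on a nearly straight line
r = np.corrcoef(lnA_fit, E_fit)[0, 1]
print(r)   # correlation close to 1
```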
Improving Photometry and Stellar Signal Preservation with Pixel-Level Systematic Error Correction
NASA Technical Reports Server (NTRS)
Kolodziejczak, Jeffrey J.; Smith, Jeffrey C.; Jenkins, Jon M.
2013-01-01
The Kepler Mission has demonstrated that excellent stellar photometric performance can be achieved using apertures constructed from optimally selected CCD pixels. The methods used to correct for systematic errors, while very successful, still have some limitations in their ability to extract long-term trends in stellar flux. They also leave poorly correlated bias sources, such as drifting moiré pattern, uncorrected. We illustrate several approaches in which applying systematic error correction algorithms to the pixel time series, rather than to the co-added raw flux time series, provides significant advantages. Examples include spatially localized determination of time-varying moiré pattern biases, greater sensitivity to radiation-induced pixel sensitivity dropouts (SPSDs), improved precision of co-trending basis vectors (CBV), and a means of distinguishing stellar variability from co-trending terms even when they are correlated. For the last item, the approach enables physical interpretation of appropriately scaled coefficients derived in the fit of pixel time series to the CBV as linear combinations of various spatial derivatives of the pixel response function (PRF). We demonstrate that the residuals of a fit of the so-derived pixel coefficients to various PRF-related components can be deterministically interpreted in terms of physically meaningful quantities, such as the component of the stellar flux time series that is correlated with the CBV, as well as relative pixel gain, proper motion and parallax. The approach also enables us to parameterize and assess the limiting factors in the uncertainties of these quantities.
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Löwe, Roland; Madsen, Henrik; Mikkelsen, Peter Steen; Rieckermann, Jörg
2015-07-01
In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inference. These properties make it more suitable for off-line applications. The IND can help in diagnosing the causes of output errors and is computationally inexpensive. It produces the best results on short forecast horizons, which are typical for online applications.
NASA Astrophysics Data System (ADS)
Callegaro, L.; Pisani, M.; Ortolano, M.
2010-06-01
Johnson noise thermometers (JNT) measure the equilibrium electrical noise, proportional to thermodynamic temperature, of a sensing resistor. In the correlation method, the same resistor is connected to two amplifiers and a correlation of their outputs is performed, in order to reject the amplifiers' noise. Such rejection is not perfect: the residual correlation gives a systematic error in the JNT reading. In order to set an upper limit on such error, or to achieve a correction for it, a careful electrical modelling of the amplifiers and connections must be performed. Standard numerical simulation tools are inadequate for such modelling. In the literature, evaluations have been performed by painstakingly solving analytical models. We propose an evaluation procedure for the JNT error due to residual correlations which blends analytical and numerical approaches, with the benefits of both: a rigorous and accurate circuit noise modelling, and a fast and flexible evaluation with a user-friendly commercial tool. The method is applied to a simple but very effective ultralow-noise amplifier employed in a working JNT.
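The correlation method, and the residual systematic error it leaves, can be sketched in a few lines. The 5% correlated component below is an invented stand-in for the residual correlation the paper models via circuit analysis.

```python
import numpy as np

rng = np.random.default_rng(5)

N = 200_000
s = rng.standard_normal(N)                  # resistor thermal noise, unit variance
c = 0.05 * rng.standard_normal(N)           # assumed residual correlated noise
v1 = s + c + 2.0 * rng.standard_normal(N)   # channel 1: amplifier noise dominates
v2 = s + c + 2.0 * rng.standard_normal(N)   # channel 2: independent amplifier noise

# Averaging the product rejects the uncorrelated amplifier noise, but the
# common component c survives as a systematic offset in the reading
estimate = np.mean(v1 * v2)   # expectation = var(s) + var(c) = 1 + 0.0025
print(estimate)
```

The uncorrelated amplifier noise (variance 4 per channel) averages away, while the var(c) = 0.0025 offset does not: it must be bounded or corrected by modelling, which is the point of the paper.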
An Examination of the Spatial Distribution of Carbon Dioxide and Systematic Errors
NASA Technical Reports Server (NTRS)
Coffey, Brennan; Gunson, Mike; Frankenberg, Christian; Osterman, Greg
2011-01-01
The industrial period and modern age are characterized by the combustion of coal, oil, and natural gas for primary energy and transportation, leading to rising levels of atmospheric CO2. This increase, which is being carefully measured, has ramifications throughout the biological world. Through remote sensing, it is possible to measure how many molecules of CO2 lie in a defined column of air. However, other gases and particles, such as aerosols and water, are present in the atmosphere and make such measurements more complicated. Understanding the detailed geometry and path length of the observation is vital to computing the concentration of CO2. By comparing these satellite readings with ground-truth data from the Total Carbon Column Observing Network (TCCON), the systematic errors arising from these sources can be assessed. Once the error is understood, it can be corrected for in the retrieval algorithms to create a data set closer to the TCCON measurements. Using this process, the algorithms are being developed to reduce bias to within 0.1% of the true value worldwide. At this stage, the accuracy is within 1%, but by correcting small errors in the algorithms, such as accounting for the scattering of sunlight, the desired accuracy can be achieved.
A Systematic Approach to Sensor Selection for Aircraft Engine Health Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2009-01-01
A systematic approach for selecting an optimal suite of sensors for on-board aircraft gas turbine engine health estimation is presented. The methodology optimally chooses the engine sensor suite and the model tuning parameter vector to minimize the Kalman filter mean squared estimation error in the engine's health parameters or other unmeasured engine outputs. This technique specifically addresses the underdetermined estimation problem where there are more unknown system health parameters representing degradation than available sensor measurements. This paper presents the theoretical estimation error equations, and describes the optimization approach that is applied to select the sensors and model tuning parameters to minimize these errors. Two different model tuning parameter vector selection approaches are evaluated: the conventional approach of selecting a subset of health parameters to serve as the tuning parameters, and an alternative approach that selects tuning parameters as a linear combination of all health parameters. Results from the application of the technique to an aircraft engine simulation are presented, and compared to those from an alternative sensor selection strategy.
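A simplified, static stand-in for the selection problem (made-up matrices, and a Bayesian linear estimate in place of the Kalman filter): compute the posterior estimation error for each candidate sensor subset and keep the best one. The prior keeps the problem well-posed even when there are fewer sensors than health parameters, which mirrors the underdetermined case described above.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

n_params, n_sensors, n_pick = 6, 8, 3
H = rng.standard_normal((n_sensors, n_params))   # made-up sensor sensitivities
P0 = np.eye(n_params)                            # prior health-parameter covariance
R = 0.1 * np.eye(n_sensors)                      # sensor noise covariance

def mse(subset):
    # Summed posterior estimation error of a Bayesian linear estimate
    # y = H p + v from this sensor subset
    idx = list(subset)
    Hs, Rs = H[idx], R[np.ix_(idx, idx)]
    P_post = np.linalg.inv(np.linalg.inv(P0) + Hs.T @ np.linalg.inv(Rs) @ Hs)
    return np.trace(P_post)

# Exhaustive search over subsets of n_pick sensors
best = min(combinations(range(n_sensors), n_pick), key=mse)
print(best, round(mse(best), 3))
```

The paper's method replaces this static estimate with the Kalman filter error equations and also optimizes the tuning parameter vector, but the subset-search structure is the same.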
Estimating Precipitation Errors Using Spaceborne Surface Soil Moisture Retrievals
Technology Transfer Automated Retrieval System (TEKTRAN)
Limitations in the availability of ground-based rain gauge data currently hamper our ability to quantify errors in global precipitation products over data-poor areas of the world. Over land, these limitations may be eased by approaches based on interpreting the degree of dynamic consistency existin...
NASA Technical Reports Server (NTRS)
Borovikov, Anna; Rienecker, Michele M.; Keppenne, Christian; Johnson, Gregory C.
2004-01-01
One of the most difficult aspects of ocean state estimation is the prescription of the model forecast error covariances. The paucity of ocean observations limits our ability to estimate the covariance structures from model-observation differences. In most practical applications, simple covariances are usually prescribed. Rarely are cross-covariances between different model variables used. Here a comparison is made between a univariate Optimal Interpolation (UOI) scheme and a multivariate OI algorithm (MvOI) in the assimilation of ocean temperature. In the UOI case only temperature is updated using a Gaussian covariance function and in the MvOI salinity, zonal and meridional velocities as well as temperature, are updated using an empirically estimated multivariate covariance matrix. Earlier studies have shown that a univariate OI has a detrimental effect on the salinity and velocity fields of the model. Apparently, in a sequential framework it is important to analyze temperature and salinity together. For the MvOI an estimation of the model error statistics is made by Monte-Carlo techniques from an ensemble of model integrations. An important advantage of using an ensemble of ocean states is that it provides a natural way to estimate cross-covariances between the fields of different physical variables constituting the model state vector, at the same time incorporating the model's dynamical and thermodynamical constraints as well as the effects of physical boundaries. Only temperature observations from the Tropical Atmosphere-Ocean array have been assimilated in this study. In order to investigate the efficacy of the multivariate scheme two data assimilation experiments are validated with a large independent set of recently published subsurface observations of salinity, zonal velocity and temperature. For reference, a third control run with no data assimilation is used to check how the data assimilation affects systematic model errors. 
While the performance of the UOI and MvOI is similar with respect to the temperature field, the salinity and velocity fields are greatly improved when multivariate correction is used, as evident from the analyses of the rms differences of these fields and independent observations. The MvOI assimilation is found to improve upon the control run in generating the water masses with properties close to the observed, while the UOI failed to maintain the temperature and salinity structure.
NASA Technical Reports Server (NTRS)
Hodge, W. F.; Bryant, W. H.
1975-01-01
An output error estimation algorithm was used to evaluate the effects of both static and dynamic instrumentation errors on the estimation of aircraft stability and control parameters. A Monte Carlo error analysis, using simulated cruise flight data, was performed for a high-performance military aircraft, a large commercial transport, and a small general aviation aircraft. The results indicate that unmodeled instrumentation errors can cause inaccuracies in the estimated parameters comparable to their nominal values. However, the corresponding perturbations to the estimated output response trajectories and characteristic equation pole locations appear to be relatively small. Control input errors and dynamic lags were found to be the most significant of the error sources evaluated.
NASA Astrophysics Data System (ADS)
Zheng, Shi-Biao; Yang, Chui-Ping; Nori, Franco
2016-03-01
We investigate the effects of systematic errors in the control parameters on single-qubit gates based on nonadiabatic non-Abelian geometric holonomies and on those relying on purely dynamical evolution. It is explicitly shown that a systematic error in the Rabi frequency of the control fields affects these two kinds of gates in different ways. In the presence of this systematic error, the transformation produced by the nonadiabatic non-Abelian geometric gate is not unitary in the computational space, and the resulting gate infidelity is larger than that of the dynamical method. Our results provide a theoretical basis for choosing a suitable method for implementing elementary quantum gates in physical systems where systematic noise is the dominant noise source.
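A quick numerical check of the dynamical case (a toy calculation, not the paper's holonomic construction): a fractional Rabi-frequency error eps turns an intended X rotation by theta into one by (1+eps)*theta, and the gate infidelity grows quadratically, as (eps*theta/2)^2.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli X

def rot_x(angle):
    # exp(-i * angle/2 * sigma_x)
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * sx

theta = np.pi / 2
U_ideal = rot_x(theta)

for eps in (0.01, 0.02, 0.04):        # fractional Rabi-frequency errors
    U_err = rot_x((1 + eps) * theta)  # over-rotation caused by the error
    F = abs(np.trace(U_ideal.conj().T @ U_err)) ** 2 / 4   # gate fidelity
    print(eps, 1 - F)                 # infidelity = sin^2(eps*theta/2) ~ (eps*theta/2)**2
```

The dynamical gate stays unitary under this error; the paper's point is that the geometric gate does not even remain unitary in the computational space, which is why its infidelity is larger.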
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality, what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
ERIC Educational Resources Information Center
Kim, ChangHwan; Tamborini, Christopher R.
2012-01-01
Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…
ERIC Educational Resources Information Center
Duan, Bin; Dunlap, William P.
1997-01-01
A Monte Carlo study compared the accuracy of different estimates of the standard error of correlations corrected for restriction in range. The procedure suggested by P. Bobko and A. Rieck (1980) generated the most accurate estimates of the standard error. Aspects of accuracy are discussed. (SLD)
Phase errors estimation based on time-frequency distribution in SAR imagery
NASA Astrophysics Data System (ADS)
Zhao, Xia; Huang, Jincai
2005-10-01
Uncompensated phase errors present in synthetic-aperture-radar (SAR) data have a disastrous effect on SAR image quality. To estimate and compensate for phase errors, a new method is presented based on the time-frequency distributions of the range-compressed SAR signal. Robust phase error estimates are obtained by exploiting range redundancies. The processing results for simulated data show the validity of the proposed method.
Li, T.S.; et al.
2016-01-01
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
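The origin of a chromatic error can be sketched with synthetic photometry on invented curves: tilting the system response shifts a red star and a blue star by different amounts, and that difference cannot be absorbed by a per-exposure zeropoint. The band, blackbody sources and 10% blue-edge loss below are all assumptions for illustration.

```python
import numpy as np

lam = np.linspace(400.0, 550.0, 500)     # wavelength grid [nm], toy blue band

def bb(lam_nm, T):
    # Blackbody spectral radiance shape (arbitrary normalization)
    lam_m = lam_nm * 1e-9
    return 1.0 / (lam_m ** 5 * (np.exp(1.4388e-2 / (lam_m * T)) - 1.0))

def mag(F, S):
    # Broadband magnitude; uniform grid, so means stand in for band integrals
    return -2.5 * np.log10(np.mean(F * S) / np.mean(S))

S_nominal = np.ones_like(lam)                  # flat nominal response
S_tilted = 1.0 - 0.1 * (550.0 - lam) / 150.0   # hypothetical 10% blue-edge loss

dms = []
for T in (4000.0, 10000.0):                    # red star vs. blue star
    F = bb(lam, T)
    dms.append(mag(F, S_tilted) - mag(F, S_nominal))
print(dms)   # the two stars shift by different (even opposite-signed) amounts
```

A frame-wide zeropoint can remove the average of the two shifts but not their difference, which is exactly the color-dependent systematic error the paper quantifies with measured transmission curves.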
Anderson, K.K.
1994-05-01
Measurement error modeling is a statistical approach to the estimation of unknown model parameters which takes into account the measurement errors in all of the data. Approaches which ignore the measurement errors in so-called independent variables may yield inferior estimates of unknown model parameters. At the same time, experiment-wide variables (such as physical constants) are often treated as known without error, when in fact they were produced from prior experiments. Realistic assessments of the associated uncertainties in the experiment-wide variables can be utilized to improve the estimation of unknown model parameters. A maximum likelihood approach to incorporate measurements of experiment-wide variables and their associated uncertainties is presented here. An iterative algorithm is presented which yields estimates of unknown model parameters and their estimated covariance matrix. Further, the algorithm can be used to assess the sensitivity of the estimates and their estimated covariance matrix to the given experiment-wide variables and their associated uncertainties.
Chiplonkar, Shashi Ajit; Agte, Vaishali Vilas
2007-01-01
Individual cooked foods (104) and composite meals (92) were examined for agreement between the nutritive value estimated by indirect analysis (E) (the Indian national database of nutrient composition of raw foods, adjusted for the observed moisture contents of cooked recipes) and by chemical analysis in our laboratory (M). The extent of error incurred in using food table values with moisture correction for estimating macro- as well as micronutrients at the food level and at the daily intake level was quantified. Food samples were analyzed for contents of iron, zinc, copper, beta-carotene, riboflavin, thiamine, ascorbic acid and folic acid, and also for macronutrients, phytate and dietary fiber. The mean percent difference between E and M was 3.07+/-0.6% for energy, 5.3+/-2.0% for protein, 2.6+/-1.8% for fat and 5.1+/-0.9% for carbohydrates. The mean percent difference in vitamin contents between E and M ranged from 32% (vitamin C) to 45.5% (beta-carotene), and that for minerals from 5.6% (copper) to 19.8% (zinc). Percent E/M was computed for the daily nutrient intakes of 264 apparently healthy adults and was observed to be 108, 112, 127 and 97 for energy, protein, fat and carbohydrates, respectively. Percent E/M for intakes of copper (102) and beta-carotene (114) was closer to 100, but it was very high in the case of zinc (186), iron (202), and vitamins C (170), thiamine (190), riboflavin (181) and folic acid (165). Estimates based on food composition table values with moisture correction place macronutrients for cooked foods within +/-5%, whereas at daily intake levels the error increased up to 27%. The lack of good agreement in the case of several micronutrients indicates that the use of Indian food tables for micronutrient intakes would be inappropriate. PMID:17468077
An hp-adaptivity and error estimation for hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Bey, Kim S.
1995-01-01
This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.
Estimating extreme flood events - assumptions, uncertainty and error
NASA Astrophysics Data System (ADS)
Franks, S. W.; White, C. J.; Gensen, M.
2015-06-01
Hydrological extremes are amongst the most devastating forms of natural disasters, both in terms of lives lost and socio-economic impacts. There is consequently an imperative to robustly estimate the frequency and magnitude of hydrological extremes. Traditionally, engineers have employed purely statistical approaches to the estimation of flood risk. For example, for an observed hydrological timeseries, each annual maximum flood is extracted and a frequency distribution is fit to these data. The fitted distribution is then extrapolated to provide an estimate of the required design risk (e.g. the 1% Annual Exceedance Probability - AEP). Such traditional approaches are overly simplistic in that risk is implicitly assumed to be static; in other words, climatological processes are assumed to be randomly distributed in time. In this study, flood risk estimates obtained using traditional statistical approaches are compared with Pacific Decadal Oscillation (PDO)/El Niño-Southern Oscillation (ENSO) conditional estimates for a flood-prone catchment in eastern Australia. A paleo-reconstruction of pre-instrumental PDO/ENSO occurrence is then employed to estimate the uncertainty associated with the estimation of the 1% AEP flood. The results indicate a significant underestimation of the uncertainty associated with extreme flood events when employing traditional engineering estimates.
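The "traditional" procedure criticized above can be sketched as follows, using a Gumbel distribution (a common annual-maximum choice) fitted to synthetic data; scipy is assumed to be available and all parameters are invented. The bootstrap band shows only the sampling uncertainty, before any climate-state dependence is accounted for.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# 50 years of synthetic annual maximum flows [m^3/s] (invented parameters)
q_max = stats.gumbel_r.rvs(loc=1000.0, scale=300.0, size=50, random_state=rng)

# Fit the frequency distribution and extrapolate to the 1% AEP design flood
loc, scale = stats.gumbel_r.fit(q_max)
q100 = stats.gumbel_r.ppf(0.99, loc=loc, scale=scale)

# Bootstrap the 50-year record to expose the sampling uncertainty alone
boot = [stats.gumbel_r.ppf(0.99, *stats.gumbel_r.fit(
            rng.choice(q_max, size=q_max.size, replace=True)))
        for _ in range(200)]
print(round(q100, 1), np.percentile(boot, [5, 95]).round(1))
```

The study's argument is that even this bootstrap band understates the true uncertainty, because the record itself samples only a subset of PDO/ENSO states.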
Kim, K.
1996-08-01
The accuracy of the diagnosis obtained from a nuclear power plant fault-diagnostic advisor using neural networks is addressed in this paper in order to ensure the credibility of the diagnosis. A new error estimation scheme called error estimation by series association provides a measure of the accuracy associated with the advisor`s diagnoses. This error estimation is performed by a secondary neural network that is fed both the input features for and the outputs of the advisor. The error estimation by series association outperforms previous error estimation techniques in providing more accurate confidence information with considerably reduced computational requirements. The authors demonstrate the extensive usability of their method by applying it to a complicated transient recognition problem of 33 transient scenarios. The simulated transient data at different severities consists of 25 distinct transients for the Duane Arnold Energy Center nuclear power station ranging from a main steam line break to anticipated transient without scram (ATWS) conditions. The fault-diagnostic advisor system with the secondary error prediction network is tested on the transients at various severity levels and degraded noise conditions. The results show that the error estimation scheme provides a useful measure of the validity of the advisor`s output or diagnosis with considerable reduction in computational requirements over previous error estimation schemes.
NASA Astrophysics Data System (ADS)
Lin, Q.; Neethling, S. J.; Dobson, K. J.; Courtois, L.; Lee, P. D.
2015-04-01
X-ray micro-tomography (XMT) is increasingly used for the quantitative analysis of the volumes of features within 3D images. As with any measurement, there will be error and uncertainty associated with these measurements. In this paper a method for quantifying both the systematic and random components of this error in the measured volume is presented. The systematic error is the offset between the actual and measured volume which is consistent between different measurements and can therefore be eliminated by appropriate calibration. In XMT measurements this is often caused by an inappropriate threshold value. The random error is not associated with any systematic offset in the measured volume and could be caused, for instance, by variations in the location of the specific object relative to the voxel grid. It can be reduced by combining repeated measurements. It was found that both the systematic and random components of the error are a strong function of the size of the object measured relative to the voxel size. The relative error in the volume was found to follow an approximate power-law relationship with the volume of the object, with an exponent implying, unexpectedly, that the relative error is proportional to the radius of the object for small objects, and approximately proportional to the surface area of the object for larger objects. In an example application involving the size of mineral grains in an ore sample, the uncertainty associated with the random error in the volume is larger than the object itself for objects smaller than about 8 voxels and is greater than 10% for any object smaller than about 260 voxels. A methodology is presented for reducing the random error by combining the results from either multiple scans of the same object or scans of multiple similar objects, with an uncertainty of less than 5% requiring 12 objects of 100 voxels or 600 objects of 4 voxels.
As the systematic error in a measurement cannot be eliminated by combining the results from multiple measurements, this paper introduces a procedure for using volume standards to reduce the systematic error, especially for smaller objects where the relative error is larger.
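The size dependence of the random component can be illustrated with a toy voxelization experiment: count the voxel centres falling inside a sphere placed at random sub-voxel offsets and watch the scatter shrink as the object grows. This sketches the general mechanism only, not the paper's XMT analysis.

```python
import numpy as np

def voxel_volume(radius, offset):
    """Count unit voxels whose centres lie inside a sphere at `offset`."""
    n = int(np.ceil(radius)) + 2
    ax = np.arange(-n, n + 1)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    r2 = (x - offset[0])**2 + (y - offset[1])**2 + (z - offset[2])**2
    return int(np.sum(r2 <= radius**2))

rng = np.random.default_rng(1)
rel_err = {}
for radius in (2.0, 6.0):
    true_vol = 4.0 / 3.0 * np.pi * radius**3
    vols = [voxel_volume(radius, rng.uniform(-0.5, 0.5, 3))
            for _ in range(30)]
    # random component: scatter caused purely by sub-voxel placement
    rel_err[radius] = np.std(vols) / true_vol
```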
NASA Astrophysics Data System (ADS)
Pan, M.; Zhan, W.; Fisher, C. K.; Crow, W. T.; Wood, E. F.
2014-12-01
This study extends the popular triple collocation method for error assessment from three source estimates to an arbitrary number of source estimates, i.e., to solve the multiple collocation problem. The error assessment problem is solved through Pythagorean constraints in Hilbert space, which is slightly different from the original inner product solution but easier to extend to multiple collocation cases. The Pythagorean solution is fully equivalent to the original inner product solution for the triple collocation case. The multiple collocation problem turns out to be over-constrained, and a least-squares solution is presented. As the most critical assumption, that of uncorrelated errors, will almost surely fail in multiple collocation problems, we propose to divide the source estimates into structural categories and treat the structural and non-structural errors separately. Such error separation allows the source estimates to have their structural errors fully correlated within the same structural category, which is much more realistic than the original assumption. A new error assessment procedure is developed which performs the collocation twice, once for each type of error, and then sums the two types of errors. The new procedure is also fully backward compatible with the original triple collocation. Error assessment experiments are carried out for surface soil moisture data from multiple remote sensing models, land surface models, and in situ measurements.
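The classical three-source case that this study generalizes can be sketched with the standard covariance-based triple collocation estimator (assuming additive, mutually uncorrelated errors); the soil-moisture data below are synthetic and the source labels are illustrative.

```python
import numpy as np

def triple_collocation(x, y, z):
    """Classical triple-collocation error variances, assuming additive and
    mutually uncorrelated errors on a shared truth."""
    c = np.cov(np.vstack([x, y, z]))
    ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
    ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
    ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
    return ex, ey, ez

rng = np.random.default_rng(2)
truth = rng.normal(0.25, 0.08, 20000)           # synthetic soil moisture
x = truth + rng.normal(0, 0.02, truth.size)     # e.g. satellite retrieval
y = truth + rng.normal(0, 0.04, truth.size)     # e.g. land surface model
z = truth + rng.normal(0, 0.03, truth.size)     # e.g. in situ network
ex, ey, ez = triple_collocation(x, y, z)        # estimated error variances
```

With correlated (structural) errors the cross-covariance terms are contaminated, which is exactly the failure mode the study's category-wise treatment addresses.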
Multiclass Bayes error estimation by a feature space sampling technique
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.
1979-01-01
A general Gaussian M-class N-feature classification problem is defined. An algorithm is developed that requires the class statistics as its only input and computes the minimum probability of error through use of a combined analytical and numerical integration over a sequence of simplifying transformations of the feature space. The results are compared with previously reported results obtained by conventional techniques applied to a 2-class 4-feature discrimination problem, and to 4-class 4-feature multispectral scanner Landsat data classified by training and testing on the available data.
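For Gaussian classes with known statistics, a plain Monte Carlo estimate of the minimum probability of error gives a rough cross-check of the quantity the algorithm computes; this is not the paper's analytical/numerical integration scheme, and equal priors are assumed.

```python
import numpy as np

def mc_bayes_error(means, covs, n=200_000, seed=3):
    """Monte Carlo estimate of the minimum (Bayes) probability of error
    for equal-prior Gaussian classes with known statistics."""
    rng = np.random.default_rng(seed)
    errors, total = 0, 0
    for k, (mu, cov) in enumerate(zip(means, covs)):
        xs = rng.multivariate_normal(mu, cov, n)
        # log-likelihood of each sample under every class
        ll = np.stack([
            -0.5 * np.einsum("ij,jk,ik->i", xs - mu_j,
                             np.linalg.inv(cov_j), xs - mu_j)
            - 0.5 * np.log(np.linalg.det(cov_j))
            for mu_j, cov_j in zip(means, covs)])
        errors += int(np.sum(ll.argmax(axis=0) != k))
        total += n
    return errors / total

# two 1-D classes at +/-1, unit variance: Bayes error = Phi(-1) ~ 0.1587
est = mc_bayes_error([np.array([-1.0]), np.array([1.0])],
                     [np.eye(1), np.eye(1)])
```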
Goal-oriented explicit residual-type error estimates in XFEM
NASA Astrophysics Data System (ADS)
Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin
2013-08-01
A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.
Estimation of finite population parameters with auxiliary information and response error
González, L. M.; Singer, J. M.; Stanek, E.J.
2014-01-01
We use a finite population mixed model that accommodates response error in the survey variable of interest and auxiliary information to obtain optimal estimators of population parameters from data collected via simple random sampling. We illustrate the method with the estimation of a regression coefficient and conduct a simulation study to compare the performance of the empirical version of the proposed estimator (obtained by replacing variance components with estimates) with that of the least squares estimator usually employed in such settings. The results suggest that when the auxiliary variable distribution is skewed, the proposed estimator has a smaller mean squared error. PMID:25089123
Mapping systematic errors in helium abundance determinations using Markov Chain Monte Carlo
Aver, Erik; Olive, Keith A.; Skillman, Evan D. E-mail: olive@umn.edu
2011-03-01
Monte Carlo techniques have been used to evaluate the statistical and systematic uncertainties in the helium abundances derived from extragalactic H II regions. The helium abundance is sensitive to several physical parameters associated with the H II region. In this work, we introduce Markov Chain Monte Carlo (MCMC) methods to efficiently explore the parameter space and determine the helium abundance, the physical parameters, and the uncertainties derived from observations of metal poor nebulae. Experiments with synthetic data show that the MCMC method is superior to previous implementations (based on flux perturbation) in that it is not affected by biases due to non-physical parameter space. The MCMC analysis allows a detailed exploration of degeneracies, and, in particular, a false minimum that occurs at large values of optical depth in the He I emission lines. We demonstrate that introducing the electron temperature derived from the [O III] emission lines as a prior, in a very conservative manner, produces negligible bias and effectively eliminates the false minima occurring at large optical depth. We perform a frequentist analysis on data from several "high quality" systems. Likelihood plots illustrate degeneracies, asymmetries, and limits of the determination. In agreement with previous work, we find relatively large systematic errors, limiting the precision of the primordial helium abundance for currently available spectra.
X-ray optics metrology limited by random noise, instrumental drifts, and systematic errors
Yashchuk, Valeriy V.; Anderson, Erik H.; Barber, Samuel K.; Cambie, Rossana; Celestre, Richard; Conley, Raymond; Goldberg, Kenneth A.; McKinney, Wayne R.; Morrison, Gregory; Takacs, Peter Z.; Voronov, Dmitriy L.; Yuan, Sheng; Padmore, Howard A.
2010-07-09
Continuous, large-scale efforts to improve and develop third- and fourth-generation synchrotron radiation light sources for unprecedented high-brightness, low-emittance, and coherent x-ray beams demand diffracting and reflecting x-ray optics suitable for micro- and nano-focusing, brightness preservation, and super-high resolution. One of the major impediments to the development of x-ray optics with the required beamline performance is the inadequate present level of optical and at-wavelength metrology and the insufficient integration of the metrology into the fabrication process and into beamlines. Based on our experience at the ALS Optical Metrology Laboratory, we review the experimental methods and techniques that allow us to mitigate significant optical metrology problems related to random, systematic, and drift errors with super-high-quality x-ray optics. Measurement errors below 0.2 μrad have become routine. We present recent results from the ALS of temperature-stabilized nano-focusing optics and dedicated at-wavelength metrology. The international effort to develop a next-generation Optical Slope Measuring System (OSMS) to address these problems is also discussed. Finally, we analyze the remaining obstacles to further improvement of beamline x-ray optics and dedicated metrology, and highlight the ways we see to overcome the problems.
Boujraf, Saïd
2014-01-01
Diffusion weighted imaging uses the signal loss associated with the random thermal motion of water molecules in the presence of magnetic field gradients to derive a number of parameters that reflect the translational mobility of the water molecules in tissues. With a suitable experimental set-up, it is possible to calculate all the elements of the local diffusion tensor (DT) and derived parameters describing the behavior of the water molecules in each voxel. One of the emerging applications of the information obtained is an interpretation of the diffusion anisotropy in terms of the architecture of the underlying tissue. These interpretations can only be made provided the experimental data are sufficiently accurate. However, the DT results are susceptible to two systematic error sources: on the one hand, the presence of signal noise can lead to artificial divergence of the diffusivities; on the other hand, the use of a simplified model for the interaction of the protons with the diffusion weighting and imaging field gradients (b matrix calculation), common in the clinical setting, also leads to deviation in the derived diffusion characteristics. In this paper, we study the importance of these two sources of error on the basis of experimental data obtained on a clinical magnetic resonance imaging system for an isotropic phantom using a state-of-the-art single-shot echo planar imaging sequence. Our results show that optimal diffusion imaging requires combining a correct calculation of the b-matrix and a sufficiently large signal-to-noise ratio. PMID:24761372
Sliding mode output feedback control based on tracking error observer with disturbance estimator.
Xiao, Lingfei; Zhu, Yue
2014-07-01
For a class of systems subject to disturbances, an original output feedback sliding mode control method is presented based on a novel tracking error observer with disturbance estimator. The mathematical models of the systems are not required to be of high accuracy, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing the reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a tracking error observer is constructed. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable, and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effectiveness of the proposed method. PMID:24795033
NASA Astrophysics Data System (ADS)
Wright, M.; Ferreira, C.; Houck, M. H.
2013-12-01
The regional index-flood method of precipitation quantile estimation, which pools the records of similar gauges to increase sample size, makes the assumption of regional homogeneity. Therefore, heterogeneity in a candidate region is a major component of quantile estimation error. We propose an enumeration method for evaluating the utility of heterogeneity statistics over a small gauge network across a variety of timesteps from daily to yearly. Several heterogeneity statistics used in the literature are compared to error estimates at high non-exceedance probabilities for all possible regionalizations of twelve daily precipitation gauges in the Twin Cities region of Minnesota. The regional frequency analysis method using linear moments is employed to fit probability distributions and to estimate heterogeneity and error. Heterogeneity statistics are compared and contrasted as proxies of error, with the ultimate goal of aiding the regional frequency analyst in identifying low-error regions that least violate the homogeneity assumption.
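Sample L-moments are the building blocks of both the distribution fits and the Hosking-Wallis heterogeneity statistics mentioned above. A minimal sketch via probability-weighted moments follows; the synthetic daily precipitation record is illustrative, not the Twin Cities gauge data.

```python
import numpy as np

def sample_lmoments(x):
    """First three sample L-moments via probability-weighted moments
    (the unbiased estimators of Hosking & Wallis)."""
    xs = np.sort(np.asarray(x, float))
    n = xs.size
    i = np.arange(1, n + 1)
    b0 = xs.mean()
    b1 = np.sum((i - 1) / (n - 1) * xs) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * xs) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3

rng = np.random.default_rng(4)
site = rng.gamma(2.0, 10.0, 3650)   # ~10 years of daily precip depths (mm)
l1, l2, l3 = sample_lmoments(site)
lcv = l2 / l1   # L-CV: the ratio whose between-site dispersion drives
                # the Hosking-Wallis heterogeneity measure
```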
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
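The core idea, estimating the signal-dependent error from the difference between two spline fits of different mesh size, can be sketched with a truncated-power-basis least-squares spline. This is an illustrative stand-in for the paper's implementation; knot placements and noise level are arbitrary.

```python
import numpy as np

def lsq_cubic_spline(t, y, knots):
    """Least-squares cubic spline fit using a truncated-power basis."""
    cols = [np.ones_like(t), t, t**2, t**3]
    cols += [np.clip(t - k, 0.0, None)**3 for k in knots]
    A = np.stack(cols, axis=1)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2.0 * np.pi * t)
y = signal + rng.normal(0.0, 0.05, t.size)

coarse = lsq_cubic_spline(t, y, np.linspace(0.1, 0.9, 3))    # large mesh
fine = lsq_cubic_spline(t, y, np.linspace(0.05, 0.95, 12))   # small mesh
# the paper's idea: the coarse-minus-fine difference tracks the signal-
# dependent (F-) error of the coarse fit without knowing the signal
f_error_est = coarse - fine
```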
NASA Astrophysics Data System (ADS)
Grove, Deborah Mary
A sensitivity analysis of scattering function estimates to errors in the measurement of physical parameters is conducted. The model used is that of a shallow water channel and the application is to the implementation of the wideband estimator-correlator. The estimator-correlator is used for the active detection of multiple moving scatterers in a complex acoustic environment. The transform domain estimator-correlator is derived using group theoretic definitions and is applicable to the Heisenberg and affine groups for either narrowband or wideband processing. The received signal incorporates a narrowband/wideband spreading function. The transform domain estimator-correlator uses a total scattering function in its estimate of the echoed signal. Most of the error arises from incorrect scattering function information and noise. For the parameter sensitivity analysis, the total scattering function is decomposed into the group convolution of an environment and object scattering function. The error is shown to propagate as the group convolution of the two intermediate scattering functions. The environment scattering function is examined for errors in range and speed both theoretically by differentiation and analytically for multipath returns from a single discrete point scatterer. The errors due to incorrect channel depth, scatterer depth and sound speed are determined. The errors in the peak location of the range and speed for the multipath returns due to the addition of noise are analyzed. The errors in the object scattering function due to ignoring one of two discrete, uncorrelated point scatterers and scatterer positions are studied. Examples of error in the total scattering function due to error in both intermediate scattering functions are given. A discussion on ambiguity function resolution versus the detection statistic is given with an example of error in the total scattering function.
DETECTABILITY AND ERROR ESTIMATION IN ORBITAL FITS OF RESONANT EXTRASOLAR PLANETS
Giuppone, C. A.; Beauge, C.; Tadeu dos Santos, M.; Ferraz-Mello, S.; Michtchenko, T. A.
2009-07-10
We estimate the conditions for detectability of two planets in a 2/1 mean-motion resonance from radial velocity data, as a function of their masses, number of observations and the signal-to-noise ratio. Even for a data set of the order of 100 observations and standard deviations of the order of a few meters per second, we find that Jovian-size resonant planets are difficult to detect if the masses of the planets differ by a factor larger than ~4. This is consistent with the present population of real exosystems in the 2/1 commensurability, most of which have resonant pairs with similar minimum masses, and could indicate that many other resonant systems exist, but are currently beyond the detectability limit. Furthermore, we analyze the error distribution in masses and orbital elements of orbital fits from synthetic data sets for resonant planets in the 2/1 commensurability. For various mass ratios and number of data points we find that the eccentricity of the outer planet is systematically overestimated, although the inner planet's eccentricity suffers a much smaller effect. If the initial conditions correspond to small-amplitude oscillations around stable apsidal corotation resonances, the amplitudes estimated from the orbital fits are biased toward larger amplitudes, in accordance to results found in real resonant extrasolar systems.
Space-Time Error Representation and Estimation in Navier-Stokes Calculations
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2006-01-01
The mathematical framework for a-posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas are presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder are then presented to demonstrate elements of the error representation theory for time-dependent problems.
Estimated errors in retrievals of ocean parameters from SSMIS
NASA Astrophysics Data System (ADS)
Mears, Carl A.; Smith, Deborah K.; Wentz, Frank J.
2015-06-01
Measurements made by microwave imaging radiometers can be used to retrieve several environmental parameters over the world's oceans. In this work, we calculate the uncertainty in retrievals obtained from the Special Sensor Microwave Imager Sounder (SSMIS) instrument caused by uncertainty in the input parameters to the retrieval algorithm. This work applies to the version 7 retrievals of surface wind speed, total column water vapor, total column cloud liquid water, and rain rate produced by Remote Sensing Systems. Our numerical approach allows us to calculate an estimated input-induced uncertainty for every valid retrieval during the SSMIS mission. Our uncertainty estimates are consistent with the differences observed between SSMIS wind speed and vapor measurements made by SSMIS on the F16 and F17 satellites, supporting their accuracy. The estimates do not explain the larger differences between the SSMIS measurements of wind speed and vapor and other sources of these data, consistent with the influence of more sources of uncertainty.
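The numerical approach described, propagating input-parameter uncertainty through a retrieval algorithm, can be sketched generically with a finite-difference Jacobian. The toy retrieval below is NOT the Remote Sensing Systems algorithm; the coefficients, channels, and 0.5 K input noise are all assumptions for illustration.

```python
import numpy as np

def propagate_uncertainty(f, x0, input_cov, eps=1e-6):
    """First-order uncertainty propagation: J Cov J^T with a
    finite-difference Jacobian of the retrieval f."""
    x0 = np.asarray(x0, float)
    f0 = np.atleast_1d(f(x0))
    J = np.empty((f0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x0 + dx)) - f0) / eps
    return J @ input_cov @ J.T

# toy stand-in retrieval: "wind speed" from two hypothetical
# brightness temperatures (coefficients invented for illustration)
def toy_retrieval(tb):
    return np.array([0.3 * tb[0] - 0.1 * tb[1] - 40.0])

cov_tb = np.diag([0.5**2, 0.5**2])   # assumed 0.5 K input noise, uncorrelated
out_cov = propagate_uncertainty(toy_retrieval, [180.0, 210.0], cov_tb)
sigma_wind = np.sqrt(out_cov[0, 0])
```

Running this per observation, as the abstract describes, yields an input-induced uncertainty for every valid retrieval.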
Improved estimates of coordinate error for molecular replacement
Oeffner, Robert D.; BunkÃ³czi, GÃ¡bor; McCoy, Airlie J.; Read, Randy J.
2013-11-01
A function for estimating the effective root-mean-square deviation in coordinates between two proteins has been developed that depends on both the sequence identity and the size of the protein and is optimized for use with molecular replacement in Phaser. A top peak translation-function Z-score of over 8 is found to be a reliable metric of when molecular replacement has succeeded. The estimate of the root-mean-square deviation (r.m.s.d.) in coordinates between the model and the target is an essential parameter for calibrating likelihood functions for molecular replacement (MR). Good estimates of the r.m.s.d. lead to good estimates of the variance term in the likelihood functions, which increases signal to noise and hence success rates in the MR search. Phaser has hitherto used an estimate of the r.m.s.d. that only depends on the sequence identity between the model and target and which was not optimized for the MR likelihood functions. Variance-refinement functionality was added to Phaser to enable determination of the effective r.m.s.d. that optimized the log-likelihood gain (LLG) for a correct MR solution. Variance refinement was subsequently performed on a database of over 21 000 MR problems that sampled a range of sequence identities, protein sizes and protein fold classes. Success was monitored using the translation-function Z-score (TFZ), where a TFZ of 8 or over for the top peak was found to be a reliable indicator that MR had succeeded for these cases with one molecule in the asymmetric unit. Good estimates of the r.m.s.d. are correlated with the sequence identity and the protein size. A new estimate of the r.m.s.d. that uses these two parameters in a function optimized to fit the mean of the refined variance is implemented in Phaser and improves MR outcomes. Perturbing the initial estimate of the r.m.s.d. from the mean of the distribution in steps of standard deviations of the distribution further increases MR success rates.
Nelms, Benjamin E.; Chan, Maria F.; Jarry, Geneviève; Lemire, Matthieu; Lowden, John; Hampton, Carnell
2013-11-15
Purpose: This study (1) examines a variety of real-world cases where systematic errors were not detected by widely accepted methods for IMRT/VMAT dosimetric accuracy evaluation, and (2) drills down to identify failure modes and their corresponding means for detection, diagnosis, and mitigation. The primary goal of detailing these case studies is to explore different, more sensitive methods and metrics that could be used more effectively for evaluating accuracy of dose algorithms, delivery systems, and QA devices.Methods: The authors present seven real-world case studies representing a variety of combinations of the treatment planning system (TPS), linac, delivery modality, and systematic error type. These case studies are typical of what might be used as part of an IMRT or VMAT commissioning test suite, varying in complexity. Each case study is analyzed according to TG-119 instructions for gamma passing rates and action levels for per-beam and/or composite plan dosimetric QA. Then, each case study is analyzed in-depth with advanced diagnostic methods (dose profile examination, EPID-based measurements, dose difference pattern analysis, 3D measurement-guided dose reconstruction, and dose grid inspection) and more sensitive metrics (2% local normalization/2 mm DTA and estimated DVH comparisons).Results: For these case studies, the conventional 3%/3 mm gamma passing rates exceeded 99% for IMRT per-beam analyses and ranged from 93.9% to 100% for composite plan dose analysis, well above the TG-119 action levels of 90% and 88%, respectively. However, all cases had systematic errors that were detected only by using advanced diagnostic techniques and more sensitive metrics. The systematic errors caused variable but noteworthy impact, including estimated target dose coverage loss of up to 5.5% and local dose deviations up to 31.5%. Types of errors included TPS model settings, algorithm limitations, and modeling and alignment of QA phantoms in the TPS.
Most of the errors were correctable after detection and diagnosis, and the uncorrectable errors provided useful information about system limitations, which is another key element of system commissioning.Conclusions: Many forms of relevant systematic errors can go undetected when the currently prevalent metrics for IMRT/VMAT commissioning are used. If alternative methods and metrics are used instead of (or in addition to) the conventional metrics, these errors are more likely to be detected, and only once they are detected can they be properly diagnosed and rooted out of the system. Removing systematic errors should be a goal not only of commissioning by the end users but also product validation by the manufacturers. For any systematic errors that cannot be removed, detecting and quantifying them is important as it will help the physicist understand the limits of the system and work with the manufacturer on improvements. In summary, IMRT and VMAT commissioning, along with product validation, would benefit from the retirement of the 3%/3 mm passing rates as a primary metric of performance, and the adoption instead of tighter tolerances, more diligent diagnostics, and more thorough analysis.
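The central point, that 3%/3 mm gamma passing rates can sit at 100% in the presence of a genuine systematic error while tighter tolerances flag it, can be reproduced with a toy 1-D global gamma computation. The profiles are illustrative Gaussians, not clinical dose data.

```python
import numpy as np

def gamma_pass_rate(ref_pos, ref_dose, eval_pos, eval_dose,
                    dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma index; dose tolerance is a fraction of the
    reference maximum, distances in mm."""
    dmax = ref_dose.max()
    gammas = []
    for p, d in zip(eval_pos, eval_dose):
        dd = (ref_dose - d) / (dose_tol * dmax)     # dose-difference term
        dta = (ref_pos - p) / dist_tol              # distance-to-agreement
        gammas.append(np.min(np.sqrt(dd**2 + dta**2)))
    return float(np.mean(np.array(gammas) <= 1.0))

x = np.linspace(-20.0, 20.0, 401)            # mm, 0.1 mm spacing
ref = np.exp(-x**2 / 50.0)                   # toy beam profile
shifted = np.exp(-(x - 2.5)**2 / 50.0)       # 2.5 mm systematic shift
rate_3mm = gamma_pass_rate(x, ref, x, shifted)               # 3%/3 mm
rate_2mm = gamma_pass_rate(x, ref, x, shifted, 0.02, 2.0)    # 2%/2 mm
```

The 2.5 mm shift passes 3%/3 mm everywhere yet fails part of the profile at 2%/2 mm, mirroring the paper's argument for tighter tolerances.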
NASA Astrophysics Data System (ADS)
Hamaker, Henry Chris
1995-12-01
Statistical process control (SPC) techniques often use six times the standard deviation sigma to estimate the range of errors within a process. Two assumptions are inherent in this choice of metric for the range: (1) the normal distribution adequately describes the errors, and (2) the fraction of errors falling within plus or minus 3 sigma, about 99.73%, is sufficiently large that we may consider the fraction occurring outside this range to be negligible. In state-of-the-art photomasks, however, the assumption of normality frequently breaks down, and consequently plus or minus 3 sigma is not a good estimate of the range of errors. In this study, we show that improved estimates for the effective maximum error Em, which is defined as the value for which 99.73% of all errors fall within plus or minus Em of the mean mu, may be obtained by quantifying the deviation from normality of the error distributions using the skewness and kurtosis of the error sampling. Data are presented indicating that in laser reticle-writing tools, Em less than or equal to 3 sigma. We also extend this technique for estimating the range of errors to specifications that are usually described by mu plus 3 sigma. The implications for SPC are examined.
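The gap between plus or minus 3 sigma and the effective maximum error Em is easy to demonstrate empirically: draw a skewed error sample, take the 99.73% quantile of the absolute deviation from the mean, and compare with 3 sigma. The synthetic heavy-tailed sample below gives Em greater than 3 sigma; note the paper's reticle-writer data showed the opposite inequality, so the sign of the deviation depends on the distribution.

```python
import numpy as np

rng = np.random.default_rng(6)
# skewed error sample (synthetic; illustrative of non-normal distributions)
errors = rng.lognormal(mean=0.0, sigma=0.6, size=200_000)
errors -= errors.mean()                 # centre on zero

sigma = errors.std()
three_sigma = 3.0 * sigma
# effective maximum error: 99.73% of errors fall within +/- Em of the mean
e_m = np.quantile(np.abs(errors), 0.9973)
skewness = np.mean(errors**3) / sigma**3   # departure-from-normality measure
```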
Error estimations and their biases in Monte Carlo eigenvalue calculations
Ueki, Taro; Mori, Takamasa; Nakagawa, Masayuki
1997-01-01
In the Monte Carlo eigenvalue calculation of neutron transport, the eigenvalue is calculated as the average of multiplication factors from cycles, which are called the cycle k_eff's. Biases in the estimators of the variance and intercycle covariances in Monte Carlo eigenvalue calculations are analyzed. The relations among the real and apparent values of variances and intercycle covariances are derived, where real refers to a true value that is calculated from independently repeated Monte Carlo runs and apparent refers to the expected value of estimates from a single Monte Carlo run. Next, iterative methods based on the foregoing relations are proposed to estimate the standard deviation of the eigenvalue. The methods work well for the cases in which the ratios of the real to apparent values of variances are between 1.4 and 3.1. Even in the case where the foregoing ratio is >5, >70% of the standard deviation estimates fall within 40% of the true value.
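The real-versus-apparent variance gap can be mimicked by treating the cycle k_eff sequence as an AR(1) process (an illustrative statistical model, not the transport physics): positive intercycle correlation makes the naive single-run variance of the cycle mean biased low, which is the bias the paper's iterative methods correct.

```python
import numpy as np

def apparent_real_variance(rho=0.6, n_cycles=500, n_runs=2000, seed=7):
    """Compare the apparent (single-run) and real (across independent runs)
    variance of the cycle-averaged eigenvalue for AR(1)-correlated cycles."""
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, 1.0, (n_runs, n_cycles))
    k = np.empty_like(e)
    k[:, 0] = e[:, 0]                       # stationary unit-variance start
    for t in range(1, n_cycles):
        k[:, t] = rho * k[:, t - 1] + np.sqrt(1 - rho**2) * e[:, t]
    means = k.mean(axis=1)
    real = means.var(ddof=1)                # across independent runs
    apparent = (k.var(axis=1, ddof=1) / n_cycles).mean()   # naive, per run
    return real, apparent

real, apparent = apparent_real_variance()
ratio = real / apparent   # >1: the naive estimator understates the variance
```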
EIA Corrects Errors in Its Drilling Activity Estimates Series
1998-01-01
The Energy Information Administration (EIA) has published monthly and annual estimates of oil and gas drilling activity since 1978. These data are key information for many industry analysts, serving as a leading indicator of trends in the industry and a barometer of general industry status.
Gap filling strategies and error in estimating annual soil respiration
Technology Transfer Automated Retrieval System (TEKTRAN)
Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...
Implicit polynomial representation through a fast fitting error estimation.
Rouhani, Mohammad; Sappa, Angel Domingo
2012-04-01
This paper presents a simple distance estimation for implicit polynomial fitting. It is computed as the height of a simplex built between the point and the surface (i.e., a triangle in 2-D or a tetrahedron in 3-D), which is used as a coarse but reliable estimation of the orthogonal distance. The proposed distance can be described as a function of the coefficients of the implicit polynomial. Moreover, it is differentiable and has a smooth behavior. Hence, it can be used in any gradient-based optimization. In this paper, its use in a Levenberg-Marquardt framework is shown, which is particularly suited to nonlinear least-squares problems. The proposed estimation is a generalization of the gradient-based distance estimation, which is widely used in the literature. Experimental results, both in 2-D and 3-D data sets, are provided. Comparisons with state-of-the-art techniques are presented, showing the advantages of the proposed approach. PMID:21965211
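The classical gradient-based estimate that this paper generalizes is simply |f(p)| / ||grad f(p)||, a first-order approximation of the orthogonal distance to the zero set of f. The sketch below shows that baseline on a unit circle; the paper's simplex-height construction itself is not reproduced here.

```python
import numpy as np

def gradient_distance(f, grad_f, p):
    """First-order orthogonal-distance estimate to the zero set of f."""
    return abs(f(p)) / np.linalg.norm(grad_f(p))

# unit circle as an implicit polynomial: f(x, y) = x^2 + y^2 - 1
def f(p):
    return p[0]**2 + p[1]**2 - 1.0

def grad_f(p):
    return np.array([2.0 * p[0], 2.0 * p[1]])

p = np.array([2.0, 0.0])
d_est = gradient_distance(f, grad_f, p)   # |3| / 4 = 0.75
d_true = 1.0                              # exact distance to the circle
```

Note the first-order estimate underestimates here (0.75 vs 1.0), which is why coarse-but-reliable alternatives such as the paper's simplex height are of interest inside iterative fitting loops.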
NASA Technical Reports Server (NTRS)
De Lapparent, V.; Kurtz, M. J.; Geller, M. J.
1986-01-01
Residual errors in the Seldner et al. (SSGP) map which caused a break in both the correlation function (CF) and the filamentary appearance of the Shane-Wirtanen map are examined. These errors, causing a residual rms fluctuation of 11 percent in the SSGP-corrected counts and a systematic rms offset of 8 percent in the mean count per plate, can be attributed to counting pattern and plate vignetting. Techniques for CF reconstruction in catalogs affected by plate-related systematic biases are examined, and it is concluded that accurate restoration may not be possible. Surveys designed to measure the CF at the depth of the SW counts on a scale of 2.5 deg must have systematic errors of less than or about 0.04 mag.
Error estimation and adaptive mesh refinement for parallel analysis of shell structures
NASA Technical Reports Server (NTRS)
Keating, Scott C.; Felippa, Carlos A.; Park, K. C.
1994-01-01
The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers where access to neighboring elements residing on different processors may incur significant overhead. In addition such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence for the error indicator are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on variational formulation of the element stiffness and are element-dependent. Their derivations are retained for developmental purposes. The second two indicators mimic and exceed the first two in performance but require no special formulation of the element stiffness, and can drive adaptive mesh refinement, which we demonstrate for two-dimensional plane-stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.
A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint
NASA Technical Reports Server (NTRS)
Barth, Timothy
2004-01-01
This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.
A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem
Delaigle, Aurore; Fan, Jianqing; Carroll, Raymond J.
2009-01-01
Local polynomial estimators are popular techniques for nonparametric regression estimation and have received great attention in the literature. Their simplest version, the local constant estimator, can be easily extended to the errors-in-variables context by exploiting its similarity with the deconvolution kernel density estimator. The generalization of the higher order versions of the estimator, however, is not straightforward and has remained an open problem for the last 15 years. We propose an innovative local polynomial estimator of any order in the errors-in-variables context, derive its design-adaptive asymptotic properties and study its finite sample performance on simulated examples. We not only provide a solution to a long-standing open problem, but also make methodological contributions to errors-in-variables regression, including local polynomial estimation of derivative functions. PMID:20351800
A major error in nomograms for estimating body mass index.
Kahn, H S
1991-09-01
The Surgeon General's Report on Nutrition and Health and Diet and Health each include a nomogram for determining body mass index (BMI, in kg/m2) when the subject's weight and height are known. I regret to report that the BMI nomograms in these books are highly inaccurate when compared with direct calculations of BMI. Anyone wishing to use a nomogram for the rapid estimation of BMI should be cautioned against relying on the versions that appear in these books. PMID:1877500
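The direct calculation the author recommends over the faulty nomograms is trivial to implement; a minimal sketch (the example weight and height are arbitrary illustrative numbers):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2, computed directly rather than read off a nomogram."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2

# A 70 kg subject who is 1.75 m tall:
print(round(bmi(70, 1.75), 1))  # 22.9
```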
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Larson, Mats G.
2000-01-01
We consider a posteriori error estimates for finite volume and finite element methods on arbitrary meshes subject to prescribed error functionals. Error estimates of this type are useful in a number of computational settings: (1) quantitative prediction of the numerical solution error, (2) adaptive meshing, and (3) load balancing of work on parallel computing architectures. Our analysis recasts the class of Godunov finite volume schemes as a particular form of discontinuous Galerkin method utilizing a broken space approximation obtained via reconstruction of cell-averaged data. In this general framework, weighted residual error bounds are readily obtained using duality arguments and Galerkin orthogonality. Additional consideration is given to issues such as nonlinearity, efficiency, and the relationship to other existing methods. Numerical examples are given throughout the talk to demonstrate the sharpness of the estimates and the efficiency of the techniques.
A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Larson, Mats G.; Barth, Timothy J.
1999-01-01
This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.
How well can we estimate error variance of satellite precipitation data around the world?
NASA Astrophysics Data System (ADS)
Gebregiorgis, Abebe S.; Hossain, Faisal
2015-03-01
Providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating the squared-difference prediction of satellite precipitation (hereafter used synonymously with "error variance") using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. Building on a suite of recent studies that have developed error variance models, the goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and seasons are considered as the governing factors used to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested over the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. The regression approach yielded good performance skill, with high correlation between simulated and observed error variances. The correlation ranged from 0.46 to 0.98 during the independent validation period. In most cases (~85% of the scenarios), the correlation was higher than 0.72. The error variance models also captured the spatial distribution of observed error variance adequately for all study regions while producing unbiased residual error. The approach is promising for regions where missed precipitation is not a common occurrence in satellite precipitation estimation. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features.
Therefore, the quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
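A sketch of the kind of nonlinear error-variance regression the abstract describes, fitting a power law sigma² = a·rate^b per geophysical class. The functional form and all data values here are hypothetical, chosen only to illustrate the fitting step:

```python
import math

# Made-up (precipitation rate, observed error variance) pairs for one
# topography/climate/season class; real inputs would come from gauge comparison.
rates   = [1.0, 2.0, 4.0, 8.0, 16.0]
sigma2s = [0.9, 2.1, 3.8, 8.5, 15.0]

# Fit sigma2 = a * rate^b by ordinary least squares in log-log space.
lx = [math.log(r) for r in rates]
ly = [math.log(s) for s in sigma2s]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
b = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum((x - mx) ** 2 for x in lx)
a = math.exp(my - b * mx)
print(f"sigma2 ~ {a:.2f} * rate^{b:.2f}")
```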
Round-Robin Analysis of Social Interaction: Exact and Estimated Standard Errors.
ERIC Educational Resources Information Center
Bond, Charles F., Jr.; Lashley, Brian R.
1996-01-01
The Social Relations model of D. A. Kenny estimates variances and covariances from a round-robin of two-person interactions. This paper presents a matrix formulation of the Social Relations model, using the formulation to derive exact and estimated standard errors for round-robin estimates of Social Relations parameters. (SLD)
A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings
ERIC Educational Resources Information Center
Lee, Guemin; Lewis, Daniel M.
2008-01-01
The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error…
ERIC Educational Resources Information Center
Keuning, Jos; Hemker, Bas
2014-01-01
The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…
NASA Technical Reports Server (NTRS)
Platnick, Steven; Wind, Galina; Xiong, Xiaoxiong
2011-01-01
MODIS retrievals of cloud optical thickness and effective particle radius employ a well-known VNIR/SWIR solar reflectance technique. For this type of algorithm, we evaluate the sensitivity of simultaneous retrievals of these two parameters to pixel-level (scene-dependent) radiometric error estimates, as well as to other tractable error sources.
Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates
Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approx...
NASA Astrophysics Data System (ADS)
Smith, Jeffrey C.; Jenkins, J. M.; Van Cleve, J. E.; Kolodziejczak, J.; Twicken, J. D.; Stumpe, M. C.; Fanelli, M. N.
2011-05-01
We present a Bayesian Maximum A Posteriori (MAP) approach to systematic error removal in Kepler photometric data, in which a subset of highly correlated stars is used to establish the range of "reasonable" robust fit parameters, and hence mitigate the loss of astrophysical signal and the noise injection on transit time scales (<3 d) which afflict Least Squares (LS) fitting. A numerical and empirical approach is taken, in which the Bayesian prior PDFs are generated from fits to the light curve distributions themselves, rather than an analytical approach which uses a Gaussian fit to the priors. Along with the systematic effects there are also Sudden Pixel Sensitivity Dropouts (SPSDs), resulting in abrupt steps in the light curves that should be removed. A joint fitting technique is therefore presented that simultaneously applies MAP and SPSD removal. The concept is illustrated in detail by applying MAP to publicly available Kepler data, and an overview of its application to all Kepler data collected to date is given. We show that the light curve correlation matrix after treatment is diagonal, and present diagnostics such as correlation coefficient histograms, singular value spectra, and principal component plots. The benefits of MAP are shown for variable stars with RR Lyrae, harmonic, chaotic, and eclipsing binary waveforms, and the impact of MAP on transit waveforms and the detectability of transiting planets is examined. We conclude with a discussion of current work on selecting input vectors for the design matrix, generating the prior PDFs, and suppressing high-frequency noise injection with bandpass filtering. Funding for this work is provided by the NASA Science Mission Directorate.
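The contrast between LS and MAP fitting can be sketched for a single cotrending coefficient with a Gaussian prior. This is a toy scalar version, not the actual Kepler pipeline (which fits many basis vectors with empirically derived priors); all numbers are hypothetical:

```python
# MAP estimate of w for the model y ~ N(w*x, sigma2) with prior w ~ N(mu, tau2).
# A tight prior (small tau2) restrains overfitting of the systematic trend,
# which is how MAP mitigates the signal loss afflicting plain least squares.
def map_coeff(x, y, sigma2, mu, tau2):
    """Posterior mode: (X'y/sigma2 + mu/tau2) / (X'X/sigma2 + 1/tau2)."""
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    return (sxy / sigma2 + mu / tau2) / (sxx / sigma2 + 1.0 / tau2)

x = [1.0, 2.0, 3.0, 4.0]          # systematic trend basis vector
y = [2.2, 3.9, 6.1, 8.2]          # light curve, roughly w = 2 plus noise
ls = map_coeff(x, y, 1.0, 0.0, 1e9)    # huge tau2: effectively least squares
map_ = map_coeff(x, y, 1.0, 1.0, 0.1)  # prior pulls the fit towards mu = 1
print(round(ls, 3), round(map_, 3))
```

Note how the MAP estimate is shrunk from the LS value toward the prior mean, by an amount controlled by the prior width.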
Strömberg, Sten; Nistor, Mihaela; Liu, Jing
2014-11-15
Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (~10%) at low altitude. • Ambient pressure (p) has the most substantial impact (~68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.
ERIC Educational Resources Information Center
Olejnik, Stephen F.; Algina, James
1987-01-01
Estimated Type I error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)
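The median-alignment idea behind the Brown-Forsythe procedure can be sketched directly: it is a one-way ANOVA F statistic computed on absolute deviations from the group medians (alignment on medians, rather than means as in Levene's test, is what gives its robust Type I error rate under non-normality):

```python
from statistics import median

def brown_forsythe(*groups):
    """Brown-Forsythe scale test statistic: ANOVA F on |x - group median|."""
    z = [[abs(x - median(g)) for x in g] for g in groups]
    k = len(z)                                # number of groups
    n = sum(len(g) for g in z)                # total sample size
    grand = sum(sum(g) for g in z) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in z)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in z)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Two groups with very different spreads yield a large F statistic.
print(round(brown_forsythe([1, 2, 3, 4], [10, 20, 30, 40]), 3))
```

The same statistic is available as `scipy.stats.levene(..., center='median')`, which also supplies the p-value from the F distribution.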
Spatio-temporal Error on the Discharge Estimates for the SWOT Mission
NASA Astrophysics Data System (ADS)
Biancamaria, S.; Alsdorf, D. E.; Andreadis, K. M.; Clark, E.; Durand, M.; Lettenmaier, D. P.; Mognard, N. M.; Oudin, Y.; Rodriguez, E.
2008-12-01
The Surface Water and Ocean Topography (SWOT) mission measures two key quantities over rivers: water surface elevation and slope. Water surface elevation from SWOT will have a vertical accuracy, when averaged over approximately one square kilometer, on the order of centimeters. Over reaches from 1-10 km long, SWOT slope measurements will be accurate to microradians. Elevation (depth) and slope offer the potential to produce discharge as a derived quantity. Estimates of instantaneous and temporally integrated discharge from SWOT data will also contain a certain degree of error. Two primary sources of measurement error exist. The first is the temporal sub-sampling of water elevations. For example, SWOT will sample some locations twice in the 21-day repeat cycle. If these two overpasses occurred during flood stage, an estimate of monthly discharge based on these observations would be much higher than the true value. Likewise, if estimating maximum or minimum monthly discharge, in some cases, SWOT may miss those events completely. The second source of measurement error results from the instrument's capability to accurately measure the magnitude of the water surface elevation. How this error affects discharge estimates depends on errors in the model used to derive discharge from water surface elevation. We present a global distribution of estimated relative errors in mean annual discharge based on a power law relationship between stage and discharge. Additionally, relative errors in integrated and average instantaneous monthly discharge associated with temporal sub-sampling over the proposed orbital tracks are presented for several river basins.
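The temporal sub-sampling error described above can be illustrated with a synthetic hydrograph (all numbers hypothetical, not SWOT data): a mean-discharge estimate built from observations on a 21-day repeat cycle depends strongly on where the overpasses fall relative to a short flood peak.

```python
import math

# Synthetic daily discharge: a 500 m^3/s baseflow plus a short Gaussian flood.
days = range(365)
q = [500.0 + 2000.0 * math.exp(-((d - 150) ** 2) / (2.0 * 5.0 ** 2)) for d in days]

true_annual_mean = sum(q) / len(q)

def sampled_mean(series, repeat_days, phase):
    """Mean discharge estimated from observations every `repeat_days` days."""
    obs = series[phase::repeat_days]
    return sum(obs) / len(obs)

# Try every possible alignment of the 21-day cycle against the flood.
estimates = [sampled_mean(q, 21, p) for p in range(21)]
bias_pct = [100.0 * (e - true_annual_mean) / true_annual_mean for e in estimates]
print(f"true mean {true_annual_mean:.0f}; sampling bias ranges "
      f"{min(bias_pct):+.1f}% to {max(bias_pct):+.1f}%")
```

Overpasses that hit the flood overestimate the mean by several percent; overpasses that straddle it underestimate by a similar amount, purely from sampling.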
NASA Astrophysics Data System (ADS)
Locatelli, R.; Bousquet, P.; Chevallier, F.; Fortems-Cheney, A.; Szopa, S.; Saunois, M.; Agusti-Panareda, A.; Bergmann, D.; Bian, H.; Cameron-Smith, P.; Chipperfield, M. P.; Gloor, E.; Houweling, S.; Kawa, S. R.; Krol, M.; Patra, P. K.; Prinn, R. G.; Rigby, M.; Saito, R.; Wilson, C.
2013-04-01
A modelling experiment has been conceived to assess the impact of transport model errors on the methane emissions estimated by an atmospheric inversion system. Synthetic methane observations, given by 10 different model outputs from the international TransCom-CH4 model exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the PYVAR-LMDZ-SACS inverse system to produce 10 different methane emission estimates at the global scale for the year 2005. The same set-up has been used to produce the synthetic observations and to compute flux estimates by inverse modelling, which means that only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In our framework, we show that transport model errors lead to a discrepancy of 27 Tg CH4 per year at the global scale, representing 5% of the total methane emissions. At continental and yearly scales, transport model errors have larger impacts depending on the region, ranging from 36 Tg CH4 in North America to 7 Tg CH4 in Boreal Eurasia (from 23% to 48%). At the model gridbox scale, the spread of inverse estimates can even reach 150% of the prior flux. Thus, transport model errors contribute significant uncertainties to the methane estimates obtained by inverse modelling, especially when small spatial scales are invoked. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher-resolution models. The analysis of the methane fluxes estimated in these different configurations questions the consistency of transport model errors in current inverse systems. For future methane inversions, an improvement in the modelling of atmospheric transport would make the estimates more accurate. Likewise, errors in the observation covariance matrix should be more consistently prescribed in future inversions in order to limit the impact of transport model errors on estimated methane fluxes.
Estimation of Error in Western Pacific Geoid Heights Derived from Gravity Data Only
NASA Astrophysics Data System (ADS)
Peters, M. F.; Brozena, J. M.
2012-12-01
The goal of the Western Pacific Geoid estimation project was to generate geoid height models for regions in the Western Pacific Ocean, and formal error estimates for those geoid heights, using all available gravity data and statistical parameters of the quality of the gravity data. Geoid heights were to be determined solely from gravity measurements, as a gravimetric geoid model and error estimates for that model would have applications in oceanography and satellite altimetry. The general method was to remove the gravity field associated with a lower-order spherical harmonic global gravity model from the regional gravity set, fit a covariance model to the residual gravity, and then calculate the (residual) geoid heights and error estimates by least-squares collocation using the residual gravity, available statistical estimates of the gravity, and the covariance model. The geoid heights corresponding to the lower-order spherical harmonic model can be added back to the heights from the residual gravity to produce a complete geoid height model. As input we requested from NGA all unclassified available gravity data in the western Pacific between 15° and 45° N and 105° and 141° E. The total data set used to model and estimate errors in the gravimetric geoid comprised an unclassified, open-file data set (540,012 stations), a proprietary airborne survey of Taiwan (19,234 stations), and unclassified NAVO SSP survey data (95,111 stations), for official use only. Various programs were adapted to the problem, including N. K. Pavlis' HSYNTH program and, from the GRAVSOFT package (Forsberg and Tscherning, 2008 version), the covariance fit program GPFIT and the least-squares collocation program GPCOL, which were modified to handle larger data sets; in some regions, however, the data were still too numerous. Formulas were derived that could be used to block-mean the data in a statistically optimal sense and still retain the error estimates required for the collocation algorithm.
Running the covariance fit and collocation on discrete blocks revealed an edge effect on the covariance parameter calculation that produced stepwise discontinuities in the error estimates. To eliminate this, the covariance estimation program was modified to slide along a lattice or grid (defined at runtime) of points, selecting from the larger regional data set all stations closer than a user-defined distance with an error estimate of 5 mGal standard deviation or better, and calculating covariance parameters for that location. The collocation program was modified to use these locations and GPFIT parameters, selecting all stations within a close radius and block-meaned data with associated error estimates beyond that, to calculate residual heights and error estimates on a grid centered at the covariance fit location. These grids were combined to produce the overall geoid height and error estimate sets. The error estimates, in meters, are plotted as a color-filled contour map masked by land regions. A lack of gravity data causes the area of high estimated error east of the Korean peninsula. The high error estimates north-west of Taiwan are due not to a lack of data, but rather to data with high internal estimates of measurement error or disagreement between different data sets. The visible tracking reflects the ability of high-quality data to reduce errors in gravimetric geoid height models.
Error covariance calculation for forecast bias estimation in hydrologic data assimilation
NASA Astrophysics Data System (ADS)
Pauwels, Valentijn R. N.; De Lannoy, Gabriëlle J. M.
2015-12-01
To date, an outstanding issue in hydrologic data assimilation is a proper way of dealing with forecast bias. A frequently used method to bypass this problem is to rescale the observations to the model climatology. While this approach improves the variability in the modeled soil wetness and discharge, it is not designed to correct the results for any bias. Alternatively, attempts have been made towards incorporating dynamic bias estimates into the assimilation algorithm. Persistent bias models are most often used to propagate the bias estimate, where the a priori forecast bias error covariance is calculated as a constant fraction of the unbiased a priori state error covariance. The latter approach is a simplification of the explicit propagation of the bias error covariance. The objective of this paper is to examine to what extent the choice for the propagation of the bias estimate and its error covariance influences the filter performance. An Observation System Simulation Experiment (OSSE) has been performed, in which groundwater storage observations are assimilated into a biased conceptual hydrologic model. The magnitudes of the forecast bias and state error covariances are calibrated by optimizing the innovation statistics of groundwater storage. The obtained bias propagation models are found to be identical to persistent bias models. After calibration, both approaches for the estimation of the forecast bias error covariance lead to similar results, with a realistic attribution of error variances to the bias and state estimates, and significant reductions of the bias in the estimates of both groundwater storage and discharge. Overall, the results in this paper justify the use of the traditional approach for online bias estimation with a persistent bias model and a simplified forecast bias error covariance estimation.
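A scalar sketch of the persistent-bias scheme with the constant-fraction bias error covariance described above. Everything here is hypothetical and heavily simplified (fixed state covariance P, constant truth), meant only to show the mechanics:

```python
# Persistent bias model: the bias forecast carries over unchanged, and its
# forecast error covariance is taken as a constant fraction gamma of the
# state error covariance P, rather than being propagated explicitly.

def bias_aware_update(x_f, b, P, y, R, gamma=0.3):
    """One analysis step: remove the bias estimate from the forecast, then
    update the state and the bias from the same innovation."""
    innovation = y - (x_f - b)           # observation minus bias-corrected forecast
    P_b = gamma * P                      # simplified forecast bias error covariance
    K_x = P / (P + R)                    # state gain
    K_b = P_b / (P + R)                  # bias gain
    x_a = (x_f - b) + K_x * innovation   # analysed (bias-corrected) state
    b_a = b - K_b * innovation           # updated bias estimate (persistent model)
    return x_a, b_a

# Truth is 10.0; the model persistently forecasts 2.0 units too high.
x_a, b = 12.0, 0.0
for _ in range(50):
    x_f = x_a + 2.0                      # biased forecast step
    x_a, b = bias_aware_update(x_f, b, P=1.0, y=10.0, R=0.5)
print(round(b, 2), round(x_a, 2))
```

Over the assimilation cycles the bias estimate converges to the true forecast bias (+2) while the analysed state converges to the truth.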
Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.
2006-10-01
This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.
Identification of Errors in 3D Building Models by a Robust Camera Pose Estimation
NASA Astrophysics Data System (ADS)
Iwaszczuk, D.; Stilla, U.
2014-08-01
This paper presents a method for the identification of errors in 3D building models that result from an inaccurate creation process. Error detection is carried out within the camera pose estimation. As observations, parameters of the building corners and of the line segments detected in the image are used, and conditions for the coplanarity of corresponding edges are defined. For the estimation, the uncertainty of the 3D building models and of the image features is taken into account.
Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri
2009-01-01
We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of the error is proportional to N_h^(-1/2), which are the optimal asymptotics. The methodology is verified with numerical experiments.
NASA Astrophysics Data System (ADS)
Frederiksen, Jorgen S.; Dix, Martin R.; Kepert, Steven M.
1996-03-01
Systematic kinetic energy errors are examined in barotropic and multilevel general circulation models. The dependence of energy spectra on resolution and dissipation and, in addition for the barotropic model, on topography and the beta effect, is studied. We propose explanations for the behavior of simulated kinetic energy spectra by relating them to canonical equilibrium spectra characterized by entropy maximization. Equilibrium spectra at increased resolution tend to have increased large-scale kinetic energy and a drop in amplitude at intermediate and small scales. This qualitative behavior may also be found in forced and/or dissipative simulations if the forcing and dissipation operators acting on the common scales are very similar at different resolutions. An explanation for the 'tail wagging the dog' effect is presented. This effect, where scale-selective dissipation operators cause a drop in the tail of the energy spectra and, surprisingly, also an increase in the large-scale energy, is found to occur in both barotropic and multilevel general circulation models. It is shown to rely on the dissipation operators dissipating enstrophy while leaving the total kinetic energy approximately conserved. A new (short time) canonical equilibrium model and explanation of zonalization due to the beta effect is presented; the meridionally elongated large-scale waves are regarded as adiabatic invariants, while the zonal flow and other eddies interact and equilibrate on a short timescale.
Basis set limit and systematic errors in local-orbital based all-electron DFT
NASA Astrophysics Data System (ADS)
Blum, Volker; Behler, Jörg; Gehrke, Ralf; Reuter, Karsten; Scheffler, Matthias
2006-03-01
With the advent of efficient integration schemes,^1,2 numeric atom-centered orbitals (NAO's) are an attractive basis choice in practical density functional theory (DFT) calculations of nanostructured systems (surfaces, clusters, molecules). Though all-electron, the efficiency of practical implementations promises to be on par with the best plane-wave pseudopotential codes, while having a noticeably higher accuracy if required: Minimal-sized effective tight-binding like calculations and chemically accurate all-electron calculations are both possible within the same framework; non-periodic and periodic systems can be treated on equal footing; and the localized nature of the basis allows in principle for O(N)-like scaling. However, converging an observable with respect to the basis set is less straightforward than with competing systematic basis choices (e.g., plane waves). We here investigate the basis set limit of optimized NAO basis sets in all-electron calculations, using as examples small molecules and clusters (N2, Cu2, Cu4, Cu10). meV-level total energy convergence is possible using <=50 basis functions per atom in all cases. We also find a clear correlation between the errors which arise from underconverged basis sets, and the system geometry (interatomic distance). ^1 B. Delley, J. Chem. Phys. 92, 508 (1990), ^2 J.M. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002).
Comparison of weak lensing by NFW and Einasto halos and systematic errors
NASA Astrophysics Data System (ADS)
Sereno, Mauro; Fedeli, Cosimo; Moscardini, Lauro
2016-01-01
Recent N-body simulations have shown that Einasto radial profiles provide the most accurate description of dark matter halos. Predictions based on the traditional NFW functional form may fail to describe the structural properties of cosmic objects at the percent level required by precision cosmology. We computed the systematic errors expected for weak lensing analyses of clusters of galaxies if one wrongly models the lens density profile. Even though the NFW fits of observed tangential shear profiles can be excellent, virial masses and concentrations of very massive halos (≳ 10^15 M_sun/h) can be over- and underestimated by ~10 per cent, respectively. Misfitting effects also steepen the observed mass-concentration relation, as seen in multi-wavelength observations of galaxy groups and clusters. Based on shear analyses, Einasto and NFW halos can be set apart either with deep observations of exceptionally massive structures (≳ 2×10^15 M_sun/h) or by stacking the shear profiles of thousands of group-sized lenses (≳ 10^14 M_sun/h).
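The two profiles being compared have standard closed forms; a small sketch with unit scale parameters (the Einasto shape parameter alpha = 0.18 is a typical assumed value, not taken from the abstract):

```python
import math

def rho_nfw(r, rho_s, r_s):
    """NFW density profile: rho_s / ((r/r_s) * (1 + r/r_s)^2)."""
    x = r / r_s
    return rho_s / (x * (1 + x) ** 2)

def rho_einasto(r, rho_s, r_s, alpha=0.18):
    """Einasto profile: rho_s * exp(-(2/alpha) * ((r/r_s)^alpha - 1)).
    Its logarithmic slope varies continuously with radius, unlike NFW's."""
    x = r / r_s
    return rho_s * math.exp(-(2.0 / alpha) * (x ** alpha - 1.0))

# The two forms agree only near the scale radius and diverge away from it,
# which is the origin of the fitting biases discussed above.
for r in (0.1, 1.0, 10.0):
    ratio = rho_nfw(r, 1.0, 1.0) / rho_einasto(r, 1.0, 1.0)
    print(f"r/r_s = {r:4}: NFW/Einasto density ratio = {ratio:.3f}")
```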
Reducing systematic errors in time-frequency resolved mode number analysis
NASA Astrophysics Data System (ADS)
Horváth, L.; Poloskei, P. Zs; Papp, G.; Maraschek, M.; Schuhbeck, K. H.; Pokol, G. I.; the EUROfusion MST1 Team; the ASDEX Upgrade Team
2015-12-01
The present paper describes the effect of magnetic pick-up coil transfer functions on mode number analysis in magnetically confined fusion plasmas. Magnetic probes mounted inside the vacuum chamber are widely used to characterize the mode structure of magnetohydrodynamic modes, as, due to their relative simplicity and compact nature, several coils can be distributed over the vessel. Phase differences between the transfer functions of different magnetic pick-up coils lead to systematic errors in time- and frequency-resolved mode number analysis. This paper presents the first in situ, end-to-end calibration of a magnetic pick-up coil system, which was carried out by using an in-vessel driving coil on ASDEX Upgrade. The effect of the phase differences in the pick-up coil transfer functions is most significant in the 50–250 kHz frequency range, where the relative phase shift between the different probes can be up to 1 radian (~60°). By applying a correction based on the transfer functions we found smaller residuals of mode number fitting in the considered discharges. In most cases an order of magnitude improvement was observed in the residuals of the mode number fits, which could open the way to investigating weaker electromagnetic oscillations with even higher mode numbers.
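The correction described amounts to dividing each probe's spectral component by its measured transfer function before taking phase differences. A toy single-frequency sketch (probe geometry, mode number, and the 0.5 rad instrumental shift are all hypothetical illustration values):

```python
import cmath
import math

n_true = 3                           # assumed mode number
dphi_true = n_true * math.pi / 8     # probes separated by 22.5 degrees
H2 = cmath.exp(1j * 0.5)             # probe 2 transfer function at this frequency
                                     # (unit gain, 0.5 rad spurious phase shift)

s1 = cmath.exp(1j * 0.0)                  # probe 1 spectral component
s2 = cmath.exp(1j * dphi_true) * H2       # probe 2: true phase plus instrumental shift

raw = cmath.phase(s2 / s1)                # biased probe-to-probe phase difference
corrected = cmath.phase((s2 / H2) / s1)   # transfer-function corrected
print(round(raw, 3), round(corrected, 3), round(dphi_true, 3))
```

Fitting a mode number to the raw phase difference would be biased by the instrumental shift; the corrected phase recovers the true value exactly in this noiseless toy case.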
Accounting for systematic errors in bioluminescence imaging to improve quantitative accuracy
NASA Astrophysics Data System (ADS)
Taylor, Shelley L.; Perry, Tracey A.; Styles, Iain B.; Cobbold, Mark; Dehghani, Hamid
2015-07-01
Bioluminescence imaging (BLI) is a widely used pre-clinical imaging technique, but there are a number of limitations to its quantitative accuracy. This work uses an animal model to demonstrate some significant limitations of BLI and presents processing methods and algorithms which overcome these limitations, increasing the quantitative accuracy of the technique. The position of the imaging subject and source depth are both shown to affect the measured luminescence intensity. Free Space Modelling is used to eliminate the systematic error due to the camera/subject geometry, removing the dependence of luminescence intensity on animal position. Bioluminescence tomography (BLT) is then used to provide additional information about the depth and intensity of the source. A substantial limitation in the number of sources identified using BLI is also presented. It is shown that when a given source is at a significant depth, it can appear as multiple sources when imaged using BLI, while the use of BLT recovers the true number of sources present.
Carroll, Raymond J.; Chen, Xiaohong; Hu, Yingyao
2010-01-01
This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest — the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates — is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach. PMID:20495685
Solving large tomographic linear systems: size reduction and error estimation
NASA Astrophysics Data System (ADS)
Voronin, Sergey; Mikesell, Dylan; Slezak, Inna; Nolet, Guust
2014-10-01
We present a new approach to reduce a sparse, linear system of equations associated with tomographic inverse problems. We begin by making a modification to the commonly used compressed sparse-row format, whereby our format is tailored to the sparse structure of finite-frequency (volume) sensitivity kernels in seismic tomography. Next, we cluster the sparse matrix rows to divide a large matrix into smaller subsets representing ray paths that are geographically close. Singular value decomposition of each subset allows us to project the data onto a subspace associated with the largest eigenvalues of the subset. After projection we reject those data that have a signal-to-noise ratio (SNR) below a chosen threshold. Clustering in this way assures that the sparse nature of the system is minimally affected by the projection. Moreover, our approach allows for a precise estimation of the noise affecting the data while also giving us the ability to identify outliers. We illustrate the method by reducing large matrices computed for global tomographic systems with cross-correlation body wave delays, as well as with surface wave phase velocity anomalies. For a massive matrix computed for 3.7 million Rayleigh wave phase velocity measurements, imposing a threshold of 1 for the SNR, we condensed the matrix size from 1103 to 63 Gbyte. For a global data set of multiple-frequency P wave delays from 60 well-distributed deep earthquakes we obtain a reduction to 5.9 per cent. This type of reduction allows one to avoid loss of information due to underparametrizing models. Alternatively, if data have to be rejected to fit the system into computer memory, it assures that the most important data are preserved.
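The cluster-then-project reduction described above can be sketched in a few lines; this is a schematic reconstruction, not the authors' code, and the function name, SNR definition, and array shapes are illustrative assumptions:

```python
import numpy as np

def reduce_cluster(A, d, noise_sigma, snr_threshold=1.0):
    """Project one geographically clustered block of matrix rows onto its
    dominant singular subspace and keep only projected data whose
    signal-to-noise ratio exceeds a threshold.

    A : (m, n) sub-matrix of clustered sensitivity-kernel rows
    d : (m,) corresponding data (e.g. cross-correlation delays)
    """
    # SVD of the cluster: A = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    d_proj = U.T @ d                      # data rotated into the singular basis
    snr = np.abs(d_proj) / noise_sigma    # illustrative SNR definition
    keep = snr >= snr_threshold
    # Rows of the reduced system are S_i * V_i^T, with data d_proj_i
    A_red = S[keep, None] * Vt[keep]
    return A_red, d_proj[keep]
```

Because each SVD is computed per geographic cluster, the projected rows stay supported on that cluster's columns, which is what keeps the reduced system sparse.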
Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...
A non-orthogonal SVD-based decomposition for phase invariant error-related potential estimation.
Phlypo, Ronald; Jrad, Nisrine; Rousseau, Sandra; Congedo, Marco
2011-01-01
The estimation of the Error Related Potential from a set of trials is a challenging problem. Indeed, the Error Related Potential is of low amplitude compared to the ongoing electroencephalographic activity. In addition, simple summing over the different trials is prone to errors, since the waveform does not appear at an exact latency with respect to the trigger. In this work, we propose a method to cope with the discrepancy of these latencies of the Error Related Potential waveform and offer a framework in which the estimation of the Error Related Potential waveform reduces to a simple Singular Value Decomposition of an analytic waveform representation of the observed signal. The followed approach is promising, since we are able to explain a higher portion of the variance of the observed signal with fewer components in the expansion. PMID:22255940
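The core idea, an SVD of an analytic (complex) representation of the trials so that latency jitter becomes a phase factor, can be sketched as below. This is a minimal illustration under assumed conventions (Hilbert-transform analytic signal, rank-1 truncation); the paper's actual decomposition is non-orthogonal and more elaborate:

```python
import numpy as np
from scipy.signal import hilbert

def erp_waveform(trials):
    """Rank-1 SVD estimate of an evoked waveform from the analytic
    representation of a set of trials.

    trials : (n_trials, n_samples) real epochs time-locked to a trigger
    """
    # The analytic signal turns small latency shifts into phase factors,
    # which a complex rank-1 factorization can absorb, unlike plain
    # trial averaging, which smears jittered waveforms.
    Z = hilbert(trials, axis=-1)
    U, S, Vh = np.linalg.svd(Z, full_matrices=False)
    # The dominant right singular vector carries the common waveform
    # (up to an overall complex phase); scale back to trial amplitude.
    return np.real(S[0] * Vh[0]) / np.sqrt(trials.shape[0])
```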
How Well Can We Estimate Error Variance of Satellite Precipitation Data Around the World?
NASA Astrophysics Data System (ADS)
Gebregiorgis, A. S.; Hossain, F.
2014-12-01
The traditional approach of measuring precipitation by placing a probe on the ground will likely never be adequate or affordable in most parts of the world. Fortunately, satellites today provide a continuous global bird's-eye view (above ground) at any given location. However, the usefulness of such precipitation products for hydrological applications depends on their error characteristics. Thus, providing error information associated with existing satellite precipitation estimates is crucial to advancing applications in hydrologic modeling. In this study, we present a method of estimating satellite precipitation error variance using a regression model for three satellite precipitation products (3B42RT, CMORPH, and PERSIANN-CCS), based on easily available geophysical features and the satellite precipitation rate. The goal of this work is to explore how well the method works around the world in diverse geophysical settings. Topography, climate, and seasons are considered as the governing factors used to segregate the satellite precipitation uncertainty and fit a nonlinear regression equation as a function of satellite precipitation rate. The error variance models were tested over the USA, Asia, the Middle East, and the Mediterranean region. A rain-gauge-based precipitation product was used to validate the error variance of the satellite precipitation products. Our study attests that transferability of model estimators (which help to estimate the error variance) from one region to another is practically possible by leveraging the similarity in geophysical features. Therefore, the quantitative picture of satellite precipitation error over ungauged regions can be discerned even in the absence of ground truth data.
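A nonlinear regression of error variance against satellite rain rate, as described above, might be sketched as follows. The power-law form and the coefficients here are illustrative assumptions; the paper's actual functional form and fitted values are not reproduced:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical regression form: error variance as a power law of the
# satellite rain rate (illustrative only).
def error_variance(rate, a, b):
    return a * np.power(rate, b)

# Synthetic "training" data standing in for gauge-validated error variances
rng = np.random.default_rng(42)
rate = rng.uniform(0.5, 20.0, 500)          # satellite rain rate, mm/h
true_a, true_b = 0.8, 1.4
var_obs = error_variance(rate, true_a, true_b) * rng.normal(1.0, 0.05, rate.size)

# Fit the nonlinear regression; (a_hat, b_hat) are the model estimators
# that the study tests for transferability across regions.
(a_hat, b_hat), _ = curve_fit(error_variance, rate, var_obs, p0=(1.0, 1.0))
```

Transferability then amounts to fitting (a, b) in one gauged region and evaluating the same estimators on rain rates from an ungauged region with similar geophysical features.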
Soil Moisture Background Error Covariance Estimation in a Land-Atmosphere Coupled Model
NASA Astrophysics Data System (ADS)
Lin, L. F.; Ebtehaj, M.; Flores, A. N.; Wang, J.; Bras, R. L.
2014-12-01
The objective of this study is to estimate the space-time dynamics of the soil moisture background error in a coupled land-atmosphere model, to better understand land-atmosphere interactions and soil moisture dynamics through data assimilation. To this end, we conducted forecast experiments over eight calendar years, from 2006 to 2013, using the Weather Research and Forecasting (WRF) model coupled with the Noah land surface model, and estimated the background error statistics based on the National Meteorological Center (NMC) methodology. All the WRF-Noah simulations were initialized with the National Centers for Environmental Prediction (NCEP) FNL operational global analysis dataset. In our study domain, covering the contiguous United States, the results show that the soil moisture background error exhibits strong seasonal and regional patterns, with the highest magnitude occurring during the summer at the top soil layer over most regions of the Great Plains. It is also revealed that the soil moisture background errors are strongly biased in some regions, especially the Southeastern United States, and that the impact of this bias on the magnitude of the error increases from the top to the bottom soil layer. Moreover, we found that the estimated background error is not sensitive to the choice of WRF microphysics scheme, cumulus parameterization, or land surface model. Overall, this study enhances our understanding of the space-time variability of the soil moisture background error and promises more accurate land-surface state estimates via variational data assimilation.
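The NMC method referenced above has a simple computational core: take pairs of forecasts of different lead times valid at the same time, and use the sample covariance of their differences as a proxy for background error covariance. A minimal sketch (function name and shapes are assumptions):

```python
import numpy as np

def nmc_background_error(fcst_long, fcst_short):
    """NMC-method estimate of the background error covariance: the
    sample covariance of differences between two forecasts of different
    lead times (e.g. 48 h and 24 h) valid at the same time, collected
    over many start dates.

    fcst_long, fcst_short : (n_times, n_state) forecast arrays
    """
    diff = fcst_long - fcst_short
    diff = diff - diff.mean(axis=0)           # remove the mean difference (bias)
    # Unbiased sample covariance over the forecast pairs
    return diff.T @ diff / (diff.shape[0] - 1)
```

In the study's setting the state vector would hold soil moisture at the Noah soil layers, and the covariance would be accumulated separately by season and region to expose the patterns described above.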
An estimate of asthma prevalence in Africa: a systematic analysis
Adeloye, Davies; Chan, Kit Yee; Rudan, Igor; Campbell, Harry
2013-01-01
Aim To estimate and compare asthma prevalence in Africa in 1990, 2000, and 2010 in order to provide information that will help inform the planning of the public health response to the disease. Methods We conducted a systematic search of Medline, EMBASE, and Global Health for studies on asthma published between 1990 and 2012. We included cross-sectional population based studies providing numerical estimates on the prevalence of asthma. We calculated weighted mean prevalence and applied an epidemiological model linking age with the prevalence of asthma. The UN population figures for Africa for 1990, 2000, and 2010 were used to estimate the cases of asthma, each for the respective year. Results Our search returned 790 studies. We retained 45 studies that met our selection criteria. In Africa in 1990, we estimated 34.1 million asthma cases (12.1%; 95% confidence interval [CI] 7.2-16.9) among children <15 years, 64.9 million (11.8%; 95% CI 7.9-15.8) among people aged <45 years, and 74.4 million (11.7%; 95% CI 8.2-15.3) in the total population. In 2000, we estimated 41.3 million cases (12.9%; 95% CI 8.7-17.0) among children <15 years, 82.4 million (12.5%; 95% CI 5.9-19.1) among people aged <45 years, and 94.8 million (12.0%; 95% CI 5.0-18.8) in the total population. This increased to 49.7 million (13.9%; 95% CI 9.6-18.3) among children <15 years, 102.9 million (13.8%; 95% CI 6.2-21.4) among people aged <45 years, and 119.3 million (12.8%; 95% CI 8.2-17.1) in the total population in 2010. There were no significant differences between asthma prevalence in studies which ascertained cases by written and video questionnaires. Crude prevalences of asthma were, however, consistently higher among urban than rural dwellers. Conclusion Our findings suggest an increasing prevalence of asthma in Africa over the past two decades. Due to the paucity of data, we believe that the true prevalence of asthma may still be under-estimated. 
There is a need for national governments in Africa to consider the implications of this increasing disease burden and to investigate the relative importance of underlying risk factors such as rising urbanization and population aging in their policy and health planning responses to this challenge. PMID:24382846
Explicit a posteriori error estimates for eigenvalue analysis of heterogeneous elastic structures.
Walsh, Timothy Francis; Reese, Garth M.; Hetmaniuk, Ulrich L.
2005-07-01
An a posteriori error estimator is developed for the eigenvalue analysis of three-dimensional heterogeneous elastic structures. It constitutes an extension of a well-known explicit estimator to heterogeneous structures. We prove that our estimates are independent of the variations in material properties and independent of the polynomial degree of finite elements. Finally, we study numerically the effectivity of this estimator on several model problems.
Research on Parameter Estimation Methods for Alpha Stable Noise in a Laser Gyroscope's Random Error.
Wang, Xueyun; Li, Kui; Gao, Pengyu; Meng, Suxia
2015-01-01
Alpha stable noise, determined by four parameters, has been found in the random error of a laser gyroscope. Accurate estimation of the four parameters is the key process for analyzing the properties of alpha stable noise. Three widely used estimation methods-quantile, empirical characteristic function (ECF) and logarithmic moment method-are analyzed in contrast with Monte Carlo simulation in this paper. The estimation accuracy and the application conditions of all methods, as well as the causes of poor estimation accuracy, are illustrated. Finally, the highest precision method, ECF, is applied to 27 groups of experimental data to estimate the parameters of alpha stable noise in a laser gyroscope's random error. The cumulative probability density curve of the experimental data fitted by an alpha stable distribution is better than that by a Gaussian distribution, which verifies the existence of alpha stable noise in a laser gyroscope's random error. PMID:26230698
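The empirical characteristic function (ECF) approach singled out above exploits the fact that, for a symmetric alpha-stable law, -log|phi(t)| = (sigma*t)**alpha, so the stability index follows from the ECF magnitude at two points. A minimal sketch of this idea (the evaluation points t1, t2 are an illustrative choice, and the full ECF method also estimates the other three parameters):

```python
import numpy as np

def ecf_alpha(x, t1=0.5, t2=1.0):
    """Estimate the stability index alpha of a symmetric alpha-stable
    sample from the empirical characteristic function at 0 < t1 < t2."""
    # Empirical characteristic function magnitudes at t1 and t2
    phi1 = np.abs(np.mean(np.exp(1j * t1 * x)))
    phi2 = np.abs(np.mean(np.exp(1j * t2 * x)))
    # -log|phi(t)| = (sigma*t)**alpha  =>  the ratio at t2 vs t1 is
    # (t2/t1)**alpha, so alpha falls out of a second logarithm.
    return np.log(np.log(phi2) / np.log(phi1)) / np.log(t2 / t1)
```

Gaussian noise is the alpha = 2 special case, and Cauchy noise the alpha = 1 case, so the estimator can be sanity-checked against both.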
A posteriori error estimation for hp -adaptivity for fourth-order equations
NASA Astrophysics Data System (ADS)
Moore, Peter K.; Rangelova, Marina
2010-04-01
A posteriori error estimates developed to drive hp-adaptivity for second-order reaction-diffusion equations are extended to fourth-order equations. A C^1 hierarchical finite element basis is constructed from Hermite-Lobatto polynomials. A priori estimates of the error in several norms for both the interpolant and finite element solution are derived. In the latter case this requires a generalization of the well-known Aubin-Nitsche technique to time-dependent fourth-order equations. We show that the finite element solution and corresponding Hermite-Lobatto interpolant are asymptotically equivalent. A posteriori error estimators based on this equivalence for solutions at two orders are presented. Both are shown to be asymptotically exact on grids of uniform order. These estimators can be used to control various adaptive strategies. Computational results for linear steady-state and time-dependent equations corroborate the theory and demonstrate the effectiveness of the estimators in adaptive settings.
NASA Astrophysics Data System (ADS)
Ammons, S. M.; Neichel, Benoit; Lu, Jessica; Gavel, Donald T.; Srinath, Srikar; McGurk, Rosalie; Rudy, Alex; Rockosi, Connie; Marois, Christian; Macintosh, Bruce; Savransky, Dmitry; Galicher, Raphael; Bendek, Eduardo; Guyon, Olivier; Marin, Eduardo; Garrel, Vincent; Sivo, Gaetano
2014-08-01
We measure the long-term systematic component of the astrometric error in the GeMS MCAO system as a function of field radius and Ks magnitude. The experiment uses two epochs of observations of NGC 1851 separated by one month. The systematic component is estimated for each of three field of view cases (15'' radius, 30'' radius, and full field) and each of three distortion correction schemes: 8 DOF/chip + local distortion correction (LDC), 8 DOF/chip with no LDC, and 4 DOF/chip with no LDC. For bright, unsaturated stars with 13 < Ks < 16, the systematic component is < 0.2, 0.3, and 0.4 mas, respectively, for the 15'' radius, 30'' radius, and full field cases, provided that an 8 DOF/chip distortion correction with LDC (for the full-field case) is used to correct distortions. An 8 DOF/chip distortion-correction model always outperforms a 4 DOF/chip model, at all field positions and magnitudes and for all field-of-view cases, indicating the presence of high-order distortion changes. Given the order of the models needed to correct these distortions (~8 DOF/chip or 32 degrees of freedom total), it is expected that at least 25 stars per square arcminute would be needed to keep systematic errors at less than 0.3 milliarcseconds for multi-year programs. We also estimate the short-term astrometric precision of the newly upgraded Shane AO system with undithered M92 observations. Using a 6-parameter linear transformation to register images, the system delivers ~0.3 mas astrometric error over short-term observations of 2-3 minutes.
NASA Astrophysics Data System (ADS)
Xue, Haile; Shen, Xueshun; Chou, Jifan
2015-11-01
An online systematic error correction is presented and examined as a technique to improve the accuracy of real-time numerical weather prediction, based on a dataset of model errors (MEs) over past intervals. Given the analyses, the ME in each interval (6 h) between two analyses can be obtained iteratively by introducing an unknown tendency term into the prediction equation, as shown in Part I of this two-paper series. In this part, after analyzing the 5-year (2001-2005) GRAPES-GFS (Global Forecast System of the Global and Regional Assimilation and Prediction System) error patterns and their evolution, a systematic model error correction is derived from the past MEs using a least-squares approach. To test the correction, we applied the approach in GRAPES-GFS for July 2009 and January 2010. The datasets for the initial condition and SST used in this study were based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results indicated that the systematically underestimated equator-to-pole geopotential gradient and westerly winds of GRAPES-GFS in the Northern Hemisphere were largely enhanced, and the biases of temperature and wind in the tropics were strongly reduced. The correction therefore yields a more skillful forecast, with lower mean bias and root-mean-square error and a higher anomaly correlation coefficient.
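The least-squares step has a particularly simple special case worth spelling out: if the correction is taken as a constant tendency field, the least-squares solution over the archive of past interval errors is just their time mean. This sketch illustrates only that special case, under assumed names and shapes, not the full GRAPES-GFS correction scheme:

```python
import numpy as np

def mean_error_correction(past_errors):
    """Least-squares constant correction from a history of 6-h model
    errors: the constant field c minimizing sum_k ||e_k - c||**2 is the
    time mean of the archived errors.

    past_errors : (n_intervals, n_state) archive of interval MEs
    """
    return np.asarray(past_errors).mean(axis=0)
```

During a real-time forecast the resulting field would be subtracted as an extra tendency in each 6-h interval, which is what makes the correction "online".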
NASA Astrophysics Data System (ADS)
Williams, Brandon Riley
Computational fluid dynamics (CFD) has become a widely used tool in research and engineering for the study of a wide variety of problems. However, confidence in CFD solutions is still dependent on comparisons with experimental data. In order for CFD to become a trusted resource, a quantitative measure of error must be provided for each generated solution. Although there are several sources of error, the effects of the resolution and quality of the computational grid are difficult to predict a priori. This grid-induced error is most often attenuated by performing a grid refinement study or using solution adaptive grid refinement. While these methods are effective, they can also be computationally expensive and even impractical for large, complex problems. This work presents a method for estimating the grid-induced error in CFD solutions of the Navier-Stokes and Euler equations using a single grid and solution or a series of increasingly finer grids and solutions. The method is based on the discrete error transport equation (DETE), which is derived directly from the discretized PDE and provides a value of the error at every cell in the computational grid. The DETE is developed for two-dimensional, laminar Navier-Stokes and Euler equations within a generalized unstructured finite volume scheme, such that an extension to three dimensions and turbulent flow would follow the same approach. The usefulness of the DETE depends on the accuracy with which the source term, the grid-induced residual, can be modeled. Three different models for the grid-induced residual were developed: the AME model, the PDE model, and the extrapolation model. The AME model consists of the leading terms of the remainder of a simplified modified equation. The PDE model creates a polynomial fit of the CFD solution and then uses the original PDE in differential form to calculate the residual. Both the AME and PDE are used with a single grid and solution. 
The extrapolation model uses a fine grid solution to calculate the grid-induced residual on the coarse grid and then extrapolates that residual back to the fine grid. The DETE and residual models were then evaluated for four flow problems: (1) steady flow past a circular cylinder; (2) steady, transonic flow past an airfoil; (3) unsteady flow of an isentropic vortex; (4) unsteady flow past a circular cylinder with vortex shedding. Results demonstrate the fidelity of the DETE with each residual model, as well as the usefulness of the DETE as a tool for predicting the grid-induced error in CFD solutions.
B-spline goal-oriented error estimators for geometrically nonlinear rods
NASA Astrophysics Data System (ADS)
Dedè, L.; Santos, H. A. F. A.
2012-01-01
We consider goal-oriented a posteriori error estimators for the evaluation of the errors on quantities of interest associated with the solution of geometrically nonlinear curved elastic rods. For the numerical solution of these nonlinear one-dimensional problems, we adopt a B-spline based Galerkin method, a particular case of the more general isogeometric analysis. We propose error estimators using higher order "enhanced" solutions, which are based on the concept of enrichment of the original B-spline basis by means of the "pure" k-refinement procedure typical of isogeometric analysis. We provide several numerical examples for linear and nonlinear output functionals, corresponding to the rotation, displacements and strain energy of the rod, and we compare the effectiveness of the proposed error estimators.
NASA Astrophysics Data System (ADS)
Demlow, Alan
2007-03-01
We prove local a posteriori error estimates for pointwise gradient errors in finite element methods for a second-order linear elliptic model problem. First we split the local gradient error into a computable local residual term and a weaker global norm of the finite element error (the ``pollution term''). Using a mesh-dependent weight, the residual term is bounded in a sharply localized fashion. In specific situations the pollution term may also be bounded by computable residual estimators. On nonconvex polygonal and polyhedral domains in two and three space dimensions, we may choose estimators for the pollution term which do not employ specific knowledge of corner singularities and which are valid on domains with cracks. The finite element mesh is only required to be simplicial and shape-regular, so that highly graded and unstructured meshes are allowed.
Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-01-01
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906
Doolan, P; Dias, M; Collins Fekete, C; Seco, J
2014-06-01
Purpose: The procedure for proton treatment planning involves the conversion of the patient's X-ray CT from Hounsfield units into relative stopping powers (RSP), using a stoichiometric calibration curve (Schneider 1996). In clinical practice a 3.5% margin is added to account for the range uncertainty introduced by this process and other errors. RSPs for real tissues are calculated using composition data and the Bethe-Bloch formula (ICRU 1993). The purpose of this work is to investigate the impact that systematic errors in the stoichiometric calibration have on the proton range. Methods: Seven tissue inserts of the Gammex 467 phantom were imaged using our CT scanner. Their known chemical compositions (Watanabe 1999) were then used to calculate the theoretical RSPs, using the same formula as would be used for human tissues in the stoichiometric procedure. The actual RSPs of these inserts were measured using a Bragg peak shift measurement in the proton beam at our institution. Results: The theoretical calculation of the RSP was lower than the measured RSP values, by a mean/max error of -1.5%/-3.6%. For all seven inserts the theoretical approach underestimated the RSP, with errors variable across the range of Hounsfield units. Systematic errors for lung (average of two inserts), adipose, and cortical bone were -3.0%, -2.1%, and -0.5%, respectively. Conclusion: There is a systematic underestimation caused by the theoretical calculation of RSP, a crucial step in the stoichiometric calibration procedure. As such, we propose that proton calibration curves should be based on measured RSPs. Investigations will be made to see if the same systematic errors exist for biological tissues. The impact of these differences on the range of proton beams, for phantoms and patient scenarios, will be investigated. This project was funded equally by the Engineering and Physical Sciences Research Council (UK) and Ion Beam Applications (Louvain-La-Neuve, Belgium)
NASA Astrophysics Data System (ADS)
Miller, B.; O'Shaughnessy, R.; Littenberg, T. B.; Farr, B.
2015-08-01
Reliable low-latency gravitational wave parameter estimation is essential to target limited electromagnetic follow-up facilities toward astrophysically interesting and electromagnetically relevant sources of gravitational waves. In this study, we examine the trade-off between speed and accuracy. Specifically, we estimate the astrophysical relevance of systematic errors in the posterior parameter distributions derived using a fast-but-approximate waveform model, SpinTaylorF2 (stf2), in parameter estimation with lalinference_mcmc. Though efficient, the stf2 approximation to compact binary inspiral employs approximate kinematics (e.g., a single spin) and an approximate waveform (e.g., frequency domain versus time domain). More broadly, using a large astrophysically motivated population of generic compact binary merger signals, we report on the effectualness and limitations of this single-spin approximation as a method to infer parameters of generic compact binary sources. For most low-mass compact binary sources, we find that the stf2 approximation estimates compact binary parameters with biases comparable to systematic uncertainties in the waveform. We illustrate by example the effect these systematic errors have on posterior probabilities most relevant to low-latency electromagnetic follow-up: whether the secondary has a mass consistent with a neutron star (NS); whether the masses, spins, and orbit are consistent with that neutron star's tidal disruption; and whether the binary's angular momentum axis is oriented along the line of sight.
NASA Astrophysics Data System (ADS)
Erdal, D.; Neuweiler, I.; Huisman, J. A.
2012-06-01
Estimates of effective parameters for unsaturated flow models are typically based on observations taken on length scales smaller than the modeling scale. This complicates parameter estimation for heterogeneous soil structures. In this paper we attempt to account for soil structure not present in the flow model by using so-called external error models, which correct for bias in the likelihood function of a parameter estimation algorithm. The performance of external error models are investigated using data from three virtual reality experiments and one real world experiment. All experiments are multistep outflow and inflow experiments in columns packed with two sand types with different structures. First, effective parameters for equivalent homogeneous models for the different columns were estimated using soil moisture measurements taken at a few locations. This resulted in parameters that had a low predictive power for the averaged states of the soil moisture if the measurements did not adequately capture a representative elementary volume of the heterogeneous soil column. Second, parameter estimation was performed using error models that attempted to correct for bias introduced by soil structure not taken into account in the first estimation. Three different error models that required different amounts of prior knowledge about the heterogeneous structure were considered. The results showed that the introduction of an error model can help to obtain effective parameters with more predictive power with respect to the average soil water content in the system. This was especially true when the dynamic behavior of the flow process was analyzed.
O'Brien, S.; Azmy, Y. Y.
2013-07-01
When calculating numerical solutions of the neutron transport equation it is important to have a measure of the accuracy of the solution. As the true solution is generally not known, a suitable estimation of the error must be made. The steady state transport equation possesses discretization errors in all its independent variables: angle, energy and space. In this work only spatial discretization errors are considered. An exact transport solution, in which the degree of regularity of the exact flux across the singular characteristic is controlled, is manufactured to determine the numerical solution's true discretization error. This solution is then projected onto a Legendre polynomial space in order to form an exact solution on the same basis space as the Discontinuous Galerkin Finite Element Method (DGFEM) numerical solution, enabling computation of the true error. Over a series of test problems the true error is compared to the error estimated by three estimators: Ragusa and Wang (RW), residual source (LER), and cell discontinuity (JD). The validity and accuracy of the considered estimators are primarily assessed by considering the effectivity index and the global L2 norm of the error. In general, RW excels at approximating the true error distribution but usually under-estimates its magnitude; the LER estimator emulates the true error distribution but frequently over-estimates the magnitude of the true error; the JD estimator poorly captures the true error distribution and generally under-estimates the error about singular characteristics but over-estimates it elsewhere. (authors)
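The effectivity index used above as an assessment metric is simply the ratio of the estimated to the true error in a chosen norm; a reliable estimator has an index near 1. A minimal sketch (a volume-weighted cell-wise L2 norm is assumed for illustration):

```python
import numpy as np

def effectivity_index(est_error, true_error, cell_volumes):
    """Global effectivity index: estimated-to-true error ratio in the
    cell-volume-weighted L2 norm. Values near 1 indicate a reliable
    estimator; < 1 under-estimation, > 1 over-estimation."""
    l2 = lambda e: np.sqrt(np.sum(cell_volumes * np.asarray(e) ** 2))
    return l2(est_error) / l2(true_error)
```

With this metric, the behaviors reported above read directly off the index: RW tends to give indices below 1, LER above 1, and JD an index whose sign of deviation depends on proximity to the singular characteristics.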
Measurement Error in Nonparametric Item Response Curve Estimation. Research Report. ETS RR-11-28
ERIC Educational Resources Information Center
Guo, Hongwen; Sinharay, Sandip
2011-01-01
Nonparametric, or kernel, estimation of the item response curve (IRC) is of theoretical and operational concern. The accuracy of this estimation, often used in item analysis in testing programs, is biased when the observed scores are used as the regressor, because the observed scores are contaminated by measurement error. In this study, we investigate…
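The kind of estimator at issue can be sketched as a Nadaraya-Watson kernel regression of item responses on scores; the data, bandwidth, and logistic curve below are invented for illustration, and using error-contaminated observed scores as the regressor is what introduces the bias the authors study:

```python
import numpy as np

def kernel_irc(scores, item_responses, grid, bandwidth=2.0):
    """Nadaraya-Watson estimate of P(item correct | score) on a grid.

    When `scores` are error-contaminated observed scores rather than
    true scores, the estimated curve is attenuated (biased) relative
    to the curve against true scores.
    """
    scores = np.asarray(scores, float)
    y = np.asarray(item_responses, float)
    curve = []
    for t in grid:
        w = np.exp(-0.5 * ((scores - t) / bandwidth) ** 2)  # Gaussian kernel
        curve.append(np.sum(w * y) / np.sum(w))
    return np.array(curve)

rng = np.random.default_rng(0)
true_score = rng.normal(20, 5, size=2000)
p_correct = 1 / (1 + np.exp(-(true_score - 20) / 3))  # hypothetical logistic IRC
resp = rng.binomial(1, p_correct)
observed = true_score + rng.normal(0, 3, size=2000)   # measurement error
curve = kernel_irc(observed, resp, grid=np.array([10.0, 20.0, 30.0]))
```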
Improved Margin of Error Estimates for Proportions in Business: An Educational Example
ERIC Educational Resources Information Center
Arzumanyan, George; Halcoussis, Dennis; Phillips, G. Michael
2015-01-01
This paper presents the Agresti & Coull "Adjusted Wald" method for computing confidence intervals and margins of error for common proportion estimates. The presented method is easily implementable by business students and practitioners and provides more accurate estimates of proportions particularly in extreme samples and small…
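The adjusted Wald interval the abstract describes can be sketched in a few lines; the helper name and the z = 1.96 default below are illustrative choices, not from the paper:

```python
def adjusted_wald(successes, n, z=1.96):
    """Agresti-Coull 'Adjusted Wald' confidence interval for a proportion.

    Adds z^2/2 pseudo-successes and z^2 pseudo-trials before applying the
    ordinary Wald formula, which improves coverage in extreme samples.
    """
    n_adj = n + z ** 2
    p_adj = (successes + z ** 2 / 2) / n_adj
    margin = z * (p_adj * (1 - p_adj) / n_adj) ** 0.5
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 0 successes in 20 trials -- the plain Wald interval collapses
# to the degenerate (0, 0), while the adjusted interval stays informative.
lo, hi = adjusted_wald(0, 20)
```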
NASA Technical Reports Server (NTRS)
Kirstetter, Pierre-Emmanuel; Hong, Y.; Gourley, J. J.; Chen, S.; Flamig, Z.; Zhang, J.; Howard, K.; Schwaller, M.; Petersen, W.; Amitai, E.
2011-01-01
Characterization of the error associated with satellite rainfall estimates is a necessary component of deterministic and probabilistic frameworks involving spaceborne passive and active microwave measurements, for applications ranging from water budget studies to forecasting natural hazards related to extreme rainfall events. We focus here on the error structure of NASA's Tropical Rainfall Measurement Mission (TRMM) Precipitation Radar (PR) quantitative precipitation estimation (QPE) at ground. The problem is addressed by comparison of PR QPEs with reference values derived from ground-based measurements using the NOAA/NSSL ground radar-based National Mosaic and QPE system (NMQ/Q2). A preliminary investigation of this subject has been carried out at the PR estimation scale (instantaneous and 5 km) using a three-month data sample in the southern part of the US. The primary contribution of this study is the presentation of the detailed steps required to derive a trustworthy reference rainfall dataset from Q2 at the PR pixel resolution. It relies on a bias correction and a radar quality index, both of which provide a basis to filter out the less trustworthy Q2 values. Several aspects of PR errors are revealed and quantified, including sensitivity to the processing steps with the reference rainfall, comparisons of rainfall detectability and rainfall rate distributions, spatial representativeness of error, and separation of systematic biases and random errors. The methodology and framework developed herein apply more generally to rainfall rate estimates from other sensors onboard low-earth orbiting satellites, such as microwave imagers and dual-wavelength radars such as those of the Global Precipitation Measurement (GPM) mission.
Creel, Scott; Spong, Goran; Sands, Jennifer L; Rotella, Jay; Zeigle, Janet; Joe, Lawrence; Murphy, Kerry M; Smith, Douglas
2003-07-01
Determining population sizes can be difficult, but is essential for conservation. By counting distinct microsatellite genotypes, DNA from noninvasive samples (hair, faeces) allows estimation of population size. Problems arise because genotypes from noninvasive samples are error-prone, but genotyping errors can be reduced by multiple polymerase chain reaction (PCR). For faecal genotypes from wolves in Yellowstone National Park, error rates varied substantially among samples, often above the 'worst-case threshold' suggested by simulation. Consequently, a substantial proportion of multilocus genotypes held one or more errors, despite multiple PCR. These genotyping errors created several genotypes per individual and caused overestimation (up to 5.5-fold) of population size. We propose a 'matching approach' to eliminate this overestimation bias. PMID:12803649
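A toy simulation (all rates and the merge threshold are invented, and this is only a crude stand-in for the authors' matching approach) shows how per-locus genotyping errors inflate the count of distinct multilocus genotypes, and how merging near-identical genotypes removes most of that bias:

```python
import numpy as np

rng = np.random.default_rng(42)
n_wolves, n_loci, err = 30, 8, 0.05          # per-locus error rate (assumed)

true_geno = rng.integers(0, 6, size=(n_wolves, n_loci))  # allele codes
samples = np.repeat(true_geno, 3, axis=0)                # 3 faecal samples each
errors = rng.random(samples.shape) < err
samples = np.where(errors, (samples + 1) % 6, samples)   # miscalled alleles

# Naive count: every distinct multilocus genotype is treated as an individual.
naive = len({tuple(g) for g in samples})

# Crude 'matching' rule: genotypes differing at <= 1 locus are assumed to
# be the same individual (single-locus mismatches blamed on genotyping error).
uniq = []
for g in samples:
    if not any(np.sum(g != u) <= 1 for u in uniq):
        uniq.append(g)
matched = len(uniq)
```

With a 5% per-locus error rate, roughly a third of the 8-locus samples carry at least one error, so the naive count substantially exceeds the 30 true individuals, while the matched count falls back near it.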
Effect of geocoding errors on traffic-related air pollutant exposure and concentration estimates.
Ganguly, Rajiv; Batterman, Stuart; Isakov, Vlad; Snyder, Michelle; Breen, Michael; Brakefield-Caldwell, Wilma
2015-09-01
Exposure to traffic-related air pollutants is highest very near roads, and thus exposure estimates are sensitive to positional errors. This study evaluates positional and PM2.5 concentration errors that result from the use of automated geocoding methods and from linearized approximations of roads in link-based emission inventories. Two automated geocoders (Bing Map and ArcGIS) along with handheld GPS instruments were used to geocode 160 home locations of children enrolled in an air pollution study investigating effects of traffic-related pollutants in Detroit, Michigan. The average and maximum positional errors using the automated geocoders were 35 and 196 m, respectively. Comparing road edge and road centerline, differences in house-to-highway distances averaged 23 m and reached 82 m. These differences were attributable to road curvature, road width and the presence of ramps, factors that should be considered in proximity measures used either directly as an exposure metric or as inputs to dispersion or other models. Effects of positional errors for the 160 homes on PM2.5 concentrations resulting from traffic-related emissions were predicted using a detailed road network and the RLINE dispersion model. Concentration errors averaged only 9%, but maximum errors reached 54% for annual averages and 87% for maximum 24-h averages. Whereas most geocoding errors appear modest in magnitude, 5% to 20% of residences are expected to have positional errors exceeding 100 m. Such errors can substantially alter exposure estimates near roads because of the dramatic spatial gradients of traffic-related pollutant concentrations. To ensure the accuracy of exposure estimates for traffic-related air pollutants, especially near roads, confirmation of geocoordinates is recommended. PMID:25670023
Burr, T; Croft, S; Krieger, T; Martin, K; Norman, C; Walsh, S
2016-02-01
One example of top-down uncertainty quantification (UQ) involves comparing two or more measurements on each of multiple items. One example of bottom-up UQ expresses a measurement result as a function of one or more input variables that have associated errors, such as a measured count rate, which individually (or collectively) can be evaluated for impact on the uncertainty in the resulting measured value. In practice, it is often found that top-down UQ exhibits larger error variances than bottom-up UQ, because some error sources are present in the fielded assay methods used in top-down UQ that are not present (or not recognized) in the assay studies used in bottom-up UQ. One would like better consistency between the two approaches in order to claim understanding of the measurement process. The purpose of this paper is to refine bottom-up uncertainty estimation by using calibration information so that if there are no unknown error sources, the refined bottom-up uncertainty estimate will agree with the top-down uncertainty estimate to within a specified tolerance. Then, in practice, if the top-down uncertainty estimate is larger than the refined bottom-up uncertainty estimate by more than the specified tolerance, there must be omitted sources of error beyond those predicted from calibration uncertainty. The paper develops a refined bottom-up uncertainty approach for four cases of simple linear calibration: (1) inverse regression with negligible error in predictors, (2) inverse regression with non-negligible error in predictors, (3) classical regression followed by inversion with negligible error in predictors, and (4) classical regression followed by inversion with non-negligible errors in predictors. Our illustrations are of general interest, but are drawn from our experience with nuclear material assay by non-destructive assay. The main example we use is gamma spectroscopy that applies the enrichment meter principle. 
Previous papers that ignore error in predictors have shown a tendency for inverse regression to have lower error variance than classical regression followed by inversion. This paper supports that tendency both with and without error in predictors. Also, the paper shows that calibration parameter estimates using error in predictor methods perform worse than without using error in predictor methods in the case of inverse regression, but perform better than without using error in predictor methods in the case of classical regression followed by inversion. Both inverse and classical regression involve the ratio of dependent random variables; therefore, the assumed error distribution(s) will matter in parameter estimation and in uncertainty calculations. Mainly for that reason, calibration using a single predictor is distinct from simple regression, and it has not been thoroughly treated in the literature, nor in the ISO Guide to the Expression of Uncertainty in Measurements (GUM). Our refined approach is based on simulation, because we illustrate that analytical approximations are not adequate when there are, for example, 10 or fewer calibration measurements, which is common in calibration applications, each consisting of measured responses from known quantities. PMID:26698221
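The reported tendency for inverse regression to achieve lower error variance than classical regression followed by inversion can be illustrated by simulation for the negligible-error-in-predictors cases; the calibration design and noise level below are invented, and the unknown item is placed at the center of the calibration range, where inverse regression's shrinkage helps most:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_cal=8, n_rep=2000, sigma=0.5):
    """Compare inverse regression with classical regression followed by
    inversion for simple linear calibration y = a + b*x + noise
    (a = 0, b = 1 assumed; errors in predictors ignored)."""
    err_inv, err_cls = [], []
    x_cal = np.linspace(1.0, 5.0, n_cal)   # known calibration standards
    x_new = 3.0                             # true value of the unknown item
    for _ in range(n_rep):
        y_cal = x_cal + rng.normal(0, sigma, n_cal)
        y_new = x_new + rng.normal(0, sigma)
        # Inverse regression: regress x on y, predict x directly.
        b_i, a_i = np.polyfit(y_cal, x_cal, 1)
        err_inv.append(b_i * y_new + a_i - x_new)
        # Classical regression: regress y on x, then invert the fitted line.
        b_c, a_c = np.polyfit(x_cal, y_cal, 1)
        err_cls.append((y_new - a_c) / b_c - x_new)
    return np.var(err_inv), np.var(err_cls)

v_inv, v_cls = simulate()
```

At the center of the calibration range the inverse-regression prediction deviation equals the classical one multiplied by the squared sample correlation, so its error variance is smaller replicate by replicate.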
NASA Astrophysics Data System (ADS)
Cha, Dong-Hyun; Lee, Dong-Kyou
2009-07-01
In this study, the systematic errors in a regional climate simulation of the 28-year summer monsoon over East Asia and the western North Pacific (WNP), and the impact of the spectral nudging technique (SNT) on the reduction of those errors, are investigated. The experiment in which the SNT is not applied (the CTL run) has large systematic errors in the seasonal mean climatology, such as overestimated precipitation, a weakened subtropical high, and enhanced low-level southwesterlies over the subtropical WNP, while the experiment using the SNT (the SP run) yields considerably smaller systematic errors. In the CTL run, the systematic error of simulated precipitation over the ocean increases significantly after mid-June, since the CTL run cannot reproduce the principal intraseasonal variation of summer monsoon precipitation. The SP run can appropriately capture the spatial distribution as well as the temporal variation of the principal empirical orthogonal function mode, and therefore the systematic error over the ocean does not increase after mid-June. The systematic error of simulated precipitation over the subtropical WNP in the CTL run results from an unreasonable positive feedback between precipitation and surface latent heat flux induced by the warm sea surface temperature anomaly. Since the SNT plays a role in decreasing this positive feedback by improving the monsoon circulations, the SP run can considerably reduce the systematic errors of simulated precipitation as well as atmospheric fields over the subtropical WNP region.
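As a one-dimensional toy (not the regional climate model configuration used in the study), spectral nudging relaxes only the large-scale, low-wavenumber part of the model state toward the driving field, leaving the small scales free to develop:

```python
import numpy as np

def spectral_nudge(state, driving, k_max=3, alpha=0.1):
    """Relax only wavenumbers 0..k_max of `state` toward `driving`
    (toy 1-D periodic sketch; k_max and alpha are arbitrary choices)."""
    diff_hat = np.fft.rfft(driving - state)
    mask = np.zeros_like(diff_hat)
    mask[:k_max + 1] = 1.0                       # keep large scales only
    correction = np.fft.irfft(diff_hat * mask, n=state.size)
    return state + alpha * correction

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
driving = np.cos(x)                              # large-scale driving field
state = 0.5 * np.cos(x) + 0.3 * np.sin(20 * x)   # drifted model state
for _ in range(200):
    state = spectral_nudge(state, driving)
# The wavenumber-1 amplitude is pulled to the driving field's value,
# while the wavenumber-20 (small-scale) component is left untouched.
```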
Maier, D; Marth, M; Honerkamp, J; Weese, J
1999-07-20
An important step in analyzing data from dynamic light scattering is estimating the relaxation time spectrum from the correlation time function. This estimation is frequently done by regularization methods. To obtain good results with this step, the statistical errors of the correlation time function must be taken into account [J. Phys. A 6, 1897 (1973)]. So far error models assuming independent statistical errors have been used in the estimation. We show that results for the relaxation time spectrum are better if correlation between statistical errors is taken into account. There are two possible ways to obtain the error sizes and their correlations. On the one hand, they can be calculated from the correlation time function by use of a model derived by Schätzel. On the other hand, they can be computed directly from the time series of the scattered light. Simulations demonstrate that the best results are obtained with the latter method. This method requires, however, storing the time series of the scattered light during the experiment. Therefore a modified experimental setup is needed. Nevertheless the simulations also show improvement in the resulting relaxation time spectra if the error model of Schätzel is used. This improvement is confirmed when a lattice with a bimodal sphere size distribution is applied to experimental data. PMID:18323954
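Taking error correlations into account in a regularized estimation amounts to weighting the residual by the full error covariance matrix rather than by independent per-point variances; a minimal Tikhonov-style sketch on an invented two-relaxation-time problem (not the authors' code or data):

```python
import numpy as np

def regularized_spectrum(K, g, cov, lam=1e-2):
    """Tikhonov estimate of a relaxation spectrum s from data g = K s + noise,
    weighting the residual by the full error covariance `cov` (generalized
    least squares) instead of assuming independent errors."""
    W = np.linalg.inv(cov)                       # inverse error covariance
    A = K.T @ W @ K + lam * np.eye(K.shape[1])
    return np.linalg.solve(A, K.T @ W @ g)

# Toy problem: correlation-function data g(t) = sum_j s_j * exp(-t / tau_j).
t = np.linspace(0.1, 5.0, 40)
tau = np.array([0.5, 2.0])                       # assumed relaxation times
K = np.exp(-t[:, None] / tau[None, :])
s_true = np.array([1.0, 0.5])
g = K @ s_true                                   # noise-free for illustration
cov = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]))  # correlated errors
s_est = regularized_spectrum(K, g, cov)
```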
Multilevel Error Estimation and Adaptive h-Refinement for Cartesian Meshes with Embedded Boundaries
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Kwak, Dochan (Technical Monitor)
2002-01-01
This paper presents the development of a mesh adaptation module for a multilevel Cartesian solver. While the module allows mesh refinement to be driven by a variety of different refinement parameters, a central feature in its design is the incorporation of a multilevel error estimator based upon direct estimates of the local truncation error using tau-extrapolation. This error indicator exploits the fact that in regions of uniform Cartesian mesh, the spatial operator is exactly the same on the fine and coarse grids, and local truncation error estimates can be constructed by evaluating the residual on the coarse grid of the restricted solution from the fine grid. A new strategy for adaptive h-refinement is also developed to prevent errors in smooth regions of the flow from being masked by shocks and other discontinuous features. For certain classes of error histograms, this strategy is optimal for achieving equidistribution of the refinement parameters on hierarchical meshes, and therefore ensures grid-converged solutions will be achieved for appropriately chosen refinement parameters. The robustness and accuracy of the adaptation module are demonstrated using both simple model problems and complex three-dimensional examples using meshes with from 10^6 to 10^7 cells.
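The tau-extrapolation idea, evaluating the coarse-grid operator on the restricted fine-grid solution and comparing it with the restricted fine-grid operator image, can be sketched in one dimension with a Poisson-type operator standing in for the flow solver (the operator and the smooth test function are illustrative assumptions):

```python
import numpy as np

def apply_laplacian(u, h):
    """Second-order discrete operator A_h u = (2u_i - u_{i-1} - u_{i+1}) / h^2
    at interior points (boundary values are held in u[0] and u[-1])."""
    return -(u[:-2] - 2 * u[1:-1] + u[2:]) / h ** 2

n = 64                                   # fine grid: n intervals
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
u = np.sin(np.pi * x)                    # smooth test function

restrict = lambda v: v[::2]              # injection onto the coarse grid

# Tau-extrapolation: coarse operator applied to the restricted fine-grid
# function, minus the restricted image of the fine-grid operator.
fine_image = np.r_[0.0, apply_laplacian(u, h), 0.0]   # padded to all nodes
tau = apply_laplacian(restrict(u), 2 * h) - restrict(fine_image)[1:-1]
# For smooth u this behaves like (h^2 / 4) * u'''' at coarse interior
# nodes -- an O(h^2) local truncation error indicator.
```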
Use of an OSSE to Evaluate Background Error Covariances Estimated by the 'NMC Method'
NASA Technical Reports Server (NTRS)
Errico, Ronald M.; Prive, Nikki C.; Gu, Wei
2014-01-01
The NMC method has proven utility for prescribing approximate background-error covariances required by variational data assimilation systems. Here, untuned NMC method estimates are compared with explicitly determined error covariances produced within an OSSE context by exploiting availability of the true simulated states. Such a comparison provides insights into what kind of rescaling is required to render the NMC method estimates usable. It is shown that rescaling of variances and directional correlation lengths depends greatly on both pressure and latitude. In particular, some scaling coefficients appropriate in the Tropics are the reciprocal of those in the Extratropics. Also, the degree of dynamic balance is grossly overestimated by the NMC method. These results agree with previous examinations of the NMC method which used ensembles as an alternative for estimating background-error statistics.
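A minimal sketch of the NMC method, assuming pairs of 48-h and 24-h forecasts valid at the same times are available as plain state vectors (the data and the single scale factor below are invented; the abstract's point is precisely that one constant scale is inadequate, since the required rescaling varies with pressure and latitude):

```python
import numpy as np

def nmc_background_covariance(f48, f24, scale=1.0):
    """Estimate a background-error covariance matrix from pairs of 48-h and
    24-h forecasts valid at the same times (the 'NMC method').

    f48, f24: arrays of shape (n_times, n_state); `scale` is a crude
    stand-in for the tuning the abstract discusses.
    """
    d = f48 - f24                        # forecast-difference samples
    d = d - d.mean(axis=0)               # remove the mean difference
    return scale * (d.T @ d) / (d.shape[0] - 1)

# Synthetic illustration: forecast differences drawn with a known covariance.
rng = np.random.default_rng(3)
n_times, n_state = 500, 4
truth_cov = np.array([[2.0, 0.8, 0.0, 0.0],
                      [0.8, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.5, 0.1],
                      [0.0, 0.0, 0.1, 0.3]])
L = np.linalg.cholesky(truth_cov)
f24 = rng.normal(size=(n_times, n_state))
f48 = f24 + rng.normal(size=(n_times, n_state)) @ L.T
B = nmc_background_covariance(f48, f24)
```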
Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J; Boerwinkle, Eric
2010-01-01
It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n, and a higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate theta = 4N(e)mu, population exponential growth rate R, and error rate epsilon, simultaneously. Using simulation, we show the combined effects of the parameters, theta, n, epsilon, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of theta with other theta estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140
NASA Astrophysics Data System (ADS)
Yamamoto, Nobito; Genma, Kenta
2007-02-01
Numerical verification methods, the so-called Nakao's methods, for proving existence or uniqueness of solutions to PDEs have been developed by Nakao and his group, including the authors. They are based on error estimation of approximate solutions which are mainly computed by FEM. A standard approach to FEM error estimation is to bound the projection errors by elementwise interpolation errors. The error estimates involve constants that depend on the mesh size parameter h, and explicit values of these constants are necessary in order to use Nakao's methods. However, little research has addressed the computation of explicit values for these constants, so we had to develop the computation ourselves, in particular with guaranteed accuracy. Note that the methods of computation depend on the dimension, the degree of the bases, the shape of the domain, etc. The present paper shows how we have developed the methods to calculate the constants and describes new results for nonconvex domains.
Stenroos, Matti; Hauk, Olaf
2013-11-01
The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG+EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG+EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG+EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
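The minimum-norm estimation referred to can be sketched as a Tikhonov-regularized linear inverse operator; a random matrix stands in for the BEM lead field here, and the SNR-based regularization choice is a common convention rather than necessarily the authors' exact setup:

```python
import numpy as np

def minimum_norm_filter(L, snr=3.0):
    """Tikhonov-regularized minimum-norm (MN) inverse operator for a lead
    field L of shape (n_sensors, n_sources); the regularization parameter
    is set from an assumed SNR, a common (illustrative) convention."""
    n_sensors = L.shape[0]
    lam2 = np.trace(L @ L.T) / (n_sensors * snr ** 2)
    return L.T @ np.linalg.inv(L @ L.T + lam2 * np.eye(n_sensors))

rng = np.random.default_rng(7)
L = rng.normal(size=(32, 100))       # random stand-in for a BEM lead field
W = minimum_norm_filter(L)           # MN spatial filter vectors (rows)

# Resolution analysis: column j of R = W @ L is the point-spread function
# of source j; its peak location and spread underlie metrics like those
# used in the study.
R = W @ L
psf_peak = int(np.argmax(np.abs(R[:, 40])))  # where the PSF of source 40 peaks
```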
ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve
Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk
2014-01-01
In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
Christodoulou, Christos George (University of New Mexico, Albuquerque, NM); Abdallah, Chaouki T. (University of New Mexico, Albuquerque, NM); Rohwer, Judd Andrew
2003-02-01
The paper presents a multiclass, multilabel implementation of least squares support vector machines (LS-SVM) for direction of arrival (DOA) estimation in a CDMA system. For any estimation or classification system, the algorithm's capabilities and performance must be evaluated. Specifically, for classification algorithms, a high confidence level must exist along with a technique to tag misclassifications automatically. The presented learning algorithm includes error control and validation steps for generating statistics on the multiclass evaluation path and the signal subspace dimension. The error statistics provide a confidence level for the classification accuracy.
NASA Technical Reports Server (NTRS)
Consiglio, Maria C.; Hoadley, Sherwood T.; Allen, B. Danette
2009-01-01
Wind prediction errors are known to affect the performance of automated air traffic management tools that rely on aircraft trajectory predictions. In particular, automated separation assurance tools, planned as part of the NextGen concept of operations, must be designed to account and compensate for the impact of wind prediction errors and other system uncertainties. In this paper we describe a high-fidelity batch simulation study designed to estimate the separation distance required to compensate for the effects of wind-prediction errors at increasing traffic densities on an airborne separation assistance system. These experimental runs are part of the Safety Performance of Airborne Separation experiment suite that examines the safety implications of prediction errors and system uncertainties on airborne separation assurance systems. In this experiment, wind-prediction errors were varied between zero and forty knots while traffic density was increased to several times current traffic levels. In order to accurately measure the full unmitigated impact of wind-prediction errors, no uncertainty buffers were added to the separation minima. The goal of the study was to measure the impact of wind-prediction errors in order to estimate the additional separation buffers necessary to preserve separation and to provide a baseline for future analyses. Buffer estimations from this study will be used and verified in upcoming safety evaluation experiments under similar simulation conditions. Results suggest that the strategic airborne separation functions exercised in this experiment can sustain wind prediction errors up to 40 kts at current-day air traffic density with no additional separation distance buffer, and at eight times the current day with no more than a 60% increase in separation distance buffer.
Assumption-free estimation of the genetic contribution to refractive error across childhood
St Pourcain, Beate; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Williams, Cathy
2015-01-01
Purpose Studies in relatives have generally yielded high heritability estimates for refractive error: twins 75–90%, families 15–70%. However, because related individuals often share a common environment, these estimates are inflated (via misallocation of unique/common environment variance). We calculated a lower-bound heritability estimate for refractive error free from such bias. Methods Between the ages 7 and 15 years, participants in the Avon Longitudinal Study of Parents and Children (ALSPAC) underwent non-cycloplegic autorefraction at regular research clinics. At each age, an estimate of the variance in refractive error explained by single nucleotide polymorphism (SNP) genetic variants was calculated using genome-wide complex trait analysis (GCTA) using high-density genome-wide SNP genotype information (minimum N at each age=3,404). Results The variance in refractive error explained by the SNPs ("SNP heritability") was stable over childhood: Across ages 7–15 years, SNP heritability averaged 0.28 (SE=0.08, p<0.001). The genetic correlation for refractive error between visits varied from 0.77 to 1.00 (all p<0.001), demonstrating that a common set of SNPs was responsible for the genetic contribution to refractive error across this period of childhood. Simulations suggested lack of cycloplegia during autorefraction led to a small underestimation of SNP heritability (adjusted SNP heritability=0.35; SE=0.09). To put these results in context, the variance in refractive error explained (or predicted) by the time participants spent outdoors was <0.005 and by the time spent reading was <0.01, based on a parental questionnaire completed when the child was aged 8–9 years old. Conclusions Genetic variation captured by common SNPs explained approximately 35% of the variation in refractive error between unrelated subjects.
This value sets an upper limit for predicting refractive error using existing SNP genotyping arrays, although higher-density genotyping in larger samples and inclusion of interaction effects is expected to raise this figure toward twin- and family-based heritability estimates. The same SNPs influenced refractive error across much of childhood. Notwithstanding the strong evidence of association between time outdoors and myopia, and time reading and myopia, less than 1% of the variance in myopia at age 15 was explained by crude measures of these two risk factors, indicating that their effects may be limited, at least when averaged over the whole population. PMID:26019481
2010-01-01
Background Estimates of divergence dates between species improve our understanding of processes ranging from nucleotide substitution to speciation. Such estimates are frequently based on molecular genetic differences between species; therefore, they rely on accurate estimates of the number of such differences (i.e. substitutions per site, measured as branch length on phylogenies). We used simulations to determine the effects of dataset size, branch length heterogeneity, branch depth, and analytical framework on branch length estimation across a range of branch lengths. We then reanalyzed an empirical dataset for plethodontid salamanders to determine how inaccurate branch length estimation can affect estimates of divergence dates. Results The accuracy of branch length estimation varied with branch length, dataset size (both number of taxa and sites), branch length heterogeneity, branch depth, dataset complexity, and analytical framework. For simple phylogenies analyzed in a Bayesian framework, branches were increasingly underestimated as branch length increased; in a maximum likelihood framework, longer branch lengths were somewhat overestimated. Longer datasets improved estimates in both frameworks; however, when the number of taxa was increased, estimation accuracy for deeper branches was less than for tip branches. Increasing the complexity of the dataset produced more misestimated branches in a Bayesian framework; however, in an ML framework, more branches were estimated more accurately. Using ML branch length estimates to re-estimate plethodontid salamander divergence dates generally resulted in an increase in the estimated age of older nodes and a decrease in the estimated age of younger nodes. Conclusions Branch lengths are misestimated in both statistical frameworks for simulations of simple datasets. 
However, for complex datasets, length estimates are quite accurate in ML (even for short datasets), whereas few branches are estimated accurately in a Bayesian framework. Our reanalysis of empirical data demonstrates the magnitude of effects of Bayesian branch length misestimation on divergence date estimates. Because the length of branches for empirical datasets can be estimated most reliably in an ML framework when branches are <1 substitution/site and datasets are ≥1 kb, we suggest that divergence date estimates using datasets, branch lengths, and/or analytical techniques that fall outside of these parameters should be interpreted with caution. PMID:20064267
Estimation of the minimum mRNA splicing error rate in vertebrates.
Skandalis, A
2016-01-01
The majority of protein coding genes in vertebrates contain several introns that are removed by the mRNA splicing machinery. Errors during splicing can generate aberrant transcripts and degrade the transmission of genetic information thus contributing to genomic instability and disease. However, estimating the error rate of constitutive splicing is complicated by the process of alternative splicing which can generate multiple alternative transcripts per locus and is particularly active in humans. In order to estimate the error frequency of constitutive mRNA splicing and avoid bias by alternative splicing we have characterized the frequency of splice variants at three loci, HPRT, POLB, and TRPV1 in multiple tissues of six vertebrate species. Our analysis revealed that the frequency of splice variants varied widely among loci, tissues, and species. However, the lowest observed frequency is quite constant among loci and approximately 0.1% aberrant transcripts per intron. Arguably this reflects the "irreducible" error rate of splicing, which consists primarily of the combination of replication errors by RNA polymerase II in splice consensus sequences and spliceosome errors in correctly pairing exons. PMID:26811995
Estimation and testing of higher-order spatial autoregressive panel data error component models
NASA Astrophysics Data System (ADS)
Badinger, Harald; Egger, Peter
2013-10-01
This paper develops an estimator for higher-order spatial autoregressive panel data error component models with spatial autoregressive disturbances, SARAR( R, S). We derive the moment conditions and optimal weighting matrix without distributional assumptions for a generalized moments (GM) estimation procedure of the spatial autoregressive parameters of the disturbance process and define a generalized two-stage least squares estimator for the regression parameters of the model. We prove consistency of the proposed estimators, derive their joint asymptotic distribution, and provide Monte Carlo evidence on their small sample performance.
Santamaria, L.; Ohme, F.; Dorband, N.; Moesta, P.; Robinson, E. L.; Krishnan, B.; Ajith, P.; Bruegmann, B.; Hannam, M.; Husa, S.; Pollney, D.; Reisswig, C.; Seiler, J.
2010-09-15
We present a new phenomenological gravitational waveform model for the inspiral and coalescence of nonprecessing spinning black hole binaries. Our approach is based on a frequency-domain matching of post-Newtonian inspiral waveforms with numerical relativity based binary black hole coalescence waveforms. We quantify the various possible sources of systematic errors that arise in matching post-Newtonian and numerical relativity waveforms, and we use a matching criteria based on minimizing these errors; we find that the dominant source of errors are those in the post-Newtonian waveforms near the merger. An analytical formula for the dominant mode of the gravitational radiation of nonprecessing black hole binaries is presented that captures the phenomenology of the hybrid waveforms. Its implementation in the current searches for gravitational waves should allow cross-checks of other inspiral-merger-ringdown waveform families and improve the reach of gravitational-wave searches.
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. C.; Alden, C.; White, J. W. C.
2014-10-01
Over the last 5 decades, monitoring systems have been developed to detect changes in the accumulation of C in the atmosphere, ocean, and land; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate error and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ error of the atmospheric growth rate has decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s, leading to a ~20% reduction in the overall uncertainty of net global C uptake by the biosphere. While fossil fuel emissions have increased by a factor of 4 over the last 5 decades, 2σ errors in fossil fuel emissions due to national reporting errors and differences in energy reporting practices have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s. At the same time, land use emissions have declined slightly over the last 5 decades, but their relative errors remain high. Notably, errors associated with fossil fuel emissions have come to dominate uncertainty in the global C budget and are now comparable to the total emissions from land use; efforts to reduce errors in fossil fuel emissions are therefore necessary. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that C uptake has increased and 97% confident that C uptake by the terrestrial biosphere has increased over the last 5 decades. Although the persistence of future C sinks remains unknown and some ecosystem services may be compromised by this continued C uptake (e.g., ocean acidification), it is clear that arguably the greatest ecosystem service currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere.
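The budget-residual logic above combines independent error components in quadrature to obtain the overall uptake uncertainty. A minimal sketch of that combination, using 2σ components loosely based on the decadal values quoted in the abstract (the land-use term is an assumed placeholder, not the authors' exact input):

```python
import math

def combined_sigma(components):
    """Combine independent error components (same sigma level) in quadrature."""
    return math.sqrt(sum(c * c for c in components))

# Illustrative 2-sigma components (Pg C/yr) for the 2000s.
err_2000s = combined_sigma([
    0.3,  # atmospheric growth rate
    1.0,  # fossil fuel emissions
    0.5,  # land-use change emissions (assumed)
])
```

The quadrature rule is why the fossil-fuel term now dominates: the largest component controls the combined error almost entirely.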
Estimation of Smoothing Error in SBUV Profile and Total Ozone Retrieval
NASA Technical Reports Server (NTRS)
Kramarova, N. A.; Bhartia, P. K.; Frith, S. M.; Fisher, B. L.; McPeters, R. D.; Taylor, S.; Labow, G. J.
2011-01-01
Data from the Nimbus-4 and Nimbus-7 Solar Backscatter Ultraviolet (SBUV) instruments and seven of the NOAA series of SBUV/2 instruments, spanning 41 years, are being reprocessed using the V8.6 algorithm. The data are scheduled to be released by the end of August 2011. An important focus of the new algorithm is to estimate various sources of errors in the SBUV profile and total ozone retrievals. We discuss here the smoothing errors, which describe the components of profile variability that the SBUV observing system cannot measure. The SBUV(/2) instruments have a vertical resolution of 5 km in the middle stratosphere, decreasing to 8-10 km below the ozone peak and above 0.5 hPa. To estimate the smoothing effect of the SBUV algorithm, the actual statistics of the fine vertical structure of ozone profiles must be known. The covariance matrix of an ensemble of ozone profiles measured with high vertical resolution is a formal representation of the actual ozone variability. We merged MLS (version 3) and ozonesonde profiles to calculate the covariance matrix, which, in the general case of single-profile retrieval, may be a function of latitude and month. Using the averaging kernels of the SBUV(/2) measurements and the calculated total covariance matrix, one can estimate the smoothing errors for the SBUV ozone profiles. A method to estimate the smoothing effect of the SBUV algorithm is described, and the covariance matrices and averaging kernels are provided along with the SBUV(/2) ozone profiles. The magnitude of the smoothing error varies with altitude, latitude, season, and solar zenith angle. The analysis of the smoothing errors, based on the SBUV(/2) monthly zonal mean time series, shows that the largest smoothing errors occur in the troposphere, where they may be as large as 15-20%, and decrease rapidly with altitude. In the stratosphere above 40 hPa the smoothing errors are less than 5%, and between 10 and 1 hPa they are on the order of 1%.
We validate our estimated smoothing errors by comparing the SBUV ozone profiles with other ozone profiling sensors.
Estimation of errors in diffraction data measured by CCD area detectors
Waterman, David; Evans, Gwyndaf
2010-01-01
Current methods for diffraction-spot integration from CCD area detectors typically underestimate the errors in the measured intensities. In an attempt to understand fully and identify correctly the sources of all contributions to these errors, a simulation of a CCD-based area-detector module has been produced to address the problem of correct handling of data from such detectors. Using this simulation, it has been shown how, and by how much, measurement errors are underestimated. A model of the detector statistics is presented and an adapted summation integration routine that takes this into account is shown to result in more realistic error estimates. In addition, the effect of correlations between pixels on two-dimensional profile fitting is demonstrated and the problems surrounding improvements to profile-fitting algorithms are discussed. In practice, this requires knowledge of the expected correlation between pixels in the image.
Accounting for uncertainty in systematic bias in exposure estimates used in relative risk regression
Gilbert, E.S.
1995-12-01
In many epidemiologic studies addressing exposure-response relationships, sources of error that lead to systematic bias in exposure measurements are known to be present, but there is uncertainty in the magnitude and nature of the bias. Two approaches that allow this uncertainty to be reflected in confidence limits and other statistical inferences were developed, and are applicable to both cohort and case-control studies. The first approach is based on a numerical approximation to the likelihood ratio statistic, and the second uses computer simulations based on the score statistic. These approaches were applied to data from a cohort study of workers at the Hanford site (1944-86) exposed occupationally to external radiation; to combined data on workers exposed at Hanford, Oak Ridge National Laboratory, and Rocky Flats Weapons plant; and to artificial data sets created to examine the effects of varying sample size and the magnitude of the risk estimate. For the worker data, sampling uncertainty dominated and accounting for uncertainty in systematic bias did not greatly modify confidence limits. However, with increased sample size, accounting for these uncertainties became more important, and is recommended when there is interest in comparing or combining results from different studies.
Error estimates of triangular finite elements under a weak angle condition
NASA Astrophysics Data System (ADS)
Mao, Shipeng; Shi, Zhongci
2009-08-01
In this note, by analyzing the interpolation operator of Girault and Raviart given in [V. Girault, P.A. Raviart, Finite element methods for Navier-Stokes equations, Theory and algorithms, in: Springer Series in Computational Mathematics, Springer-Verlag, Berlin, 1986] over triangular meshes, we prove optimal interpolation error estimates for Lagrange triangular finite elements of arbitrary order under the maximal angle condition in a unified and simple way. The key estimate is only an application of the Bramble-Hilbert lemma.
NASA Astrophysics Data System (ADS)
McCarthy, Sean C.; Gould, Richard W., Jr.; Richman, James; Kearney, Courtney; Lawson, Adam
2011-11-01
We examine the impact of incorrect atmospheric correction, specifically incorrect aerosol model selection, on retrieval of bio-optical properties from satellite ocean color imagery. Uncertainties in retrievals of bio-optical properties (such as chlorophyll, absorption and backscattering coefficients) from satellite ocean color imagery are related to a variety of factors, including errors associated with sensor calibration, atmospheric correction, and the bio-optical inversion algorithms. In many cases, selection of an inappropriate or erroneous aerosol model during atmospheric correction can dominate the errors in the satellite estimation of the normalized water-leaving radiances (nLw), especially over turbid, coastal waters. These errors affect the downstream bio-optical properties. Here, we focus on only the impact of incorrect aerosol model selection on the nLw radiance estimates, through comparisons between Moderate-Resolution Imaging Spectroradiometer (MODIS) satellite data and in situ measurements from AERONET-OC (Aerosol Robotic NETwork - Ocean Color) sampling platforms.
Burnecki, Krzysztof; Kepten, Eldad; Garini, Yuval; Sikora, Grzegorz; Weron, Aleksander
2015-01-01
Accurately characterizing the anomalous diffusion of a tracer particle has become a central issue in biophysics. However, measurement errors complicate the characterization of single trajectories, which is usually performed through the time-averaged mean square displacement (TAMSD). In this paper, we study a fractionally integrated moving average (FIMA) process as an appropriate model for anomalous diffusion data with measurement errors. We compare FIMA and traditional TAMSD estimators for the anomalous diffusion exponent. The ability of the FIMA framework to characterize dynamics in a wide range of anomalous exponents and noise levels is discussed through the simulation of a toy model (fractional Brownian motion disturbed by Gaussian white noise). Comparison to the TAMSD technique shows that FIMA estimation is superior in many scenarios. This is expected to enable new measurement regimes for single particle tracking (SPT) experiments even in the presence of high measurement errors. PMID:26065707
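The TAMSD estimator the abstract compares against has a standard form: average the squared displacements over all windows of a given lag, then read the anomalous exponent off the log-log slope. A minimal sketch, checked on ordinary Brownian motion (all names are illustrative; this is the baseline technique, not the paper's FIMA estimator):

```python
import numpy as np

def tamsd(x, lag):
    """Time-averaged mean square displacement of a 1-D trajectory."""
    d = x[lag:] - x[:-lag]
    return np.mean(d * d)

def estimate_alpha(x, lags):
    """Anomalous-diffusion exponent from the log-log slope of the TAMSD."""
    msd = [tamsd(x, lag) for lag in lags]
    slope, _ = np.polyfit(np.log(list(lags)), np.log(msd), 1)
    return slope

# Ordinary Brownian motion should give alpha close to 1; additive
# measurement noise on the positions would bias short-lag estimates down,
# which is the problem the FIMA approach addresses.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=100_000))
alpha = estimate_alpha(x, range(1, 50))
```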
Nuclear power plant fault-diagnosis using neural networks with error estimation
Kim, K.; Bartlett, E.B.
1994-12-31
The assurance of the diagnosis obtained from a nuclear power plant (NPP) fault-diagnostic advisor based on artificial neural networks (ANNs) is essential for the practical implementation of the advisor for fault detection and identification. The objectives of this study are to develop an error estimation technique (EET) for diagnosis validation and apply it to the NPP fault-diagnostic advisor. Diagnosis validation is realized by estimating error bounds on the advisor's diagnoses. The 22 transients obtained from the Duane Arnold Energy Center (DAEC) training simulator are used for this research. The results show that the NPP fault-diagnostic advisor is effective at producing proper diagnoses, on which errors are assessed for validation and verification purposes.
Estimation of flood warning runoff thresholds in ungauged basins with asymmetric error functions
NASA Astrophysics Data System (ADS)
Toth, E.
2015-06-01
In many real-world flood forecasting systems, the runoff thresholds for activating warnings or mitigation measures correspond to flow peaks with a given return period (often the 2-year one, which may be associated with the bankfull discharge). At locations where historical streamflow records are absent or very limited, the threshold can be estimated with regionally derived empirical relationships between catchment descriptors and the desired flood quantile. Whatever the functional form, such models are generally parameterised by minimising the mean square error, which assigns equal importance to overprediction and underprediction errors. Considering that the consequences of an overestimated warning threshold (leading to the risk of missed alarms) generally have a much lower level of acceptance than those of an underestimated threshold (leading to the issuance of false alarms), the present work proposes to parameterise the regression model through an asymmetric error function that penalises overpredictions more heavily. The estimates by models (feedforward neural networks) with increasing degrees of asymmetry are compared with those of a traditional, symmetrically trained network in a rigorous cross-validation experiment referred to a database of catchments covering the Italian territory. The analysis shows that the use of the asymmetric error function can substantially reduce the number and extent of overestimation errors compared to the use of traditional square errors. Such a reduction comes, of course, at the expense of increased underestimation errors, but the overall accuracy is still acceptable, and the results illustrate the potential value of choosing an asymmetric error function when the consequences of missed alarms are more severe than those of false alarms.
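The core idea, replacing the symmetric squared error with a loss that weights overpredictions more heavily, can be written in a few lines. This is a generic sketch of an asymmetric quadratic loss, not the paper's exact training objective; the weight `w_over` is an assumed parameter:

```python
import numpy as np

def asymmetric_sq_error(y_true, y_pred, w_over=3.0):
    """Mean squared error that penalises overprediction `w_over` times
    more than underprediction (w_over = 1 recovers the symmetric MSE)."""
    r = y_pred - y_true
    weights = np.where(r > 0, w_over, 1.0)
    return float(np.mean(weights * r * r))

# Overpredicting by 1 costs w_over times as much as underpredicting by 1:
over = asymmetric_sq_error(np.array([0.0]), np.array([1.0]))   # 3.0
under = asymmetric_sq_error(np.array([1.0]), np.array([0.0]))  # 1.0
```

Minimising such a loss during network training biases the fitted thresholds downward, trading a few extra false alarms for fewer missed alarms, as the abstract describes.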
ERIC Educational Resources Information Center
Bond, William Glenn
2012-01-01
In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…
A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series
ERIC Educational Resources Information Center
Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.
2011-01-01
Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…
Error estimation for reconstruction of neuronal spike firing from fast calcium imaging
Liu, Xiuli; Lv, Xiaohua; Quan, Tingwei; Zeng, Shaoqun
2015-01-01
Calcium imaging is becoming an increasingly popular technology to indirectly measure activity patterns in local neuronal networks. Calcium transients reflect neuronal spike patterns, allowing spike trains to be reconstructed from calcium traces. The key to judging the authenticity of a reconstructed spike train is error estimation. However, due to the lack of an appropriate mathematical model to adequately describe the spike-calcium relationship, little attention has been paid to quantifying the error ranges of reconstructed spike results. By turning attention to the data characteristics close to the reconstruction rather than to a complex mathematical model, we provide an error estimation method for neuronal spiking reconstructed from calcium imaging. The real false-negative and false-positive rates of 10 experimental Ca2+ traces fell within the estimated error ranges, confirming that this evaluation method is effective. Estimation performance for the reconstruction of spikes from calcium transients within a neuronal population demonstrated a reasonable evaluation of the reconstructed spikes without requiring real electrical signals. These results suggest that our method may be valuable for quantifying research based on reconstructed neuronal activity, such as affirming communication between different neurons. PMID:25780733
Estimation of chromatic errors from broadband images for high contrast imaging
NASA Astrophysics Data System (ADS)
Sirbu, Dan; Belikov, Ruslan
2015-09-01
The use of an internal coronagraph with an adaptive optical system for wavefront correction for direct imaging of exoplanets is currently being considered for many mission concepts, including as an instrument addition to the WFIRST-AFTA mission, to follow the James Webb Space Telescope. The main technical challenge associated with direct imaging of exoplanets with an internal coronagraph is to effectively control both the diffraction and the scattered light from the star so that the dim planetary companion can be seen. For the deformable mirror (DM) to recover a dark-hole region with sufficiently high contrast in the image plane, wavefront errors are usually estimated using probes on the DM. To date, most broadband lab demonstrations use narrowband filters to estimate the chromaticity of the wavefront error, but this reduces the photon flux per filter and requires a filter system. Here, we propose a method to estimate the chromaticity of wavefront errors using only a broadband image. This is achieved by using special DM probes that have sufficient chromatic diversity. As a case example, we simulate the retrieval of the spectrum of the central wavelength from broadband images for a simple shaped-pupil coronagraph with a conjugate DM and compute the resulting estimation error.
Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method
ERIC Educational Resources Information Center
Liu, Yuming; Schulz, E. Matthew; Yu, Lei
2008-01-01
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
Interval Estimation for True Raw and Scale Scores under the Binomial Error Model
ERIC Educational Resources Information Center
Lee, Won-Chan; Brennan, Robert L.; Kolen, Michael J.
2006-01-01
Assuming errors of measurement are distributed binomially, this article reviews various procedures for constructing an interval for an individual's true number-correct score; presents two general interval estimation procedures for an individual's true scale score (i.e., normal approximation and endpoints conversion methods); compares various…
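Under the binomial error model, the normal-approximation interval mentioned above is straightforward: treat the observed proportion correct as an estimate of the true proportion and attach a binomial standard error. A sketch of that approximation (function name and the clipping to [0, 1] are illustrative assumptions):

```python
import math

def true_score_interval(x_correct, n_items, z=1.96):
    """Normal-approximation interval for an examinee's true
    proportion-correct score under the binomial error model."""
    p = x_correct / n_items
    se = math.sqrt(p * (1 - p) / n_items)
    return max(0.0, p - z * se), min(1.0, p + z * se)

# 30 correct out of 40 items:
lo, hi = true_score_interval(30, 40)   # roughly (0.62, 0.88)
```

The endpoints-conversion method the abstract mentions would then map `lo` and `hi` through the raw-to-scale score table to obtain a scale-score interval.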
Mapping the Origins of Time: Scalar Errors in Infant Time Estimation
ERIC Educational Resources Information Center
Addyman, Caspar; Rocha, Sinead; Mareschal, Denis
2014-01-01
Time is central to any understanding of the world. In adults, estimation errors grow linearly with the length of the interval, much faster than would be expected of a clock-like mechanism. Here we present the first direct demonstration that this is also true in human infants. Using an eye-tracking paradigm, we examined 4-, 6-, 10-, and…
Jones, Reese E; Mandadapu, Kranthi K
2012-04-21
We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)] and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently. PMID:22519310
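The Green-Kubo estimate at the heart of this methodology is the time integral of a flux autocorrelation function. A toy sketch with a synthetic AR(1) "flux" standing in for a real heat-flux series (material prefactors such as V/(k_B T^2) are omitted, and all parameter values are illustrative):

```python
import numpy as np

def acf(j, max_lag):
    """One-sided autocorrelation function of a stationary flux series."""
    j = j - j.mean()
    n = len(j)
    return np.array([np.mean(j[: n - k] * j[k:]) for k in range(max_lag)])

def green_kubo(j, dt, max_lag):
    """Green-Kubo estimate: trapezoid-rule integral of the flux
    autocorrelation; material prefactors are omitted in this sketch."""
    c = acf(j, max_lag)
    return dt * (c.sum() - 0.5 * (c[0] + c[-1]))

# Synthetic flux: an AR(1) process with a known correlation time.
rng = np.random.default_rng(1)
phi, n = 0.9, 200_000
j = np.empty(n)
j[0] = 0.0
for t in range(1, n):
    j[t] = phi * j[t - 1] + rng.normal()
# For this process the exact integral is var * (1/(1-phi) - 0.5) ~ 50,
# so the estimate below should land near that value.
gk = green_kubo(j, dt=1.0, max_lag=200)
```

The error bounds the paper derives address exactly the statistical noise visible in `gk`: how the variance of the integrated correlation depends on the cutoff `max_lag` and the record length.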
NASA Astrophysics Data System (ADS)
Li, Guofa; Huang, Wei; Zheng, Hao; Zhang, Baoqing
2016-02-01
The spectral ratio method (SRM) is widely used to estimate the quality factor Q via linear regression of seismic attenuation under the assumption of a constant Q. However, an estimation error is introduced when this assumption is violated. For a frequency-dependent Q described by a power-law function, we derived the analytical expression of the estimation error as a function of the power-law exponent γ and the ratio σ of the bandwidth to the central frequency. Based on the theoretical analysis, we found that the estimation errors are mainly dominated by the exponent γ and less affected by the ratio σ. This implies that the accuracy of the Q estimate can hardly be improved by adjusting the width and range of the frequency band. Hence, we propose a two-parameter regression method to estimate the frequency-dependent Q from nonlinear seismic attenuation. The proposed method was tested using direct waves acquired by a near-surface cross-hole survey, and its reliability was evaluated in comparison with the result of the SRM.
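The SRM regression described above fits a straight line to the log spectral ratio, ln(A2/A1) = -π f Δt / Q + const, and reads Q off the slope. A minimal synthetic check with a flat source spectrum and constant Q (all numbers are illustrative; the paper's contribution is the two-parameter extension for frequency-dependent Q, not shown here):

```python
import numpy as np

def spectral_ratio_q(freqs, amp1, amp2, travel_dt):
    """Constant-Q estimate from the slope of the log spectral ratio:
    ln(A2/A1) = -pi * f * dt / Q + const."""
    slope, _ = np.polyfit(freqs, np.log(amp2 / amp1), 1)
    return -np.pi * travel_dt / slope

# Synthetic check: flat source spectrum attenuated with Q_true = 50.
f = np.linspace(10.0, 60.0, 101)
q_true, dt = 50.0, 0.2
a1 = np.ones_like(f)
a2 = a1 * np.exp(-np.pi * f * dt / q_true)
print(round(spectral_ratio_q(f, a1, a2, dt), 1))  # prints 50.0
```

If Q instead follows a power law in frequency, the log ratio is no longer linear in f, and this single-slope fit produces exactly the γ-dependent bias the abstract analyzes.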
Estimating the annotation error rate of curated GO database sequence annotations
Jones, Craig E; Brown, Alfred L; Baumann, Ute
2007-01-01
Background Annotations that describe the function of sequences are enormously important to researchers during laboratory investigations and when making computational inferences. However, there has been little investigation into the data quality of sequence function annotations. Here we have developed a new method of estimating the error rate of curated sequence annotations, and applied this to the Gene Ontology (GO) sequence database (GOSeqLite). This method involved artificially adding errors to sequence annotations at known rates, and used regression to model the impact on the precision of annotations based on BLAST matched sequences. Results We estimated the error rate of curated GO sequence annotations in the GOSeqLite database (March 2006) at between 28% and 30%. Annotations made without use of sequence similarity based methods (non-ISS) had an estimated error rate of between 13% and 18%. Annotations made with the use of sequence similarity methodology (ISS) had an estimated error rate of 49%. Conclusion While the overall error rate is reasonably low, it would be prudent to treat all ISS annotations with caution. Electronic annotators that use ISS annotations as the basis of predictions are likely to have higher false prediction rates, and for this reason designers of these systems should consider avoiding ISS annotations where possible. Electronic annotators that use ISS annotations to make predictions should be viewed sceptically. We recommend that curators thoroughly review ISS annotations before accepting them as valid. Overall, users of curated sequence annotations from the GO database should feel assured that they are using a comparatively high quality source of information. PMID:17519041
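The error-injection idea, corrupting annotations at known rates and regressing an observed quality metric on the injected rate, can be illustrated with a toy label set. Here agreement with the clean labels plays the role of annotation precision; everything is a simplified assumption, not the paper's BLAST-based procedure:

```python
import numpy as np

def inject_errors(labels, rate, n_classes, rng):
    """Flip a fraction `rate` of labels to a different random class."""
    out = labels.copy()
    flip = rng.random(len(out)) < rate
    out[flip] = (out[flip] + rng.integers(1, n_classes, size=flip.sum())) % n_classes
    return out

rng = np.random.default_rng(2)
true_labels = rng.integers(0, 10, size=5_000)

# Agreement with the clean labels falls linearly as the injected error
# rate rises; regressing the quality metric on the known injected rate
# is the spirit of the calibration used to back out the native error rate.
rates = np.linspace(0.0, 0.3, 7)
agreement = [np.mean(inject_errors(true_labels, r, 10, rng) == true_labels)
             for r in rates]
slope, intercept = np.polyfit(rates, agreement, 1)
```

In this toy setting the fitted line is agreement ≈ 1 - rate; with real annotations the intercept is below 1, and its shortfall reflects the pre-existing error rate being estimated.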
Audit of the global carbon budget: estimate errors and their impact on uptake uncertainty
NASA Astrophysics Data System (ADS)
Ballantyne, A. P.; Andres, R.; Houghton, R.; Stocker, B. D.; Wanninkhof, R.; Anderegg, W.; Cooper, L. A.; DeGrandpre, M.; Tans, P. P.; Miller, J. B.; Alden, C.; White, J. W. C.
2015-04-01
Over the last 5 decades monitoring systems have been developed to detect changes in the accumulation of carbon (C) in the atmosphere and ocean; however, our ability to detect changes in the behavior of the global C cycle is still hindered by measurement and estimate errors. Here we present a rigorous and flexible framework for assessing the temporal and spatial components of estimate errors and their impact on uncertainty in net C uptake by the biosphere. We present a novel approach for incorporating temporally correlated random error into the error structure of emission estimates. Based on this approach, we conclude that the 2σ uncertainties of the atmospheric growth rate have decreased from 1.2 Pg C yr-1 in the 1960s to 0.3 Pg C yr-1 in the 2000s due to an expansion of the atmospheric observation network. The 2σ uncertainties in fossil fuel emissions have increased from 0.3 Pg C yr-1 in the 1960s to almost 1.0 Pg C yr-1 during the 2000s due to differences in national reporting errors and differences in energy inventories. Lastly, while land use emissions have remained fairly constant, their errors still remain high and thus their global C uptake uncertainty is not trivial. Currently, the absolute errors in fossil fuel emissions rival the total emissions from land use, highlighting the extent to which fossil fuels dominate the global C budget. Because errors in the atmospheric growth rate have decreased faster than errors in total emissions have increased, a ~20% reduction in the overall uncertainty of net C global uptake has occurred. Given all the major sources of error in the global C budget that we could identify, we are 93% confident that terrestrial C uptake has increased and 97% confident that ocean C uptake has increased over the last 5 decades. 
Thus, it is clear that arguably one of the most vital ecosystem services currently provided by the biosphere is the continued removal of approximately half of atmospheric CO2 emissions from the atmosphere, although there are certain environmental costs associated with this service, such as the acidification of ocean waters.
NASA Technical Reports Server (NTRS)
Chhikara, R. S.; Feiveson, A. H. (principal investigator)
1979-01-01
Aggregation formulas are given for production estimation of a crop type for a zone, a region, and a country, and methods for estimating yield prediction errors for the three areas are described. A procedure is included for obtaining a combined yield prediction and its mean-squared error estimate for a mixed wheat pseudozone.
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2013-12-11
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and an internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates, efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥ 200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. PMID:24334284
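Classical regression calibration, the starting point this abstract extends, has a compact two-stage form: fit the calibration model E[X|W] in the sub-study, then substitute the calibrated covariate into the outcome regression. A sketch for a single covariate with classical additive error (all parameter values are illustrative, and the paper's interaction and non-classical-ME extensions are not shown):

```python
import numpy as np

def regression_calibration(w_main, w_sub, x_sub, y_main):
    """Two-stage classical regression calibration: fit E[X|W] in the
    sub-study, then regress the outcome on the calibrated covariate."""
    b1, b0 = np.polyfit(w_sub, x_sub, 1)   # calibration model X ~ W
    x_hat = b0 + b1 * w_main               # substitute E[X|W]
    beta1, _ = np.polyfit(x_hat, y_main, 1)
    return beta1

# Toy data with classical additive error, W = X + U (reliability 0.5).
rng = np.random.default_rng(3)
x = rng.normal(size=20_000)
w = x + rng.normal(size=x.size)
y = 2.0 * x + rng.normal(scale=0.5, size=x.size)

naive, _ = np.polyfit(w, y, 1)                           # attenuated
rc = regression_calibration(w, w[:2_000], x[:2_000], y)  # calibrated
```

With reliability 0.5 the naive slope is attenuated to about half the true effect of 2.0, while the calibrated fit recovers it; the paper's efficient versions additionally pool the sub-study and main-study information.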
Error Analysis of Estimated Means and Horizontal Gradients of Scalar Variables
NASA Astrophysics Data System (ADS)
Nakamura, R.; Mahrt, L.
2004-12-01
While random sampling errors (RSE) for eddy-correlation fluxes are discussed in the literature, little attention has been paid to RSE for the mean or gradient of scalar variables. Accurate estimates of the mean and the gradient of certain scalar variables are important in evaluating and balancing budgets for these variables. In the present work, we evaluate the RSE for the estimated mean and horizontal gradient of air temperature under various atmospheric stabilities, using data from three field programs. Although air temperature is chosen as an economical scalar variable, our approach is applicable for error analysis of estimated advection and local budgets of CO2. Similarities are expected between the results of the error analysis for air temperature and those for CO2. For all atmospheric stabilities, significant energy occurs at mesoscale frequencies in the spectra of air temperature, which corresponds to significant non-stationarity of air temperature. On the other hand, little energy is present at mesoscale frequencies in the spectra of the horizontal gradient of air temperature, except for highly stable conditions. Low-frequency mesoscale fluctuations result in nonstationary records of horizontal gradient of air temperature, leading to large RSE in the estimated gradient. The nonstationarity effect is found to increase with increasing separation distance between the two air temperature measurements. The evaluated RSE of the horizontal gradient of air temperature is compared to the instrumentation-related uncertainties and the magnitude of the estimated horizontal gradients. An optimum separation distance between two points for air temperature measurements is discussed.
Sensitivity of Satellite Rainfall Estimates Using a Multidimensional Error Stochastic Model
NASA Astrophysics Data System (ADS)
Falck, A. S.; Vila, D. A.; Tomasella, J.
2011-12-01
Error propagation models of satellite precipitation fields are a key element in the response and performance of hydrological models, which depend on the reliability and availability of rainfall data. However, most of these models treat the error as a unidimensional quantity, with no consideration of the type of process involved. The limitations of unidimensional error propagation models have been overcome by multidimensional stochastic error models. In this study, SREM2D (A Two-Dimensional Satellite Rainfall Error Model) was used to simulate satellite precipitation fields by inverse calibration of its parameters against a reference dataset, in this case gauge rainfall data. The sensitivity of satellite rainfall estimates from different satellite-based algorithms was investigated for hydrologic simulation over the Tocantins basin, a transition area between the Amazon basin and the relatively drier northeast region, using the SREM2D error propagation model. Preliminary results show that SREM2D has the potential to generate realistic ensembles of satellite rain fields to feed hydrologic models. Ongoing research is focused on the impact of rainfall ensembles simulated by SREM2D on hydrologic modeling using the Model of Large Basins of the National Institute for Space Research (MGB-INPE) developed for Brazilian basins.
Estimated Cost Savings from Reducing Errors in the Preparation of Sterile Doses of Medications
Schneider, Philip J.
2014-01-01
Background: Preventing intravenous (IV) preparation errors will improve patient safety and reduce costs by an unknown amount. Objective: To estimate the financial benefit of robotic preparation of sterile medication doses compared to traditional manual preparation techniques. Methods: A probability pathway model based on published rates of errors in the preparation of sterile doses of medications was developed. Literature reports of adverse events were used to project the array of medical outcomes that might result from these errors. These parameters were used as inputs to a customized simulation model that generated a distribution of possible outcomes, their probability, and associated costs. Results: By varying the important parameters across ranges found in published studies, the simulation model produced a range of outcomes for all likely possibilities, thus providing a reliable projection of the errors avoided and the cost savings of an automated sterile preparation technology. The average of 1,000 simulations resulted in the prevention of 5,420 medication errors and associated savings of $288,350 per year. The simulation results can be narrowed to specific scenarios by fixing model parameters that are known and allowing the unknown parameters to range across values found in previously published studies. Conclusions: The use of a robotic device can reduce health care costs by preventing errors that can cause adverse drug events. PMID:25477598
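The probability-pathway idea above can be sketched as a small Monte Carlo loop. All rates, probabilities, and costs below are made-up placeholders, not the published values used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs: for each simulation, draw an error rate, a
# probability of harm given an error, and a cost per adverse event
# from plausible literature-style ranges (placeholders only).
n_sims = 1_000
doses_per_year = 100_000
error_rate = rng.uniform(0.01, 0.09, n_sims)          # errors per dose
p_harm_given_error = rng.uniform(0.005, 0.02, n_sims)  # events per error
cost_per_event = rng.uniform(3_000, 9_000, n_sims)     # USD per event

# Propagate each parameter draw through the pathway.
errors_prevented = doses_per_year * error_rate
events_prevented = errors_prevented * p_harm_given_error
savings = events_prevented * cost_per_event

print(f"mean errors prevented/yr: {errors_prevented.mean():,.0f}")
print(f"mean savings/yr: ${savings.mean():,.0f}")
```

Fixing any of the three parameters to a known site-specific value, as the abstract describes, simply replaces the corresponding random draw with a constant.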
Kunin, Victor; Engelbrektson, Anna; Ochman, Howard; Hugenholtz, Philip
2009-08-01
Massively parallel pyrosequencing of the small subunit (16S) ribosomal RNA gene has revealed that the extent of rare microbial populations in several environments, the 'rare biosphere', is orders of magnitude higher than previously thought. One important caveat with this method is that sequencing error could artificially inflate diversity estimates. Although the per-base error of 16S rDNA amplicon pyrosequencing has been shown to be as good as or lower than Sanger sequencing, no direct assessments of pyrosequencing errors on diversity estimates have been reported. Using only Escherichia coli MG1655 as a reference template, we find that 16S rDNA diversity is grossly overestimated unless relatively stringent read quality filtering and low clustering thresholds are applied. In particular, the common practice of removing reads with unresolved bases and anomalous read lengths is insufficient to ensure accurate estimates of microbial diversity. Furthermore, common and reproducible homopolymer length errors can result in relatively abundant spurious phylotypes further confounding data interpretation. We suggest that stringent quality-based trimming of 16S pyrotags and clustering thresholds no greater than 97% identity should be used to avoid overestimates of the rare biosphere.
Estimates of Mode-S EHS aircraft derived wind observation errors using triple collocation
NASA Astrophysics Data System (ADS)
de Haan, S.
2015-12-01
Information on the accuracy of a meteorological observation is essential to assess its applicability. In general, accuracy information is difficult to obtain in operational situations, since the truth is unknown. One method to determine this accuracy is comparison with the model equivalent of the observation. The advantage of this method is that all measured parameters can be evaluated, from two-metre temperature observations to satellite radiances. The drawback is that these comparisons also contain the (unknown) model error. By applying the so-called triple collocation method (Stoffelen, 1998) to two independent observations at the same location in space and time, combined with model output, and assuming uncorrelated errors, the three error variances can be estimated. This method is applied in this study to estimate wind observation errors from aircraft, obtained using Mode-S EHS (de Haan, 2011). Radial wind measurements from Doppler weather radar and wind vector measurements from sodar, together with equivalents from a non-hydrostatic numerical weather prediction model, are used to assess the accuracy of the Mode-S EHS wind observations. The Mode-S EHS wind observation error is estimated to be less than 1.4 ± 0.1 m s-1 near the surface and around 1.1 ± 0.3 m s-1 at 500 hPa.
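The triple collocation estimator itself is compact: for three collocated, unbiased measurements with mutually uncorrelated errors, E[(a−b)(a−c)] equals the error variance of a, because the cross terms vanish. A sketch with synthetic numbers (the error magnitudes below are illustrative, loosely echoing the abstract, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic collocated wind triplet (illustrative values only)
truth = rng.normal(8.0, 3.0, 100_000)
aircraft = truth + rng.normal(0, 1.4, truth.size)  # Mode-S EHS-like
radar    = truth + rng.normal(0, 2.0, truth.size)
model    = truth + rng.normal(0, 1.0, truth.size)

def triple_collocation_sigmas(a, b, c):
    """Error standard deviations of three collocated, unbiased
    measurements with mutually uncorrelated errors."""
    va = np.mean((a - b) * (a - c))
    vb = np.mean((b - a) * (b - c))
    vc = np.mean((c - a) * (c - b))
    return np.sqrt([va, vb, vc])

sig = triple_collocation_sigmas(aircraft, radar, model)
# sig recovers approximately (1.4, 2.0, 1.0) without ever seeing `truth`.
```

Note that no member of the triplet needs to be treated as the reference; the method estimates all three error variances simultaneously.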
Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems.
Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang
2015-01-01
The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) conditions of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach combining entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals, and can mitigate the ranging error without recognition of the channel conditions. The entropy is used to measure the randomness of the received signals, and the FP is determined as the sample that is followed by a large decrease in entropy. SVM regression is employed to mitigate the ranging error by modeling the relationship between the characteristics of the received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
A Lag-1 Smoother Approach to System Error Estimation: Sequential Method
NASA Technical Reports Server (NTRS)
Todling, Ricardo
2014-01-01
Starting from sequential data assimilation arguments, the present work shows how to use residual statistics from filtering and lag-1 (6-hour) smoothing to infer components of the system (model) error covariance matrix that project onto a dense observing network. The residual relationships involving the system error covariance matrix are similar to those available for deriving background, observation, and analysis error covariance information from filter residual statistics. An illustration of the approach is given for two low-dimensional dynamical systems: a linear damped harmonic oscillator and the nonlinear Lorenz (1995) model. The application examples consider the important case of evaluating the ability to estimate the model error covariance from residual time series obtained from suboptimal filters and smoothers that assume the model to be perfect. The examples show that the residuals contain the information necessary for such estimation. The examples also illustrate the consequences of estimating covariances from time series of residuals (available in practice) instead of from multiple realizations obtained by Monte Carlo sampling. A recast of the sequential approach in variational language appears in a companion article.
Estimating Pole/Zero Errors in GSN-IU Network Calibration Metadata
NASA Astrophysics Data System (ADS)
Ringler, A. T.; Hutt, C. R.; Bolton, H. F.; Storm, T.; Gee, L. S.
2010-12-01
Converting the voltage output of a seismometer into ground motion requires correction of the data using a description of the instrument's response. For the Global Seismographic Network (GSN), as well as many other networks, this instrument response is represented as a Laplace pole/zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. (Many GSN stations are operated by IRIS and USGS with network code "IU".) This Laplace representation assumes that the seismometer behaves as a perfectly linear system, with temporal changes described adequately through multiple epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We developed an iterative three-step method to estimate instrument response model parameters (poles, zeros, and sensitivity and normalization parameters) and their associated errors using random calibration signals. First, we solve a coarse non-linear inverse problem using a least squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records. Second, we solve a non-linear parameter estimation problem by an iterative method to obtain the least squares best-fit Laplace pole/zero model. Third, by applying the central limit theorem we estimate the errors in this pole/zero model by solving the inverse problem at each frequency in a two-thirds-octave band centered at each best-fit pole/zero frequency. This procedure yields error estimates at the >99% confidence interval. We demonstrate this method by applying it to a number of recent IU network calibration records.
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
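Step one of the three-step method, the coarse least-squares grid search, can be sketched for the simplest possible case: fitting a single pole to a noisy frequency response. Real GSN responses have many poles and zeros plus gain and normalization terms; everything below (the one-pole model, the noise level, the grid) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(7)

# Frequency axis and a "true" one-pole instrument response (illustrative)
f = np.logspace(-2, 1, 300)                   # Hz
w = 2j * np.pi * f                            # Laplace variable s = jw
true_pole = -2 * np.pi * 0.7                  # rad/s (corner at 0.7 Hz)
H_obs = 1.0 / (w - true_pole)
H_obs = H_obs * (1 + 0.01 * rng.normal(size=f.size))  # calibration noise

# Coarse least-squares grid search over candidate pole frequencies;
# the argmin gives the first approximation handed to the iterative step.
candidates = -2 * np.pi * np.linspace(0.1, 2.0, 400)
misfit = [np.sum(np.abs(H_obs - 1.0 / (w - p)) ** 2) for p in candidates]
best = candidates[int(np.argmin(misfit))]
best_hz = -best / (2 * np.pi)
```

Because the misfit surface is evaluated everywhere on the grid rather than descended from a single starting point, a noisy local minimum cannot trap the search, which is exactly the robustness argument made in the abstract.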
A variational method for finite element stress recovery and error estimation
NASA Technical Reports Server (NTRS)
Tessler, A.; Riggs, H. R.; Macy, S. C.
1993-01-01
A variational method for obtaining smoothed stresses from a finite element derived nonsmooth stress field is presented. The method is based on minimizing a functional involving the discrete least-squares error plus a penalty constraint that ensures smoothness of the stress field. An equivalent accuracy criterion is developed for the smoothing analysis which results in a C¹-continuous smoothed stress field possessing the same order of accuracy as that found at the superconvergent optimal stress points of the original finite element analysis. Application of the smoothing analysis to residual error estimation is also demonstrated.
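The least-squares-plus-penalty functional has a simple 1-D analogue that shows the mechanics: minimize ||s − d||² + λ||D₂s||², where D₂ is a second-difference operator standing in for the smoothness constraint. The sampled "raw stress" field, the penalty weight λ, and the noise level below are all illustrative choices, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(5)

# Noisy "raw stresses" at n points (1-D stand-in for the element-level
# nonsmooth stress field)
n = 200
x = np.linspace(0.0, 1.0, n)
raw = np.sin(2 * np.pi * x) + 0.2 * rng.normal(size=n)

# Minimize ||s - raw||^2 + lam * ||D2 s||^2; the minimizer solves the
# linear system (I + lam * D2^T D2) s = raw.
D2 = np.diff(np.eye(n), n=2, axis=0)   # second-difference operator
lam = 50.0
smooth = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, raw)

roughness = lambda s: np.sum(np.diff(s, 2) ** 2)
# The smoothed field is far less rough yet still tracks the signal.
```

Larger λ enforces a smoother recovered field at the cost of fidelity to the raw values, which is the trade-off the paper's equivalent accuracy criterion is designed to calibrate.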
Doherty, Carole; Stavropoulou, Charitini
2012-07-01
This systematic review identifies the factors that both support and deter patients from being willing and able to participate actively in reducing clinical errors. Specifically, we add to our understanding of the safety culture in healthcare by engaging with the call for more focus on the relational and subjective factors which enable patients' participation (Iedema, Jorm, & Lum, 2009; Ovretveit, 2009). A systematic search of six databases, ten journals and seven healthcare organisations' web sites resulted in the identification of 2714 studies of which 68 were included in the review. These studies investigated initiatives involving patients in safety or studies of patients' perspectives of being actively involved in the safety of their care. The factors explored varied considerably depending on the scope, setting and context of the study. Using thematic analysis we synthesized the data to build an explanation of why, when and how patients are likely to engage actively in helping to reduce clinical errors. The findings show that the main factors for engaging patients in their own safety can be summarised in four categories: illness; individual cognitive characteristics; the clinician-patient relationship; and organisational factors. We conclude that illness and patients' perceptions of their role and status as subordinate to that of clinicians are the most important barriers to their involvement in error reduction. In sum, patients' fear of being labelled "difficult" and a consequent desire for clinicians' approbation may cause them to assume a passive role as a means of actively protecting their personal safety. PMID:22541799
Analysis of systematic errors in lateral shearing interferometry for EUV optical testing
Miyakawa, Ryan; Naulleau, Patrick; Goldberg, Kenneth A.
2009-02-24
Lateral shearing interferometry (LSI) provides a simple means for characterizing the aberrations in optical systems at EUV wavelengths. In LSI, the test wavefront is incident on a low-frequency grating which causes the resulting diffracted orders to interfere on the CCD. Due to its simple experimental setup and high photon efficiency, LSI is an attractive alternative to point diffraction interferometry and other methods that require spatially filtering the wavefront through small pinholes which notoriously suffer from low contrast fringes and improper alignment. In order to demonstrate that LSI can be accurate and robust enough to meet industry standards, analytic models are presented to study the effects of unwanted grating and detector tilt on the system aberrations, and a method for identifying and correcting for these errors in alignment is proposed. The models are subsequently verified by numerical simulation. Finally, an analysis is performed of how errors in the identification and correction of grating and detector misalignment propagate to errors in fringe analysis.
Systematic study of error sources in supersonic skin-friction balance measurements
NASA Technical Reports Server (NTRS)
Allen, J. M.
1976-01-01
An experimental study was performed to investigate potential error sources in data obtained with a self-nulling, moment-measuring, skin-friction balance. The balance was installed in the sidewall of a supersonic wind tunnel, and independent measurements of the three forces contributing to the balance output (skin friction, lip force, and off-center normal force) were made for a range of gap size and element protrusion. The relatively good agreement between the balance data and the sum of these three independently measured forces validated the three-term model used. No advantage to a small gap size was found; in fact, the larger gaps were preferable. Perfect element alignment with the surrounding test surface resulted in very small balance errors. However, if small protrusion errors are unavoidable, no advantage was found in having the element slightly below the surrounding test surface rather than above it.
Estimates of ocean forecast error covariance derived from Hessian Singular Vectors
NASA Astrophysics Data System (ADS)
Smith, Kevin D.; Moore, Andrew M.; Arango, Hernan G.
2015-05-01
Experience in numerical weather prediction suggests that singular value decomposition (SVD) of a forecast can yield useful a priori information about the growth of forecast errors. It has been shown formally that SVD using the inverse of the expected analysis error covariance matrix to define the norm at initial time yields the Empirical Orthogonal Functions (EOFs) of the forecast error covariance matrix at the final time. Because of their connection to the 2nd derivative of the cost function in 4-dimensional variational (4D-Var) data assimilation, the initial time singular vectors defined in this way are often referred to as the Hessian Singular Vectors (HSVs). In the present study, estimates of ocean forecast errors and forecast error covariance were computed using SVD applied to a baroclinically unstable temperature front in a re-entrant channel using the Regional Ocean Modeling System (ROMS). An identical twin approach was used in which a truth run of the model was sampled to generate synthetic hydrographic observations that were then assimilated into the same model started from an incorrect initial condition using 4D-Var. The 4D-Var system was run sequentially, and forecasts were initialized from each ocean analysis. SVD was performed on the resulting forecasts to compute the HSVs and corresponding EOFs of the expected forecast error covariance matrix. In this study, a reduced rank approximation of the inverse expected analysis error covariance matrix was used to compute the HSVs and EOFs based on the Lanczos vectors computed during the 4D-Var minimization of the cost function. This has the advantage that the entire spectrum of HSVs and EOFs in the reduced space can be computed. The associated singular value spectrum is found to yield consistent and reliable estimates of forecast error variance in the space spanned by the EOFs. 
In addition, at long forecast lead times the resulting HSVs and companion EOFs are able to capture many features of the actual realized forecast error at the largest scales. Forecast error growth via the HSVs was found to be significantly influenced by the non-normal character of the underlying forecast circulation, and is accompanied by a forward energy cascade, suggesting that forecast errors could be effectively controlled by reducing the error at the largest scales in the forecast initial conditions. A predictive relation for the amplitude of the basin integrated forecast error in terms of the mean aspect ratio of the forecast error hyperellipse (quantified in terms of the mean eccentricity) was also identified which could prove useful for predicting the level of forecast error a priori. All of these findings were found to be insensitive to the configuration of the 4D-Var data assimilation system and the resolution of the observing network.
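The core computation described above, extracting EOFs of the forecast error covariance and a variance spectrum from an ensemble of errors via SVD, can be sketched on a toy 1-D grid. The imposed sine patterns, amplitudes, and noise level are illustrative assumptions, not output from the ROMS experiments:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ensemble of forecast errors: three large-scale patterns with
# decreasing amplitude plus small-scale noise (illustrative only)
n_grid, n_mem = 200, 50
xgrid = np.linspace(0.0, 1.0, n_grid)
modes = np.array([np.sin(np.pi * (k + 1) * xgrid) for k in range(3)])
amps = rng.normal(0.0, [3.0, 1.5, 0.5], size=(n_mem, 3))
errors = amps @ modes + 0.1 * rng.normal(size=(n_mem, n_grid))

# EOFs of the sample forecast error covariance via SVD of the
# mean-removed ensemble; squared singular values give the variance
# explained by each EOF.
anom = errors - errors.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
var_frac = s**2 / np.sum(s**2)
# A steep spectrum: the three imposed patterns dominate the variance.
```

A steep, rapidly decaying spectrum like this one is what justifies the reduced-rank (Lanczos-based) approximation used in the study: a few leading vectors carry almost all of the expected forecast error variance.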
NASA Astrophysics Data System (ADS)
Lins, R. M.; Ferreira, M. D. C.; Proença, S. P. B.; Duarte, C. A.
2015-10-01
In this study, a recovery-based a-posteriori error estimator originally proposed for the Corrected XFEM is investigated in the framework of the stable generalized FEM (SGFEM). Both Heaviside and branch functions are adopted to enrich the approximations in the SGFEM. Some necessary adjustments to adapt the expressions defining the enhanced stresses in the original error estimator are discussed in the SGFEM framework. Relevant aspects such as effectivity indexes, error distribution, convergence rates and accuracy of the recovered stresses are used in order to highlight the main findings and the effectiveness of the error estimator. Two benchmark problems of the 2-D fracture mechanics are selected to assess the robustness of the error estimator hereby investigated. The main findings of this investigation are: the SGFEM shows higher accuracy than G/XFEM and a reduced sensitivity to blending element issues. The error estimator can accurately capture these features of both methods.
Mass load estimation errors utilizing grab sampling strategies in a karst watershed
Fogle, A.W.; Taraba, J.L.; Dinger, J.S.
2003-01-01
Developing a mass load estimation method appropriate for a given stream and constituent is difficult due to inconsistencies in hydrologic and constituent characteristics. The difficulty may be increased in flashy flow conditions such as karst. Many projects undertaken are constrained by budget and manpower and do not have the luxury of sophisticated sampling strategies. The objectives of this study were to: (1) examine two grab sampling strategies with varying sampling intervals and determine the error in mass load estimates, and (2) determine the error that can be expected when a grab sample is collected at a time of day when the diurnal variation is most divergent from the daily mean. Results show grab sampling with continuous flow to be a viable data collection method for estimating mass load in the study watershed. Comparing weekly, biweekly, and monthly grab sampling, monthly sampling produces the best results with this method. However, the time of day the sample is collected is important. Failure to account for diurnal variability when collecting a grab sample may produce unacceptable error in mass load estimates. The best time to collect a sample is when the diurnal cycle is nearest the daily mean.
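The effect of diurnal timing on a grab-sampling load estimate is easy to demonstrate with synthetic data. The hourly flow and concentration series, the diurnal amplitude, and the monthly sampling scheme below are illustrative assumptions, not the karst-watershed record:

```python
import numpy as np

rng = np.random.default_rng(9)

# One year of hourly synthetic data: flashy flow Q (m^3/s) and a
# concentration C (mg/L) with a diurnal cycle peaking at hour 6.
t = np.arange(365 * 24)                       # hours
Q = 1.0 + 0.5 * rng.lognormal(0.0, 0.8, t.size)
C = 5.0 + 1.5 * np.sin(2 * np.pi * t / 24)

dt = 3600.0                                   # seconds per sample
# mg/L * m^3/s = g/s, so summing over seconds gives grams; /1e3 -> kg
true_load = np.sum(C * Q) * dt / 1e3

def grab_estimate(hour_of_day, interval_days=30):
    """Monthly grab sample at a fixed clock hour, extrapolated with the
    continuous flow record (grab concentration x total discharge)."""
    idx = np.arange(hour_of_day, t.size, interval_days * 24)
    return C[idx].mean() * np.sum(Q) * dt / 1e3

# Sampling at the diurnal peak badly overestimates the load; sampling
# when C is near its daily mean does not.
err_peak = abs(grab_estimate(6) - true_load) / true_load
err_mean = abs(grab_estimate(0) - true_load) / true_load
```

This mirrors the study's conclusion: with continuous flow available, monthly grabs can perform acceptably, but only if the collection time is chosen with the diurnal cycle in mind.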
DTI quality control assessment via error estimation from Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Farzinfar, Mahshid; Li, Yin; Verde, Audrey R.; Oguz, Ipek; Gerig, Guido; Styner, Martin A.
2013-03-01
Diffusion Tensor Imaging (DTI) is currently the state of the art method for characterizing the microscopic tissue structure of white matter in normal or diseased brain in vivo. DTI is estimated from a series of Diffusion Weighted Imaging (DWI) volumes. DWIs suffer from a number of artifacts which mandate stringent Quality Control (QC) schemes to eliminate lower quality images for optimal tensor estimation. Conventionally, QC procedures exclude artifact-affected DWIs from subsequent computations leading to a cleaned, reduced set of DWIs, called DWI-QC. Often, a rejection threshold is heuristically/empirically chosen above which the entire DWI-QC data is rendered unacceptable and thus no DTI is computed. In this work, we have devised a more sophisticated, Monte-Carlo (MC) simulation based method for the assessment of resulting tensor properties. This allows for a consistent, error-based threshold definition in order to reject/accept the DWI-QC data. Specifically, we propose the estimation of two error metrics related to directional distribution bias of Fractional Anisotropy (FA) and the Principal Direction (PD). The bias is modeled from the DWI-QC gradient information and a Rician noise model incorporating the loss of signal due to the DWI exclusions. Our simulations further show that the estimated bias can be substantially different with respect to magnitude and directional distribution depending on the degree of spatial clustering of the excluded DWIs. Thus, determination of diffusion properties with minimal error requires an evenly distributed sampling of the gradient directions before and after QC.
Estimation of chromatic errors from broadband images for high contrast imaging: sensitivity analysis
NASA Astrophysics Data System (ADS)
Sirbu, Dan; Belikov, Ruslan
2016-01-01
Many concepts have been proposed to enable direct imaging of planets around nearby stars, which would enable spectroscopic observations of their atmospheres and the potential discovery of biomarkers. The main technical challenge associated with direct imaging of exoplanets is to effectively control both the diffracted and scattered light from the star so that the dim planetary companion can be seen. Use of an internal coronagraph with an adaptive optical system for wavefront correction is one of the most mature methods and is being developed as an instrument addition to the WFIRST-AFTA space mission. In addition, instruments such as GPI and SPHERE are already being used on the ground and are yielding spectra of giant planets. For the deformable mirror (DM) to recover a dark hole region with sufficiently high contrast in the image plane, mid-spatial frequency wavefront errors must be estimated. To date, most broadband lab demonstrations use narrowband filters to obtain an estimate of the chromaticity of the wavefront error, and this can consume a large percentage of the total integration time. Previously, we have proposed a method to estimate the chromaticity of wavefront errors using only broadband images; we have demonstrated that under idealized conditions wavefront errors can be estimated from images composed of discrete wavelengths. This is achieved by using DM probes with sufficient spatially-localized chromatic diversity. Here we report on the results of a study of the performance of this method with respect to realistic broadband images including noise. Additionally, we study optimal probe patterns that enable reduction of the number of probes used, and compare the integration time with narrowband and IFS estimation methods.
Real-Time Baseline Error Estimation and Correction for GNSS/Strong Motion Seismometer Integration
NASA Astrophysics Data System (ADS)
Li, C. Y. N.; Groves, P. D.; Ziebart, M. K.
2014-12-01
Accurate and rapid estimation of permanent surface displacement is required immediately after a slip event for earthquake monitoring or tsunami early warning. It is difficult to achieve the necessary accuracy and precision at high and low frequencies using GNSS or seismometry alone. GNSS and seismic sensors can be integrated to overcome the limitations of each. Kalman filter algorithms with displacement and velocity states have been developed to combine GNSS and accelerometer observations to obtain optimal displacement solutions. However, sawtooth-like artifacts caused by the bias or tilting of the sensor decrease the accuracy of the displacement estimates. A three-dimensional Kalman filter algorithm with an additional baseline error state has been developed. An experiment with both a GNSS receiver and a strong motion seismometer mounted on a movable platform and subjected to known displacements was carried out. The results clearly show that the additional baseline error state enables the Kalman filter to estimate the instrument's sensor bias and tilt effects and correct the state estimates in real time. Furthermore, the proposed Kalman filter algorithm has been validated with data sets from the 2010 Mw 7.2 El Mayor-Cucapah Earthquake. The results indicate that the additional baseline error state can not only eliminate the linear and quadratic drifts but also reduce the sawtooth-like effects in the displacement solutions. The conventional zero-mean baseline-corrected results cannot show the permanent displacements after an earthquake; the two-state Kalman filter can only provide stable and optimal solutions if the strong motion seismometer has not been moved or tilted by the earthquake. The proposed Kalman filter, by contrast, achieves precise and accurate displacements by estimating and correcting for the baseline error at each epoch. The integration filters out noise-like distortions and thus improves the real-time detection and measurement capability.
The system will return precise and accurate displacements at a high rate for real-time earthquake monitoring.
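A minimal sketch of the idea of augmenting the state with a baseline error term: a scalar-displacement Kalman filter with state [displacement, velocity, baseline error], accelerometer readings as the control input, and GNSS displacement as the measurement. The simulated event, noise levels, and tuning below are illustrative assumptions, not the paper's three-dimensional filter or its data:

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated slip event: a short acceleration pulse, with the
# accelerometer contaminated by a constant baseline offset b_true.
dt, n = 0.01, 5000
b_true = 0.05                                   # m/s^2 baseline error
true_acc = np.zeros(n)
true_acc[100:200] = 1.0                         # 1 s acceleration pulse
true_vel = np.cumsum(true_acc) * dt
true_disp = np.cumsum(true_vel) * dt
acc_meas = true_acc + b_true + 0.02 * rng.normal(size=n)
gnss_disp = true_disp + 0.01 * rng.normal(size=n)

# State [d, v, b]; the measured acceleration enters as a control input
# and the filter subtracts its own baseline estimate b.
F = np.array([[1.0, dt, -0.5 * dt**2],
              [0.0, 1.0, -dt],
              [0.0, 0.0, 1.0]])
B = np.array([0.5 * dt**2, dt, 0.0])
H = np.array([[1.0, 0.0, 0.0]])
Q = np.diag([1e-8, 1e-6, 1e-10])               # process noise (tuned)
R = np.array([[0.01**2]])                      # GNSS displacement noise

x, P = np.zeros(3), np.diag([1.0, 1.0, 1.0])
for k in range(n):
    x = F @ x + B * acc_meas[k]                # predict
    P = F @ P @ F.T + Q
    y = gnss_disp[k] - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # gain
    x = x + K @ y                              # update
    P = (np.eye(3) - K @ H) @ P
# x[2] converges to b_true, removing the quadratic drift from x[0].
```

Without the third state, the same filter would integrate the uncorrected bias into a quadratic displacement drift, which is the failure mode the abstract attributes to the two-state formulation.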
Error-tolerant sign retrieval using visual features and maximum a posteriori estimation.
Wu, Chung-Hsien; Chiu, Yu-Hsien; Cheng, Kung-Wei
2004-04-01
This paper proposes an efficient error-tolerant approach to retrieving sign words from a Taiwanese Sign Language (TSL) database. This database is tagged with visual gesture features and organized as a multilist code tree. These features are defined in terms of the visual characteristics of sign gestures by which they are indexed for sign retrieval and displayed using an anthropomorphic interface. The maximum a posteriori estimation is exploited to retrieve the most likely sign word given the input feature sequence. An error-tolerant mechanism based on mutual information criterion is proposed to retrieve a sign word of interest efficiently and robustly. A user-friendly anthropomorphic interface is also developed to assist learning TSL. Several experiments were performed in an educational environment to investigate the system's retrieval accuracy. Our proposed approach outperformed a dynamic programming algorithm in its task and shows tolerance to user input errors. PMID:15382653
Error estimation for moment analysis in heavy-ion collision experiment
NASA Astrophysics Data System (ADS)
Luo, Xiaofeng
2012-02-01
Higher moments of conserved quantities are predicted to be sensitive to the correlation length and connected to the thermodynamic susceptibility. Thus, higher moments of net-baryon, net-charge and net-strangeness have been extensively studied theoretically and experimentally to explore the phase structure and bulk properties of QCD matter created in heavy-ion collision experiments. As higher moment analysis is statistics hungry, error estimation is crucial for extracting physics information from the limited experimental data. In this paper, we derive the limit distributions and error formulas, based on the delta theorem in statistics, for the various order moments used in experimental data analysis. Monte Carlo simulation is also applied to test the error formulas.
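As a concrete instance of a delta-theorem error formula, the standard error of the sample variance m₂ is approximately sqrt((μ₄ − μ₂²)/n), with the central moments estimated from the same sample. The sketch below checks this against a bootstrap on a plain normal toy model (illustrative; not the net-baryon analysis):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "event-by-event" sample
n = 50_000
x = rng.normal(0.0, 1.0, n)

# Delta-theorem standard error for the sample variance:
#   Var(m2) ~ (mu4 - mu2**2) / n
xc = x - x.mean()
mu2 = np.mean(xc**2)
mu4 = np.mean(xc**4)
err_delta = np.sqrt((mu4 - mu2**2) / n)

# Bootstrap cross-check of the same standard error
boot = np.array([np.var(rng.choice(x, n)) for _ in range(200)])
err_boot = boot.std()
# For a unit normal, both should be close to sqrt(2/n) ~ 0.0063.
```

The same delta-theorem machinery extends to skewness, kurtosis, and the moment ratios used in the heavy-ion analyses, where the resulting formulas involve correspondingly higher central moments.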
Evaluation of Temporal and Spatial Distribution of Error in Modeled Evapotranspiration Estimates
NASA Astrophysics Data System (ADS)
Senarath, S. U.
2004-12-01
Evapotranspiration (ET) constitutes a significant portion of Florida's water budget and is second only to rainfall. Accurate ET estimates are therefore very important for hydrologic modeling work. However, in comparison to rainfall, relatively few ground stations exist for measuring this important model input, so ET estimates produced by models are often subject to error. Satellite-based ET estimates provide an unprecedented opportunity to measure actual ET in sparsely monitored watersheds. They also provide a basis for comparing errors in modeled actual ET estimates induced by: 1) spatial interpolation and data-filling methods; 2) inaccurate and sparse meteorological data; and 3) simplified parameterization schemes. In this study, satellite-based daily actual ET estimates from the Water Conservation Area 3 (WCA-3) watershed in South Florida, USA, are compared with those obtained from a calibrated finite-volume regional hydrologic model for the 1998 and 1999 calendar years. The satellite-based ET estimates used in this study compared well with measured ground-based actual ET data. The WCA-3 watershed is an integral part of Florida's remnant Everglades and covers an area of approximately 2,400 square kilometers. It is compartmentalized by several levees and road embankments and drained by several major canals. It also serves as a major habitat for many wildlife species, a source for urban water supply, and an emergency storage area for flood water. The WCA-3 is located east of the Big Cypress National Preserve and north of the Everglades National Park. Despite its significance, WCA-3 has relatively few ET and meteorological monitoring stations, making it ideally suited for evaluating and quantifying errors in simulated actual ET estimates. The Regional Simulation Model (RSM) developed by the South Florida Water Management District is used for the modeling of these ET estimates.
The RSM is an implicit, finite-volume, continuous, distributed, integrated surface/ground-water model, capable of simulating one-dimensional canal/stream flow and two-dimensional overland flow in arbitrarily shaped areas using a variable triangular mesh. The RSM has several options for modeling actual ET. An empirical parameterization scheme that is dependent on land-cover, water-depth and potential ET is used in this study for estimating actual ET. The parameter-sensitivities of this scheme are investigated and analyzed for several predominant land-cover classes, and dry- and wet-soil conditions. The RSM is calibrated and verified using historical time-series data from 1988 to 1995, and 1996 to 2000, respectively. All sensitivity and error analyses are conducted using estimates from the verification period.
Forest canopy height estimation using ICESat/GLAS data and error factor analysis in Hokkaido, Japan
NASA Astrophysics Data System (ADS)
Hayashi, Masato; Saigusa, Nobuko; Oguma, Hiroyuki; Yamagata, Yoshiki
2013-07-01
Spaceborne light detection and ranging (LiDAR) enables us to obtain information about vertical forest structure directly, and it has often been used to measure forest canopy height or above-ground biomass. However, little attention has been given to comparisons of the accuracy of the different estimation methods of canopy height or to the evaluation of the error factors in canopy height estimation. In this study, we tested three methods of estimating canopy height using the Geoscience Laser Altimeter System (GLAS) onboard NASA's Ice, Cloud, and land Elevation Satellite (ICESat), and evaluated several factors that affected accuracy. Our study areas were Tomakomai and Kushiro, two forested areas on Hokkaido in Japan. The accuracy of the canopy height estimates was verified by ground-based measurements. We also conducted a multivariate analysis using quantification theory type I (multiple-regression analysis of qualitative data) and identified the observation conditions that had a large influence on estimation accuracy. The method using the digital elevation model was the most accurate, with a root-mean-square error (RMSE) of 3.2 m. However, GLAS data with a low signal-to-noise ratio (≤10.0) and that taken from September to October 2009 had to be excluded from the analysis because the estimation accuracy of canopy height was remarkably low. After these data were excluded, the multivariate analysis showed that surface slope had the greatest effect on estimation accuracy, and the accuracy dropped the most in steeply sloped areas. We developed a second model with two equations to estimate canopy height depending on the surface slope, which improved estimation accuracy (RMSE = 2.8 m). These results should prove useful and provide practical suggestions for estimating forest canopy height using spaceborne LiDAR.
Calibration and systematic error analysis for the COBE DMR 4-year sky maps
Kogut, A.; Banday, A.J.; Bennett, C.L.; Gorski, K.M.; Hinshaw,G.; Jackson, P.D.; Keegstra, P.; Lineweaver, C.; Smoot, G.F.; Tenorio,L.; Wright, E.L.
1996-01-04
The Differential Microwave Radiometers (DMR) instrument aboard the Cosmic Background Explorer (COBE) has mapped the full microwave sky to mean sensitivity 26 mu K per 7 degrees field of view. The absolute calibration is determined to 0.7 percent with drifts smaller than 0.2 percent per year. We have analyzed both the raw differential data and the pixelized sky maps for evidence of contaminating sources such as solar system foregrounds, instrumental susceptibilities, and artifacts from data recovery and processing. Most systematic effects couple only weakly to the sky maps. The largest uncertainties in the maps result from the instrument susceptibility to Earth's magnetic field, microwave emission from Earth, and upper limits to potential effects at the spacecraft spin period. Systematic effects in the maps are small compared to either the noise or the celestial signal: the 95 percent confidence upper limit for the pixel-pixel rms from all identified systematics is less than 6 mu K in the worst channel. A power spectrum analysis of the (A-B)/2 difference maps shows no evidence for additional undetected systematic effects.
Systematic Errors in Stereo PIV When Imaging through a Glass Window
NASA Technical Reports Server (NTRS)
Green, Richard; McAlister, Kenneth W.
2004-01-01
This document assesses the magnitude of velocity measurement errors that may arise when performing stereo particle image velocimetry (PIV) with cameras viewing through a thick, refractive window when the calibration is performed in one plane only. The effect of the window is to introduce a refractive error that increases with window thickness and the camera angle of incidence. The calibration should be performed while viewing through the test section window; otherwise a potentially significant error may be introduced that affects each velocity component differently. However, even when the calibration is performed correctly, another error may arise during the stereo reconstruction if the perspective angle determined for each camera does not account for the displacement of the light rays as they refract through the thick window. Care should be exercised when applying a single-plane calibration, since certain implicit assumptions may in fact require conditions that are extremely difficult to meet in a practical laboratory environment. It is suggested that the effort expended to ensure this accuracy may be better spent on a more lengthy volumetric calibration procedure, which does not rely upon the assumptions implicit in the single-plane method and avoids the need for the perspective angle to be calculated.
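The refraction-induced displacement discussed above can be quantified with the standard plane-parallel-plate formula; a minimal sketch, assuming an illustrative glass index (the report's actual window and camera parameters are not reproduced here):

```python
import math

def lateral_shift(thickness_mm, incidence_deg, n_glass=1.52):
    """Lateral ray displacement through a plane-parallel glass window:
        d = t * sin(i) * (1 - cos(i) / sqrt(n^2 - sin(i)^2))
    The index n_glass = 1.52 is an illustrative default, not a value
    from the report."""
    i = math.radians(incidence_deg)
    s = math.sin(i)
    return thickness_mm * s * (1.0 - math.cos(i) / math.sqrt(n_glass**2 - s**2))

# Displacement grows with window thickness and camera incidence angle,
# which is why oblique stereo views through thick glass are affected most:
print(lateral_shift(25.0, 30.0))
print(lateral_shift(25.0, 45.0))
```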
Working with the Brain, Not against It: Correction of Systematic Errors in Subtraction.
ERIC Educational Resources Information Center
Baxter, Paul; Dole, Shelly
1990-01-01
An experimental study was conducted of 2 different approaches to the correction of consistent subtraction errors in 6 students aged 12-13. Tentative findings demonstrate the superiority of the old way/new way method compared to use of Multibase Arithmetic Blocks and place value charts. (JDD)
NASA Astrophysics Data System (ADS)
Biavati, G.; Feist, D. G.; Gerbig, C.; Kretschmer, R.
2015-10-01
The mixing height is a key parameter for many applications that relate surface-atmosphere exchange fluxes to atmospheric mixing ratios, e.g., in atmospheric transport modeling of pollutants. The mixing height can be estimated with various methods: profile measurements from radiosondes as well as remote sensing (e.g., optical backscatter measurements). For quantitative applications, it is important to estimate not only the mixing height itself but also the uncertainty associated with this estimate. However, classical error propagation typically fails on mixing height estimates that use thresholds in vertical profiles of some measured or measurement-derived quantity. Therefore, we propose a method to estimate the uncertainty of a mixing height estimate. The uncertainty we calculate is related not to the physics of the boundary layer (e.g., entrainment zone thickness) but to the quality of the analyzed signals. The method relies on the concept of statistical confidence and on the knowledge of the measurement errors. It can also be applied to problems outside atmospheric mixing height retrievals where properties have to be assigned to a specific position, e.g., the location of a local extreme.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models.
Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
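The model averaging weights under discussion share one functional form across AIC, AICc, BIC, and KIC; a minimal sketch with made-up criterion values, showing how a large criterion spread collapses the weights onto a single model:

```python
import numpy as np

def averaging_weights(ic_values):
    """Model averaging weights from information-criterion values
    (AIC, AICc, BIC, and KIC all enter through the same exp(-delta/2)
    form)."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# A modest criterion spread shares weight across models; a large spread
# puts essentially 100% on the "best" model -- the pathology the study
# traces to using the measurement-error covariance alone.
print(averaging_weights([100.0, 102.0, 103.0]))
print(averaging_weights([100.0, 150.0, 180.0]))
```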
NASA Astrophysics Data System (ADS)
Evin, Guillaume; Thyer, Mark; Kavetski, Dmitri; McInerney, David; Kuczera, George
2014-03-01
The paper appraises two approaches for the treatment of heteroscedasticity and autocorrelation in residual errors of hydrological models. Both approaches use weighted least squares (WLS), with heteroscedasticity modeled as a linear function of predicted flows and autocorrelation represented using an AR(1) process. In the first approach, heteroscedasticity and autocorrelation parameters are inferred jointly with hydrological model parameters. The second approach is a two-stage "postprocessor" scheme, where Stage 1 infers the hydrological parameters ignoring autocorrelation and Stage 2 conditionally infers the heteroscedasticity and autocorrelation parameters. These approaches are compared to a WLS scheme that ignores autocorrelation. Empirical analysis is carried out using daily data from 12 US catchments from the MOPEX set using two conceptual rainfall-runoff models, GR4J, and HBV. Under synthetic conditions, the postprocessor and joint approaches provide similar predictive performance, though the postprocessor approach tends to underestimate parameter uncertainty. However, the MOPEX results indicate that the joint approach can be nonrobust. In particular, when applied to GR4J, it often produces poor predictions due to strong multiway interactions between a hydrological water balance parameter and the error model parameters. The postprocessor approach is more robust precisely because it ignores these interactions. Practical benefits of accounting for error autocorrelation are demonstrated by analyzing streamflow predictions aggregated to a monthly scale (where ignoring daily-scale error autocorrelation leads to significantly underestimated predictive uncertainty), and by analyzing one-day-ahead predictions (where accounting for the error autocorrelation produces clearly higher precision and better tracking of observed data). Including autocorrelation into the residual error model also significantly affects calibrated parameter values and uncertainty estimates. 
The paper concludes with a summary of outstanding challenges in residual error modeling, particularly in ephemeral catchments.
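The WLS + AR(1) residual error model appraised above can be sketched as a log-likelihood; parameter names and the exact likelihood shape are an illustrative reading of such a scheme, not code from the paper:

```python
import numpy as np

def ar1_wls_loglik(resid, q_pred, a, b, rho):
    """Log-likelihood of residuals under a WLS + AR(1) error model:
    sd_t = a + b * q_pred_t (heteroscedasticity linear in predicted flow)
    and AR(1) correlation rho on the standardized residuals."""
    sd = a + b * np.asarray(q_pred, dtype=float)
    z = np.asarray(resid, dtype=float) / sd            # standardized residuals
    innov = z[1:] - rho * z[:-1]                       # AR(1) innovations
    v = 1.0 - rho**2                                   # innovation variance
    ll = -0.5 * np.sum(innov**2) / v
    ll -= 0.5 * (z.size - 1) * np.log(2 * np.pi * v)
    ll -= 0.5 * z[0]**2 + 0.5 * np.log(2 * np.pi)      # initial stationary term
    ll -= np.sum(np.log(sd))                           # Jacobian of the scaling
    return ll

# Synthetic check: residuals generated with rho = 0.8 should be more
# likely under rho = 0.8 than under rho = 0 (the scheme that ignores
# autocorrelation).
rng = np.random.default_rng(1)
n = 2000
q = rng.uniform(1.0, 10.0, n)
sd_true = 0.1 + 0.2 * q
z = np.zeros(n)
for t in range(1, n):
    z[t] = 0.8 * z[t - 1] + rng.normal(scale=np.sqrt(1 - 0.8**2))
resid = z * sd_true
print(ar1_wls_loglik(resid, q, 0.1, 0.2, 0.8))
print(ar1_wls_loglik(resid, q, 0.1, 0.2, 0.0))
```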
Welter-Schultes, Francisco; Görlich, Angela; Lutze, Alexandra
2016-01-01
This study aims to shed light on the reliability of Sherborn's Index Animalium in terms of modern usage. The AnimalBase project spent several years' worth of teamwork dedicated to extracting new names from original sources in the period ranging from 1757 to the mid-1790s. This allowed us to closely analyse Sherborn's work and verify the completeness and correctness of his record. We found the reliability of Sherborn's resource generally very high, but in some special situations the reliability was reduced due to systematic errors or incompleteness in the source material. Index Animalium is commonly used by taxonomists today who rely strongly on Sherborn's record; our study is directed most pointedly at those users. We recommend paying special attention to the situations where we found that Sherborn's data should be read with caution. In addition to some categories of systematic errors and mistakes that were Sherborn's own responsibility, readers should also take into account that nomenclatural rules have been changed or refined in the past 100 years, and that Sherborn's resource may in places present outdated information. One of our main conclusions is that error rates in nomenclatural compilations tend to be lower if one single and highly experienced person such as Sherborn carries out the work than if a team attempts the task. Based on our experience with extracting names from original sources, we conclude that error rates in such manual work on name lists are difficult to reduce below 2-4%. We suggest this is a natural limit and a point of diminishing returns for projects of this nature. PMID:26877658
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
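The predicted dependence of RMS error on local average rain rate can be probed with a power-law fit in log space; a hedged sketch with synthetic numbers (the power-law form is an assumption for illustration, not the paper's exact model):

```python
import numpy as np

def fit_power_law(rbar, rms_err):
    """Fit rms_err ~ a * rbar**b by least squares in log space."""
    x = np.log(np.asarray(rbar, dtype=float))
    y = np.log(np.asarray(rms_err, dtype=float))
    b, log_a = np.polyfit(x, y, 1)       # slope is the exponent
    return np.exp(log_a), b

# Synthetic check: data generated as sigma = 0.5 * rbar**0.5 should
# return roughly (0.5, 0.5).
a, b = fit_power_law([1.0, 2.0, 4.0, 8.0], [0.5, 0.707, 1.0, 1.414])
print(a, b)
```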
A Novel Four-Node Quadrilateral Smoothing Element for Stress Enhancement and Error Estimation
NASA Technical Reports Server (NTRS)
Tessler, A.; Riggs, H. R.; Dambach, M.
1998-01-01
A four-node, quadrilateral smoothing element is developed based upon a penalized-discrete-least-squares variational formulation. The smoothing methodology recovers C1-continuous stresses, thus enabling effective a posteriori error estimation and automatic adaptive mesh refinement. The element formulation originates from a five-node macro-element configuration consisting of four triangular anisoparametric smoothing elements in a cross-diagonal pattern. This element pattern enables a convenient closed-form solution for the degrees of freedom of the interior node, resulting from enforcing explicitly a set of natural edge-wise penalty constraints. The degree-of-freedom reduction scheme leads to a very efficient formulation of a four-node quadrilateral smoothing element without any compromise in robustness and accuracy of the smoothing analysis. The application examples include stress recovery and error estimation in adaptive mesh refinement solutions for an elasticity problem and an aerospace structural component.
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new procedure of stratified sampling is proposed in order to establish an accurate estimation of Varroa destructor populations on the sticky bottom boards of the hive. It is based on spatial sampling theory, which recommends regular grid stratification in the case of a spatially structured process. Because the distribution of varroa mites on sticky boards is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Relative error improvements are presented on the basis of a large sample of simulated sticky boards (n=20,000), which provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in varroa mite number estimation is then measured by the percentage of counts with an error greater than a given level. PMID:26470273
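The regular-grid stratification can be sketched as follows, with one sampled cell per stratum expanded to the stratum total (an illustrative simplification; the paper samples circles centered on the grid elements):

```python
import numpy as np

def stratified_grid_estimate(board, n_rows, n_cols, rng):
    """Estimate the total count on a board by sampling one cell per
    stratum of a regular n_rows x n_cols grid and expanding it to the
    stratum total. Unbiased for any spatial structure; its variance
    drops when the counts are spatially structured."""
    h, w = board.shape
    total = 0.0
    for i in range(n_rows):
        for j in range(n_cols):
            block = board[i * h // n_rows:(i + 1) * h // n_rows,
                          j * w // n_cols:(j + 1) * w // n_cols]
            r = rng.integers(block.shape[0])
            c = rng.integers(block.shape[1])
            total += block[r, c] * block.size
    return total

# A board with a strong spatial gradient, standing in for a frame-driven
# mite distribution:
board = np.arange(100.0).reshape(10, 10)
rng = np.random.default_rng(2)
print(stratified_grid_estimate(board, 5, 5, rng), board.sum())
```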
NASA Astrophysics Data System (ADS)
Yu, Jaehyung; Wagner, Lucas K.; Ertekin, Elif
2015-12-01
The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.
ERIC Educational Resources Information Center
Penfield, Randall D.
2007-01-01
The standard error of the maximum likelihood ability estimator is commonly estimated by evaluating the test information function at an examinee's current maximum likelihood estimate (a point estimate) of ability. Because the test information function evaluated at the point estimate may differ from the test information function evaluated at an…
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Filtering Error Estimates and Order of Accuracy via the Peano Kernel Theorem
Jerome Blair
2011-02-01
The Peano Kernel Theorem is introduced and a frequency domain derivation is given. It is demonstrated that the application of this theorem yields simple and accurate formulas for estimating the error introduced into a signal by filtering it to reduce noise. The concept of the order of accuracy of a filter is introduced and used as an organizing principle to compare the accuracy of different filters.
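The order of accuracy of a filter, in the sense used above, can be probed numerically: a filter of order p reproduces polynomials up to degree p-1 exactly. A minimal sketch (it checks this consequence of the Peano kernel result rather than implementing the kernel machinery):

```python
import numpy as np

def filter_poly_error(weights, degree, t=None):
    """Max interior error of a symmetric FIR filter applied to t**degree.

    A filter of order of accuracy p passes polynomials up to degree p-1
    unchanged; the first degree with nonzero error reveals the order.
    """
    w = np.asarray(weights, dtype=float)
    if t is None:
        t = np.linspace(-1.0, 1.0, 201)
    sig = t**degree
    filtered = np.convolve(sig, w, mode="same")
    half = len(w) // 2
    inner = slice(half, len(t) - half)   # ignore boundary effects
    return np.max(np.abs(filtered[inner] - sig[inner]))

w = np.ones(5) / 5.0                     # 5-point moving average
print(filter_poly_error(w, 1))           # linear signals pass unchanged
print(filter_poly_error(w, 2))           # quadratics incur an error
```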
Estimate of precession and polar motion errors from planetary encounter station location solutions
NASA Technical Reports Server (NTRS)
Pease, G. E.
1978-01-01
Jet Propulsion Laboratory Deep Space Station (DSS) location solutions based on two JPL planetary ephemerides, DE 84 and DE 96, at eight planetary encounters were used to obtain weighted least squares estimates of precession and polar motion errors. The solution for precession error in right ascension yields a value of 0.3×10^-5 ± 0.8×10^-6 deg/year. This maps to a right ascension error of 1.3×10^-5 ± 0.4×10^-5 deg at the first Voyager 1979 Jupiter encounter if the current JPL DSS location set is used. Solutions for precession and polar motion using station locations based on DE 84 agree well with the solution using station locations referenced to DE 96. The precession solution removes the apparent drift in station longitude and spin axis distance estimates, while the encounter polar motion solutions consistently decrease the scatter in station spin axis distance estimates.
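The weighted least squares estimation used for these solutions takes the standard normal-equations form; a generic sketch with made-up data shapes, not the JPL estimation software:

```python
import numpy as np

def weighted_least_squares(A, y, sigma):
    """Weighted least-squares estimate x_hat = (A^T W A)^-1 A^T W y,
    with W = diag(1/sigma^2)."""
    W = np.diag(1.0 / np.asarray(sigma, dtype=float) ** 2)
    N = A.T @ W @ A                          # normal matrix
    x_hat = np.linalg.solve(N, A.T @ W @ y)
    cov = np.linalg.inv(N)                   # formal covariance of the estimate
    return x_hat, cov

# Toy example: estimate a bias and a drift rate from 8 equally weighted
# "encounter" residuals that follow y = 2.0 + 0.3 t exactly.
t = np.arange(8.0)
A = np.column_stack([np.ones_like(t), t])
y = 2.0 + 0.3 * t
x_hat, cov = weighted_least_squares(A, y, np.ones_like(t))
print(x_hat)
```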
Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2012-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level improving to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
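The error estimate "from a base grid to an extrapolated infinite-size grid" is, in spirit, Richardson extrapolation; a minimal sketch under that assumption:

```python
def richardson_extrapolate(f_coarse, f_fine, r, p):
    """Estimate the infinite-grid value from two grid levels via
    Richardson extrapolation (refinement ratio r, observed order p).
    A generic sketch of the idea, not the exact procedure of the paper.
    """
    return f_fine + (f_fine - f_coarse) / (r**p - 1.0)

# If the discrete solution behaves as f(h) = f_exact + C*h^p, two grid
# levels recover f_exact up to roundoff:
f_exact, C = 1.0, 0.5
coarse = f_exact + C * 0.2**2   # h = 0.2
fine = f_exact + C * 0.1**2     # h = 0.1, refinement ratio r = 2
print(richardson_extrapolate(coarse, fine, r=2.0, p=2.0))
```

The gap between the fine-grid value and the extrapolated value then serves as the error estimate for the fine-grid coefficient.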
NASA Astrophysics Data System (ADS)
Wellendorff, Jess; Lundgaard, Keld T.; Møgelhøj, Andreas; Petzold, Vivien; Landis, David D.; Nørskov, Jens K.; Bligaard, Thomas; Jacobsen, Karsten W.
2012-06-01
A methodology for semiempirical density functional optimization, using regularization and cross-validation methods from machine learning, is developed. We demonstrate that such methods enable well-behaved exchange-correlation approximations in very flexible model spaces, thus avoiding the overfitting found when standard least-squares methods are applied to high-order polynomial expansions. A general-purpose density functional for surface science and catalysis studies should accurately describe bond breaking and formation in chemistry, solid state physics, and surface chemistry, and should preferably also include van der Waals dispersion interactions. Such a functional necessarily compromises between describing fundamentally different types of interactions, making transferability of the density functional approximation a key issue. We investigate this trade-off between describing the energetics of intramolecular and intermolecular, bulk solid, and surface chemical bonding, and the developed optimization method explicitly handles making the compromise based on the directions in model space favored by different materials properties. The approach is applied to designing the Bayesian error estimation functional with van der Waals correlation (BEEF-vdW), a semilocal approximation with an additional nonlocal correlation term. Furthermore, an ensemble of functionals around BEEF-vdW comes out naturally, offering an estimate of the computational error. An extensive assessment on a range of data sets validates the applicability of BEEF-vdW to studies in chemistry and condensed matter physics. Applications of the approximation and its Bayesian ensemble error estimate to two intricate surface science problems support this.
NASA Technical Reports Server (NTRS)
Lu, Hui-Ling; Cheng, Victor H. L.; Leitner, Jesse A.; Carpenter, Kenneth G.
2004-01-01
Long-baseline space interferometers involving formation flying of multiple spacecraft hold great promise as future space missions for high-resolution imagery. The major challenge of obtaining high-quality interferometric synthesized images from long-baseline space interferometers is to control these spacecraft and their optics payloads in the specified configuration accurately. In this paper, we describe our effort toward fine control of long-baseline space interferometers without resorting to additional sensing equipment. We present an estimation procedure that effectively extracts relative x/y translational exit pupil aperture deviations from the raw interferometric image with small estimation errors.
Systematic reduction of sign errors in many-body calculations of atoms and molecules
Bajdich, Michal; Tiago, Murilo L; Hood, Randolph Q.; Kent, Paul R; Reboredo, Fernando A
2010-01-01
The self-healing diffusion Monte Carlo algorithm (SHDMC) [Phys. Rev. B 79, 195117 (2009); ibid. 80, 125110 (2009)] is applied to the calculation of ground states of atoms and molecules. By direct comparison with accurate configuration interaction results we show that applying the SHDMC method to the oxygen atom leads to systematic convergence towards the exact ground state wave function. We present results for the small but challenging N$_2$ molecule, where results obtained via the energy minimization method and SHDMC are within experimental accuracy of 0.08 eV. Moreover, we demonstrate that the algorithm is robust enough to be used for calculations of systems at least as large as C$_{20}$ starting from a set of random coefficients. SHDMC thus constitutes a practical method for systematically reducing the fermion sign problem in electronic structure calculations.
GarcÃa-Donas, Julieta G; Dyke, Jeffrey; Paine, Robert R; Nathena, Despoina; Kranioti, Elena F
2016-02-01
Most age estimation methods are problematic when applied to highly fragmented skeletal remains. Rib histomorphometry is advantageous in such cases; yet it is vital to test and revise existing techniques, particularly when used in legal settings (Crowder and Rosella, 2007). This study tested the Stout & Paine (1992) and Stout et al. (1994) histological age estimation methods on a modern Greek sample using different sampling sites. Six left 4th ribs of known age and sex were selected from a modern skeletal collection. Each rib was cut into three equal segments, and two thin sections were acquired from each segment, giving a total of 36 thin sections for preparation and analysis. Four variables (cortical area, intact and fragmented osteon density, and osteon population density) were calculated for each section, and age was estimated according to Stout & Paine (1992) and Stout et al. (1994). The results showed that both methods produced a systematic underestimation of age (by up to 43 years), although a general improvement in accuracy was observed when applying the Stout et al. (1994) formula. Error rates increased with age, with the oldest individual showing extreme differences between real and estimated age. Comparison of the different sampling sites showed small differences between the estimated ages, suggesting that any fragment of the rib could be used without introducing significant error. Yet, a larger sample should be used to confirm these results. PMID:26698389
Error analysis of leaf area estimates made from allometric regression models
NASA Technical Reports Server (NTRS)
Feiveson, A. H.; Chhikara, R. S.
1986-01-01
Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).
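The two-stage estimation and its first-stage error contribution can be sketched as follows. This is an illustrative reconstruction, not the COVER analysis: all numbers are hypothetical, a log-log allometric form is assumed, and the delta-method standard error accounts only for regression-coefficient uncertainty (a complete analysis would add second-stage sampling error):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1 ("dimension analysis"): a few felled trees with directly
# measured leaf area A and bole diameter D (hypothetical numbers).
D_felled = np.array([5.0, 8.0, 11.0, 14.0, 18.0, 22.0, 27.0, 33.0])
A_felled = 0.9 * D_felled**1.8 * np.exp(0.1 * rng.standard_normal(8))

# Allometric regression on the log scale: log A = b0 + b1*log D.
X = np.column_stack([np.ones(8), np.log(D_felled)])
beta, res, *_ = np.linalg.lstsq(X, np.log(A_felled), rcond=None)
s2 = res[0] / (8 - 2)                      # residual variance (log scale)
cov_beta = s2 * np.linalg.inv(X.T @ X)     # covariance of the coefficients

# Stage 2: predict leaf area for every non-destructively measured tree
# on the plot and sum to a plot total (with lognormal bias correction).
D_plot = rng.uniform(5.0, 30.0, 100)
Xp = np.column_stack([np.ones(100), np.log(D_plot)])
areas = np.exp(Xp @ beta + s2 / 2.0)
total = areas.sum()

# First-order (delta-method) standard error of the plot total,
# propagated from the regression-coefficient uncertainty alone.
grad = (areas[:, None] * Xp).sum(axis=0)
se_total = np.sqrt(grad @ cov_beta @ grad)
```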
NASA Astrophysics Data System (ADS)
Chang, Fi-John; Chen, Pin-An; Liu, Chen-Wuing; Liao, Vivian Hsiu-Chuan; Liao, Chung-Min
2013-08-01
Arsenic (As) is an odorless semi-metal that occurs naturally in rock and soil, and As contamination in groundwater resources has become a serious threat to human health. Thus, assessing the spatial and temporal variability of As concentration is highly desirable, particularly in heavily As-contaminated areas. However, various difficulties may be encountered in the regional estimation of As concentration, such as cost-intensive field monitoring, scarcity of field data, identification of important factors affecting As, and over-fitting or poor estimation accuracy. This study develops a novel systematical dynamic-neural modeling (SDM) for effectively estimating regional As-contaminated water quality by using easily-measured water quality variables. To tackle the difficulties commonly encountered in regional estimation, the SDM comprises a neural network, the Nonlinear Autoregressive with eXogenous input (NARX) network, and four statistical techniques: the Gamma test, cross-validation, Bayesian regularization and indicator kriging (IK). For practical application, this study investigated a heavily As-contaminated area in Taiwan. The backpropagation neural network (BPNN) is adopted for comparison purposes. The results demonstrate that the NARX network (root mean square error (RMSE): 95.11 µg l-1 for training; 106.13 µg l-1 for validation) outperforms the BPNN (RMSE: 121.54 µg l-1 for training; 143.37 µg l-1 for validation). The constructed SDM can provide reliable estimation (R² > 0.89) of As concentration at ungauged sites based merely on three easily-measured water quality variables (Alk, Ca2+ and pH). In addition, risk maps under the threshold of the WHO drinking water standard (10 µg l-1) are derived by the IK to visually display the spatial and temporal variation of the As concentration in the whole study area at different time spans. 
The proposed SDM can be practically applied with satisfaction to the regional estimation in study areas of interest and the estimation of missing, hazardous or costly data to facilitate water resources management.
NASA Astrophysics Data System (ADS)
Li, Y.; Ryu, D.; Western, A. W.; Wang, Q.; Robertson, D.; Crow, W. T.
2013-12-01
Timely and reliable streamflow forecasting with acceptable accuracy is fundamental for flood response and risk management. However, streamflow forecasting models are subject to uncertainties from inputs, state variables, model parameters and structures. This has led to an ongoing development of methods for uncertainty quantification (e.g. generalized likelihood and Bayesian approaches) and methods for uncertainty reduction (e.g. sequential and variational data assimilation approaches). These two classes of methods are distinct yet related; e.g., the validity of data assimilation is essentially determined by the reliability of error specification. Error specification has been one of the most challenging areas in hydrologic data assimilation, and there is a major opportunity for implementing uncertainty quantification approaches to inform both model and observation uncertainties. In this study, ensemble data assimilation methods are combined with the maximum a posteriori (MAP) error estimation approach to construct an integrated error estimation and data assimilation scheme for operational streamflow forecasting. We contrast the performance of two different data assimilation schemes: a lag-aware ensemble Kalman smoother (EnKS) and the conventional ensemble Kalman filter (EnKF). The schemes are implemented for a catchment upstream of Myrtleford in the Ovens river basin, Australia, to assimilate real-time discharge observations into a conceptual catchment model, modèle du Génie Rural à 4 paramètres Horaire (GR4H). The performance of the integrated system is evaluated in both a synthetic forecasting scenario with observed precipitation and an operational forecasting scenario with Numerical Weather Prediction (NWP) forecast rainfall. The results show that the error parameters estimated by the MAP approach generate a reliable spread of streamflow predictions. 
Continuous state updating reduces uncertainty in initial states and thereby improves the forecasting accuracy significantly. The EnKS streamflow forecasts are more accurate and reliable than the EnKF for the synthetic scenario. They also alleviate instability in the EnKF due to overcorrection of current state variables. For the operational forecasting case, the forecasts benefit less from state updating and the difference between the EnKS and EnKF becomes less significant. This is because the uncertainty in the NWP rainfall forecasts becomes dominant with increasing lead time. [Figure: Forecast discharge in 2010. Solid curves are observations and gray areas indicate 95% probabilistic forecast intervals. (a) Open-loop ensemble spread based on the error parameters estimated by the MAP; (b) 60-h lead time forecasts based on the EnKS.]
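The analysis step underlying both the EnKF and (applied across a window of lagged states) the EnKS can be sketched as a stochastic, perturbed-observation ensemble update. This is a generic textbook update, not the GR4H/MAP configuration of the study; the two-state demo and all its numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

def enkf_update(ens, y_obs, obs_var, H):
    """Stochastic (perturbed-observation) EnKF analysis step for a
    scalar observation. ens: (n_ens, n_state); H: (n_state,) linear
    observation operator."""
    n_ens = ens.shape[0]
    Hx = ens @ H                               # predicted observations
    A = ens - ens.mean(axis=0)                 # state anomalies
    d = Hx - Hx.mean()                         # predicted-obs anomalies
    P_xy = A.T @ d / (n_ens - 1)               # state-observation covariance
    P_yy = d @ d / (n_ens - 1) + obs_var       # innovation variance
    K = P_xy / P_yy                            # Kalman gain, (n_state,)
    # Perturbed observations keep the analysis ensemble spread consistent.
    y_pert = y_obs + np.sqrt(obs_var) * rng.standard_normal(n_ens)
    return ens + np.outer(y_pert - Hx, K)

# Demo: two conceptual model states (e.g. a production store and a
# routing store); only a discharge-like quantity tied to state 0 is observed.
truth = np.array([10.0, 4.0])
prior = truth + rng.standard_normal((50, 2)) * np.array([3.0, 1.0])
H = np.array([1.0, 0.0])
y = truth @ H + 0.5 * rng.standard_normal()
post = enkf_update(prior, y, 0.25, H)
```

The smoother differs only in also applying the same gain to states at earlier lags within the assimilation window.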
Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti
2014-06-01
Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays, cause changes in both the delay bias and random errors, with possibly strong effects on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using, the window size at which the absolute values of these errors are equal and opposite, thus cancelling each other and allowing minimally biased measurement of neural coding. PMID:24692025
Heath, G.
2012-06-01
This PowerPoint presentation, to be presented at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses the systematic review and harmonization of life-cycle GHG emission estimates for electricity generation technologies.
NASA Technical Reports Server (NTRS)
Sulkanen, Martin E.; Joy, M. K.; Patel, S. K.
1998-01-01
Imaging of the Sunyaev-Zel'dovich (S-Z) effect in galaxy clusters combined with the cluster plasma x-ray diagnostics can measure the cosmic distance scale to high accuracy. However, projecting the inverse-Compton scattering and x-ray emission along the cluster line-of-sight will introduce systematic errors in the Hubble constant, $H_0$, because the true shape of the cluster is not known. This effect remains present for clusters that are otherwise chosen to avoid complications for the S-Z and x-ray analysis, such as plasma temperature variations, cluster substructure, or cluster dynamical evolution. In this paper we present a study of the systematic errors in the value of $H_0$, as determined by the x-ray and S-Z properties of a theoretical sample of triaxial isothermal 'beta-model' clusters, caused by projection effects and observer orientation relative to the model clusters' principal axes. The model clusters are not generated as ellipsoids of rotation, but have three independent 'core radii', as well as a random orientation to the plane of the sky.
Climate change projection with reduced model systematic error over tropical Pacific
NASA Astrophysics Data System (ADS)
Keenlyside, Noel; Shen, Mao-Lin; Selten, Frank; Wiegerinck, Wim; Duane, Gregory
2015-04-01
The tropical Pacific is a major driver of the global climate system. Climate models, however, have difficulties in realistically simulating the region: typically, they have an overly pronounced equatorial cold tongue, an erroneous double inter-tropical convergence zone (ITCZ), and poorly represent ocean-atmosphere interaction. These errors introduce large uncertainties into climate change projections. Here we assess the impact of these errors by performing climate change projections with an interactive model ensemble (SUMO) that has a reduced tropical Pacific error. SUMO consists of one ocean model coupled to two atmospheric models, which only differ in their representation of atmospheric convection. Through optimal coupling weights, synchronization of the atmospheric models over the tropical Pacific is enhanced and the simulation of climate there is dramatically improved: the model realistically simulates the equatorial cold tongue, a single ITCZ, and the Bjerknes positive feedback. SUMO also simulates interannual variability better than the two individual coupled ocean-atmosphere models (i.e., based on the two different atmospheric models). Global warming predicted by SUMO lies between that of the two individual coupled models. However, the projections for the tropical Pacific differ markedly. SUMO simulates a weakening of the zonal SST gradient, while the two individual coupled models simulate a strengthening. The weaker zonal SST gradient leads to around 20% weakening of the Walker Circulation, and there is an increase of precipitation over the entire tropics. In contrast, the two individual coupled models simulate an eastward shift of the Walker Circulation, and enhancement of precipitation over the western Pacific. The differences are related to the representation of ocean-atmosphere interaction. This underscores the importance of improving the simulation of the tropical Pacific to reduce uncertainties in climate change projection.
Estimating and comparing microbial diversity in the presence of sequencing errors
Chiu, Chun-Huo; Chao, Anne
2016-01-01
Estimating and comparing microbial diversity are statistically challenging due to limited sampling and possible sequencing errors for low-frequency counts, producing spurious singletons. The inflated singleton count seriously affects statistical analysis and inferences about microbial diversity. Previous statistical approaches to tackle the sequencing errors generally require different parametric assumptions about the sampling model or about the functional form of frequency counts. Different parametric assumptions may lead to drastically different diversity estimates. We focus on nonparametric methods which are universally valid for all parametric assumptions and can be used to compare diversity across communities. We develop here a nonparametric estimator of the true singleton count to replace the spurious singleton count in all methods/approaches. Our estimator of the true singleton count is in terms of the frequency counts of doubletons, tripletons and quadrupletons, provided these three frequency counts are reliable. To quantify microbial alpha diversity for an individual community, we adopt the measure of Hill numbers (effective number of taxa) under a nonparametric framework. Hill numbers, parameterized by an order q that determines the measures' emphasis on rare or common species, include taxa richness (q = 0), Shannon diversity (q = 1, the exponential of Shannon entropy), and Simpson diversity (q = 2, the inverse of Simpson index). A diversity profile which depicts the Hill number as a function of order q conveys all information contained in a taxa abundance distribution. Based on the estimated singleton count and the original non-singleton frequency counts, two statistical approaches (non-asymptotic and asymptotic) are developed to compare microbial diversity for multiple communities. (1) A non-asymptotic approach refers to the comparison of estimated diversities of standardized samples with a common finite sample size or sample completeness. 
This approach aims to compare diversity estimates for equally-large or equally-complete samples; it is based on the seamless rarefaction and extrapolation sampling curves of Hill numbers, specifically for q = 0, 1 and 2. (2) An asymptotic approach refers to the comparison of the estimated asymptotic diversity profiles. That is, this approach compares the estimated profiles for complete samples or samples whose size tends to be sufficiently large. It is based on statistical estimation of the true Hill number of any order q ≥ 0. In the two approaches, replacing the spurious singleton count by our estimated count, we can greatly remove the positive biases associated with diversity estimates due to spurious singletons and also make fair comparisons across microbial communities, as illustrated in our simulation results and in applying our method to analyze sequencing data from viral metagenomes. PMID:26855872
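The Hill-number machinery referred to above is compact enough to state in code. The sketch below computes plug-in (empirical) Hill numbers for q = 0, 1, 2 from raw abundance counts; it deliberately omits the paper's singleton-count correction and asymptotic estimators, and the count vector is hypothetical:

```python
import numpy as np

def hill_number(counts, q):
    """Plug-in Hill number (effective number of taxa) of order q,
    computed from empirical relative abundances."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    if q == 1:                                 # limit q -> 1: exp(Shannon entropy)
        return np.exp(-np.sum(p * np.log(p)))
    return np.sum(p ** q) ** (1.0 / (1.0 - q))

counts = [50, 30, 10, 5, 3, 1, 1]   # hypothetical taxa abundance counts
richness = hill_number(counts, 0)   # number of observed taxa
shannon  = hill_number(counts, 1)   # exponential of Shannon entropy
simpson  = hill_number(counts, 2)   # inverse Simpson concentration
```

Because spurious singletons inflate the rare-species tail, they bias exactly these plug-in quantities upward, which is why the paper replaces the observed singleton count before estimation.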
Miller, David A.; Nichols, J.D.; McClintock, B.T.; Grant, E.H.C.; Bailey, L.L.; Weir, L.A.
2011-01-01
Efforts to draw inferences about species occurrence frequently account for false negatives, the common situation when individuals of a species are not detected even when a site is occupied. However, recent studies suggest the need to also deal with false positives, which occur when species are misidentified so that a species is recorded as detected when a site is unoccupied. Bias in estimators of occupancy, colonization, and extinction can be severe when false positives occur. Accordingly, we propose models that simultaneously account for both types of error. Our approach can be used to improve estimates of occupancy for study designs where a subset of detections is of a type or method for which false positives can be assumed to not occur. We illustrate properties of the estimators with simulations and data for three species of frogs. We show that models that account for possible misidentification have greater support (lower AIC for two species) and can yield substantially different occupancy estimates than those that do not. When the potential for misidentification exists, researchers should consider analytical techniques that can account for this source of error, such as those presented here. © 2011 by the Ecological Society of America.
NASA Technical Reports Server (NTRS)
Sparks, Lawrence
2013-01-01
Current satellite-based augmentation systems estimate ionospheric delay using algorithms that assume the electron density of the ionosphere is non-negligible only in a thin shell located near the peak of the actual profile. In its initial operating capability, for example, the Wide Area Augmentation System incorporated the thin shell model into an estimation algorithm that calculates vertical delay using a planar fit. Under disturbed conditions or at low latitude where ionospheric structure is complex, however, the thin shell approximation can serve as a significant source of estimation error. A recent upgrade of the system replaced the planar fit algorithm with an algorithm based upon kriging. The upgrade owes its success, in part, to the ability of kriging to mitigate the error due to this approximation. Previously, alternative delay estimation algorithms have been proposed that eliminate the need for invoking the thin shell model altogether. Prior analyses have compared the accuracy achieved by these methods to the accuracy achieved by the planar fit algorithm. This paper extends these analyses to include a comparison with the accuracy achieved by kriging. It concludes by examining how a satellite-based augmentation system might be implemented without recourse to the thin shell approximation.
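The planar-fit stage of such delay estimation can be sketched as an ordinary least-squares plane through vertical delays at ionospheric pierce points. This is a minimal unweighted illustration, not the operational WAAS algorithm (which uses weighted fits and integrity terms); all coordinates and delay values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical vertical delays (meters) at ionospheric pierce points
# (IPPs) scattered around a grid point placed at the local origin.
xy = rng.uniform(-500.0, 500.0, size=(20, 2))          # east/north offsets, km
truth = 3.0 + 0.002 * xy[:, 0] - 0.001 * xy[:, 1]      # a truly planar delay field
delays = truth + 0.1 * rng.standard_normal(20)         # measurement noise

# Planar fit: model delay(x, y) = a + b*x + c*y; at the grid point
# (the origin) the estimated vertical delay is simply the intercept a.
G = np.column_stack([np.ones(20), xy])
coef, *_ = np.linalg.lstsq(G, delays, rcond=None)
delay_at_grid_point = coef[0]
```

When the true delay field is not planar (disturbed or low-latitude conditions), the residual misfit of this plane is precisely the estimation-error source that kriging mitigates.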
NASA Astrophysics Data System (ADS)
Taki, Hirofumi; Yamakawa, Makoto; Shiina, Tsuyoshi; Sato, Toru
2015-07-01
High-accuracy ultrasound motion estimation has become an essential technique in blood flow imaging, elastography, and motion imaging of the heart wall. Speckle tracking has been one of the best motion estimators; however, conventional speckle-tracking methods neglect the effect of out-of-plane motion and deformation. Our proposed method assumes that the cross-correlation between a reference signal and a comparison signal depends on the spatio-temporal distance between the two signals. The proposed method uses the decrease in the cross-correlation value in a reference frame to compensate for the intrinsic error caused by out-of-plane motion and deformation without a priori information. The root-mean-square error of the estimated lateral tissue motion velocity calculated by the proposed method ranged from 6.4 to 34% of that using a conventional speckle-tracking method. This study demonstrates the high potential of the proposed method for improving the estimation of tissue motion using an ultrasound speckle-tracking method in medical diagnosis.
NASA Astrophysics Data System (ADS)
Mccoll, K. A.; Vogelzang, J.; Konings, A. G.; Entekhabi, D.; Piles, M.; Stoffelen, A.
2014-12-01
Calibration, validation and error-characterization of geophysical measurement systems typically require knowledge of the "true" value of the target variable. However, the data considered to represent the "true" values often include their own measurement errors, biasing calibration and validation results. Triple collocation (TC) can be used to estimate the root-mean-square-error (RMSE), using observations from three mutually-independent, error-prone measurement systems. Here, we introduce Extended Triple Collocation (ETC): using exactly the same assumptions as TC, we derive an additional performance metric, the correlation coefficient of the measurement system with respect to the unknown target, R². We demonstrate that R² is the scaled, unbiased signal-to-noise ratio, and provides a complementary perspective compared to the RMSE. We apply it to three collocated wind datasets: the ECMWF numerical weather prediction forecast, ASCAT scatterometer retrievals and in-situ buoy measurements. Since ETC is as easy to implement as TC, requires no additional assumptions, and provides an extra performance metric, it may be of interest in a wide range of geophysical disciplines.
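Under the stated assumptions (linearly calibrated measurements whose errors are mutually uncorrelated and uncorrelated with the truth), TC and ETC reduce to ratios of sample covariances. The sketch below follows the commonly quoted covariance-notation formulas and checks them on synthetic data; the comments echoing the wind datasets are illustrative only:

```python
import numpy as np

def etc(x, y, z):
    """Extended triple collocation: error standard deviation and squared
    correlation with the unknown truth for each of three collocated,
    mutually independent measurement systems."""
    Q = np.cov(np.vstack([x, y, z]))
    err_var = np.array([
        Q[0, 0] - Q[0, 1] * Q[0, 2] / Q[1, 2],
        Q[1, 1] - Q[0, 1] * Q[1, 2] / Q[0, 2],
        Q[2, 2] - Q[0, 2] * Q[1, 2] / Q[0, 1],
    ])
    rho2 = np.array([
        Q[0, 1] * Q[0, 2] / (Q[0, 0] * Q[1, 2]),
        Q[0, 1] * Q[1, 2] / (Q[1, 1] * Q[0, 2]),
        Q[0, 2] * Q[1, 2] / (Q[2, 2] * Q[0, 1]),
    ])
    return np.sqrt(np.clip(err_var, 0.0, None)), rho2

# Synthetic check: one common signal plus independent errors.
rng = np.random.default_rng(4)
t = rng.standard_normal(200_000)                 # unknown "true" wind component
x = t + 0.3 * rng.standard_normal(t.size)        # e.g. NWP forecast
y = t + 0.5 * rng.standard_normal(t.size)        # e.g. scatterometer retrieval
z = t + 0.7 * rng.standard_normal(t.size)        # e.g. buoy measurement
rmse, rho2 = etc(x, y, z)
```

On this synthetic data the recovered error levels approach the prescribed 0.3, 0.5 and 0.7, with no reference dataset ever designated as "truth".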
The curious anomaly of skewed judgment distributions and systematic error in the wisdom of crowds.
Nash, Ulrik W
2014-01-01
Judgment distributions are often skewed and we know little about why. This paper explains the phenomenon of skewed judgment distributions by introducing the augmented quincunx (AQ) model of sequential and probabilistic cue categorization by neurons of judges. In the process of developing inferences about true values, when neurons categorize cues better than chance, and when the particular true value is extreme compared to what is typical and anchored upon, then populations of judges form skewed judgment distributions with high probability. Moreover, the collective error made by these people can be inferred from how skewed their judgment distributions are, and in what direction they tilt. This implies not just that judgment distributions are shaped by cues, but that judgment distributions are cues themselves for the wisdom of crowds. The AQ model also predicts that judgment variance correlates positively with collective error, thereby challenging what is commonly believed about how diversity and collective intelligence relate. Data from 3053 judgment surveys about US macroeconomic variables obtained from the Federal Reserve Bank of Philadelphia and the Wall Street Journal provide strong support, and implications are discussed with reference to three central ideas on collective intelligence, these being Galton's conjecture on the distribution of judgments, Muth's rational expectations hypothesis, and Page's diversity prediction theorem. PMID:25406078
2013-01-01
Trigeminal autonomic cephalalgias (TACs) and hemicrania continua (HC) are relatively rare but clinically rather well-defined primary headaches. Despite the existence of clear-cut diagnostic criteria (The International Classification of Headache Disorders, 2nd edition - ICHD-II) and several therapeutic guidelines, errors in workup and treatment of these conditions are frequent in clinical practice. We set out to review all available published data on mismanagement of TACs and HC patients in order to understand and avoid its causes. The search strategy identified 22 published studies. The most frequent errors described in the management of patients with TACs and HC are: referral to wrong type of specialist, diagnostic delay, misdiagnosis, and the use of treatments without overt indication. Migraine with and without aura, trigeminal neuralgia, sinus infection, dental pain and temporomandibular dysfunction are the disorders most frequently overdiagnosed. Even when the clinical picture is clear-cut, TACs and HC are frequently not recognized and/or mistaken for other disorders, not only by general physicians, dentists and ENT surgeons, but also by neurologists and headache specialists. This seems to be due to limited knowledge of the specific characteristics and variants of these disorders, and it results in the unnecessary prescription of ineffective and sometimes invasive treatments which may have negative consequences for patients. Greater knowledge of and education about these disorders, among both primary care physicians and headache specialists, might contribute to improving the quality of life of TACs and HC patients. PMID:23565739
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, F. J.
1989-01-01
A new technique was developed for the weighting of data from satellite tracking systems in order to obtain an optimum least squares solution and an error calibration for the solution parameters. Data sets from optical, electronic, and laser systems on 17 satellites in GEM-T1 (Goddard Earth Model, 36x36 spherical harmonic field) were employed toward application of this technique for gravity field parameters. Also, GEM-T2 (31 satellites) was recently computed as a direct application of the method and is summarized here. The method employs subset solutions of the data associated with the complete solution and uses an algorithm to adjust the data weights by requiring the differences of parameters between solutions to agree with their error estimates. With the adjusted weights the process provides for an automatic calibration of the error estimates for the solution parameters. The data weights derived are generally much smaller than corresponding weights obtained from nominal values of observation accuracy or residuals. Independent tests show significant improvement for solutions with optimal weighting as compared to the nominal weighting. The technique is general and may be applied to orbit parameters, station coordinates, or parameters other than the gravity model.
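Lerch's technique calibrates weights through differences between subset and complete solutions; the simpler residual-based version of the same idea, sketched below on an invented linear model, conveys the core mechanism: rescale a data set's weight until its calibration statistic (reduced chi-square) equals one, which simultaneously calibrates the formal parameter errors:

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear model y = A p + noise. The nominal accuracy assigned to this
# "tracking system" (sigma_nom = 0.5) overstates its quality: the true
# noise level is 0.8, so the formal errors are initially too optimistic.
n_obs, n_par = 400, 6
A = rng.standard_normal((n_obs, n_par))
p_true = rng.standard_normal(n_par)
y = A @ p_true + 0.8 * rng.standard_normal(n_obs)

w = 1.0 / 0.5**2                                # nominal weight, 1/sigma_nom^2
C = np.linalg.inv(w * A.T @ A)                  # formal covariance at nominal weight
p_hat = C @ (w * A.T @ y)                       # weighted least-squares solution

# Calibration: the weighted residual chi-square per degree of freedom
# should be ~1 if the weight is right; rescale the weight until it is.
chi2 = w * np.sum((y - A @ p_hat) ** 2) / (n_obs - n_par)
w_cal = w / chi2                                # calibrated (smaller) weight
sigma_cal = np.sqrt(1.0 / w_cal)                # implied data accuracy
```

In the full method the agreement condition is imposed on parameter differences between subset and complete solutions rather than on raw residuals, which makes the calibration sensitive to unmodeled systematic effects as well.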
Estimating Root Mean Square Errors in Remotely Sensed Soil Moisture over Continental Scale Domains
NASA Technical Reports Server (NTRS)
Draper, Clara S.; Reichle, Rolf; de Jeu, Richard; Naeimi, Vahid; Parinussa, Robert; Wagner, Wolfgang
2013-01-01
Root Mean Square Errors (RMSE) in the soil moisture anomaly time series obtained from the Advanced Scatterometer (ASCAT) and the Advanced Microwave Scanning Radiometer (AMSR-E; using the Land Parameter Retrieval Model) are estimated over a continental scale domain centered on North America, using two methods: triple collocation (RMSE_TC) and error propagation through the soil moisture retrieval models (RMSE_EP). In the absence of an established consensus for the climatology of soil moisture over large domains, presenting an RMSE in soil moisture units requires that it be specified relative to a selected reference data set. To avoid the complications that arise from the use of a reference, the RMSE is presented as a fraction of the time series standard deviation (fRMSE). For both sensors, the fRMSE_TC and fRMSE_EP show similar spatial patterns of relatively high and low errors, and the mean fRMSE for each land cover class is consistent with expectations. Triple collocation is also shown to be surprisingly robust to representativity differences between the soil moisture data sets used, and it is believed to accurately estimate the fRMSE in the remotely sensed soil moisture anomaly time series. Comparing the ASCAT and AMSR-E fRMSE_TC shows that both data sets have very similar accuracy across a range of land cover classes, although the AMSR-E accuracy is more directly related to vegetation cover. In general, both data sets have good skill up to moderate vegetation conditions.
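The classical triple-collocation estimator behind RMSE_TC is compact enough to sketch. The three products and their error levels below are synthetic stand-ins (not the ASCAT/AMSR-E retrievals), assuming zero-mean, mutually independent errors and no rescaling biases:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
truth = rng.normal(0.0, 1.0, n)                      # "true" anomaly series
sig = {"ascat": 0.5, "amsre": 0.7, "model": 0.3}     # hypothetical error levels
x = truth + rng.normal(0.0, sig["ascat"], n)
y = truth + rng.normal(0.0, sig["amsre"], n)
z = truth + rng.normal(0.0, sig["model"], n)

def triple_collocation_rmse(x, y, z):
    """Classical TC: error RMSE of x from the cross-difference product,
    assuming independent zero-mean errors (symmetric for y and z)."""
    return np.sqrt(np.mean((x - y) * (x - z)))

rmse_x = triple_collocation_rmse(x, y, z)
frmse_x = rmse_x / np.std(x)      # fractional RMSE, as used in the abstract
```

Dividing by the time-series standard deviation gives the fRMSE of the abstract, which sidesteps the choice of a reference climatology.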
NASA Astrophysics Data System (ADS)
Shi, Lei; Wang, Z. J.
2015-08-01
Adjoint-based mesh adaptive methods are capable of distributing computational resources to areas which are important for predicting an engineering output. In this paper, we develop an adjoint-based h-adaptation approach based on the high-order correction procedure via reconstruction (CPR) formulation to minimize the output or functional error. A dual-consistent CPR formulation of hyperbolic conservation laws is developed and its dual consistency is analyzed. Super-convergent functional and error estimates for the output are obtained with the CPR method. Factors affecting the dual consistency, such as the solution point distribution, correction functions, boundary conditions and the discretization approach for the non-linear flux divergence term, are studied. The presented method is then used to perform simulations for the 2D Euler and Navier-Stokes equations with mesh adaptation driven by the adjoint-based error estimate. Several numerical examples demonstrate the ability of the presented method to dramatically reduce the computational cost compared with uniform grid refinement.
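The adjoint-weighted-residual principle that drives such adaptation can be shown on a discrete linear problem (a generic sketch, not the CPR discretization itself): for Au = f and an output J(u) = gᵀu, the functional error of any approximation u_h equals ψᵀ(f − Au_h), where Aᵀψ = g is the adjoint problem; localizing this product drives refinement.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
# SPD tridiagonal system (1D Laplacian-like operator)
A = 4.0 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
f = rng.normal(size=n)
g = rng.normal(size=n)                      # output functional J(u) = g @ u

u = np.linalg.solve(A, f)                   # "exact" discrete solution
u_h = u + 1e-2 * rng.normal(size=n)         # coarse/perturbed approximation

psi = np.linalg.solve(A.T, g)               # adjoint solution
residual = f - A @ u_h
est = psi @ residual                        # adjoint-weighted residual estimate
true_err = g @ u - g @ u_h                  # actual functional error
```

For a linear problem the identity is exact (est == true_err up to rounding); for nonlinear problems, as in the paper, it holds to leading order and the pointwise products psi * residual serve as refinement indicators.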
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
NASA Technical Reports Server (NTRS)
Kalton, G.
1983-01-01
A number of surveys were conducted to study the relationship between the level of aircraft or traffic noise exposure experienced by people living in a particular area and their annoyance with it. These surveys generally employ a clustered sample design, which affects the precision of the survey estimates. Regression analysis of annoyance on noise measures and other variables is often an important component of the survey analysis. Formulae are presented for estimating the standard errors of regression coefficients and ratios of regression coefficients that are applicable with a two- or three-stage clustered sample design. Using a simple cost function, the optimum allocation of the sample across the stages of the design for estimating a regression coefficient is also determined.
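The precision penalty of a clustered design is commonly summarized by Kish's design effect, a standard result related to (but simpler than) the paper's formulae. A minimal sketch, with the average cluster size and intraclass correlation below purely illustrative:

```python
def design_effect(m, rho):
    """Kish design effect for a clustered sample with average cluster
    size m and intraclass correlation rho: deff = 1 + (m - 1) * rho."""
    return 1.0 + (m - 1.0) * rho

def clustered_se(se_srs, m, rho):
    """Inflate a simple-random-sampling standard error to account for
    clustering: SE_clustered = SE_srs * sqrt(deff)."""
    return se_srs * design_effect(m, rho) ** 0.5

# e.g. 20 respondents per area with rho = 0.05 nearly doubles the variance
deff = design_effect(20, 0.05)   # 1.95
```

Even a modest intraclass correlation therefore matters when clusters are large, which is why the optimum allocation across sampling stages is worth computing.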
NASA Technical Reports Server (NTRS)
Mobasseri, B. G.; Mcgillem, C. D.; Anuta, P. E. (Principal Investigator)
1978-01-01
The author has identified the following significant results. The probability of correct classification of various populations in the data was defined as the primary performance index. Because the multispectral data are also of a multiclass nature, a Bayes error estimation procedure dependent on the class statistics alone was required. The classification error was expressed in terms of an N-dimensional integral, where N was the dimensionality of the feature space. The multispectral scanner spatial model was represented by a linear, shift-invariant, multiple-port system in which the N spectral bands comprised the input processes. The scanner characteristic function, the relationship governing the transformation of the input spatial (and hence spectral) correlation matrices through the system, was developed.
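The N-dimensional Bayes-error integral rarely has a closed form, but for two equal-prior Gaussian classes with identity covariance it reduces to Φ(−d/2), where d is the distance between the class means; a Monte Carlo estimate can check this. This is a generic sketch of Bayes error estimation from class statistics, not the scanner model itself:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
n, dim = 100_000, 4
mu = np.full(dim, 1.0)                    # class-1 mean; separation d = 2
d = np.linalg.norm(mu)

x0 = rng.normal(size=(n, dim))            # class 0 ~ N(0, I)
x1 = rng.normal(size=(n, dim)) + mu       # class 1 ~ N(mu, I)

# Bayes rule for equal priors / identity covariance: assign class 1 iff
# mu @ x > ||mu||^2 / 2 (the likelihood-ratio threshold)
err0 = np.mean(x0 @ mu > d**2 / 2)        # class-0 samples misclassified
err1 = np.mean(x1 @ mu <= d**2 / 2)
bayes_mc = 0.5 * (err0 + err1)

bayes_exact = 0.5 * (1.0 - erf((d / 2) / sqrt(2)))   # Phi(-d/2)
```

For general covariances and more classes the integral has no such reduction, which is why procedures driven by the class statistics alone, as in the abstract, are needed.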
Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J.; Song, Xubo
2014-01-01
Purpose: Quantitative analysis of cardiac motion is important for evaluation of heart function. Three dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. Methods: The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field instead of the displacement field was optimized. The optimal velocity field optimizes a novel similarity function, which the authors call the intensity consistency error, defined over multiple consecutive frames evolving to each time point. The optimization problem is solved by using the steepest descent method. Results: Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors’ method. Tests on simulated and real cardiac sequences showed that the authors’ method is more accurate than competing temporally diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions have good agreement with ground truth and the authors’ method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors’ method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. Conclusions: The authors proposed a diffeomorphic motion estimation method with temporal smoothness by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors’ method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods. PMID:24784402
NASA Astrophysics Data System (ADS)
Del Giudice, Dario; Reichert, Peter; Honti, Mark; Scheidegger, Andreas; Albert, Carlo; Rieckermann, Jörg
2013-04-01
Predictions of the urban hydrologic response are of paramount importance to foresee floods and sewer overflows and hence support sensible decision making. Due to several error sources, model results are uncertain. By modeling these uncertainties statistically, we can estimate how reliable predictions are. Most hydrological studies in urban areas (e.g. Freni and Mannina, 2010) assume that residuals E are independent and identically distributed. These hypotheses are usually strongly violated due to neglected deficits in model structure and errors in input data that lead to strong autocorrelation. We propose a new methodology to i) estimate the total uncertainty and ii) quantify the different types of errors affecting model results, namely parametric, structural, input data, and calibration data uncertainty. Thereby we can make more realistic assumptions about the residuals. We consider the residual process to be a sum of an autocorrelated error term B and a memory-less uncertainty term E. As proposed by Reichert and Schuwirth (2012), B, called model inadequacy or bias, is described by a normally-distributed autoregressive process and accounts for structural deficiencies and errors in input measurement. The observation error E is, instead, normally and independently distributed. Since urban watersheds are extremely responsive to precipitation events, we modified this framework, making the bias input-dependent and transforming model results and data for residual variance stabilization. To show the improvement in uncertainty quantification, we analyzed the response of a monitored stormwater system. We modeled the outlet discharge for several rain events by using a conceptual model. For comparison we computed the uncertainties with the traditional independent error model (e.g. Freni and Mannina, 2010). The quality of the prediction uncertainty bands was analyzed through residual diagnostics for the calibration phase and prediction coverage in the validation phase.
The results of this study clearly show that the input-dependent autocorrelated error model outperforms the independent residual representation. This is evident when comparing the fulfillment of the distributional assumptions on E. The bias error model produces realizations of E that are much smaller (and so more realistic), less autocorrelated, and less heteroskedastic than with the current model. Furthermore, the proportion of validation data falling into the 95% credibility intervals is circa 15% higher when accounting for bias than under the independence assumption. Our framework describing model bias appears very promising for improving the fulfillment of the statistical assumptions and for decomposing predictive uncertainty. We believe that the proposed error model will be suitable for many applications because the computational expenses are only negligibly increased compared to the traditional approach. In future work we will show how to use this approach with complex hydrodynamic models to further separate the effects of structural deficits and input uncertainty. References: P. Reichert and N. Schuwirth. 2012. Linking statistical bias description to multiobjective model calibration. Water Resources Research, 48, W09543, doi:10.1029/2011WR011391. G. Freni and G. Mannina. 2010. Bayesian approach for uncertainty quantification in water quality modelling: the influence of prior distribution. Journal of Hydrology, 392, 31-39, doi:10.1016/j.jhydrol.2010.07.043
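The residual decomposition r = B + E described above can be sketched directly: B as a stationary Gaussian AR(1) bias and E as white observation error, with the lag-1 autocorrelation of the sum following the usual mixing formula. All parameter values below are hypothetical, and this ignores the paper's input dependence and variance-stabilizing transformation:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
phi, sig_eta, sig_e = 0.9, 0.3, 0.5       # hypothetical error-model parameters

# bias term B: stationary AR(1) capturing structural/input deficits
b = np.empty(n)
b[0] = rng.normal(0.0, sig_eta / np.sqrt(1.0 - phi**2))
for t in range(1, n):
    b[t] = phi * b[t - 1] + rng.normal(0.0, sig_eta)

e = rng.normal(0.0, sig_e, n)             # iid observation error E
r = b + e                                 # total residual process

var_b = sig_eta**2 / (1.0 - phi**2)
rho1_theory = phi * var_b / (var_b + sig_e**2)   # lag-1 autocorr of r
rho1_sim = np.corrcoef(r[:-1], r[1:])[0, 1]
```

The white term dilutes the autocorrelation below phi, which is exactly why attributing all residual structure to an iid E (the traditional model) misstates the uncertainty.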
Systematization of problems on ball estimates of a convex compactum
NASA Astrophysics Data System (ADS)
Dudov, S. I.
2015-09-01
We consider a class of finite-dimensional problems on the estimation of a convex compactum by a ball of an arbitrary norm in the form of extremal problems whose goal function is expressed via the function of the distance to the farthest point of the compactum and the function of the distance to the nearest point of the compactum or its complement. Special attention is devoted to the problem of estimating (approximating) a convex compactum by a ball of fixed radius in the Hausdorff metric. It is proved that this problem plays the role of the canonical problem: solutions of any problem in the class under consideration can be expressed via solutions of this problem for certain values of the radius. Based on studying and using the properties of solutions of this canonical problem, we obtain ranges of values of the radius in which the canonical problem expresses solutions of the problems on inscribed and circumscribed balls, the problem of uniform estimate by a ball in the Hausdorff metric, the problem of asphericity of a convex body, the problems of spherical shells of the least thickness and of the least volume for the boundary of a convex body. This makes it possible to arrange the problems in increasing order of the corresponding values of the radius. Bibliography: 34 titles.
Atias, Nir; Kupiec, Martin; Sharan, Roded
2016-01-01
The yeast mutant collections are a fundamental tool in deciphering genomic organization and function. Over the last decade, they have been used for the systematic exploration of ~6,000,000 double gene mutants, identifying and cataloging genetic interactions among them. Here we studied the extent to which these data are prone to neighboring gene effects (NGEs), a phenomenon by which the deletion of a gene affects the expression of adjacent genes along the genome. Analyzing ~90,000 negative genetic interactions observed to date, we found that more than 10% of them are incorrectly annotated due to NGEs. We developed a novel algorithm, GINGER, to identify and correct erroneous interaction annotations. We validated the algorithm using a comparative analysis of interactions from Schizosaccharomyces pombe. We further showed that our predictions are significantly more concordant with diverse biological data compared to their mis-annotated counterparts. Our work uncovered about 9500 new genetic interactions in yeast. PMID:26602688
Qi, P; Xia, P
2014-06-01
Purpose: To evaluate the dosimetric impact of systematic MLC positional errors (PEs) on the quality of volumetric-modulated arc therapy (VMAT) plans. Methods: Five patients with head-and-neck cancer (HN) and five patients with prostate cancer were randomly chosen for this study. The clinically approved VMAT plans were designed with 2â€“4 coplanar arc beams with none-zero collimator angles in the Pinnacle planning system. The systematic MLC PEs of 0.5, 1.0, and 2.0 mm on both MLC banks were introduced into the original VMAT plans using an in-house program, and recalculated with the same planned Monitor Units in the Pinnacle system. For each patient, the original VMAT plans and plans with MLC PEs were evaluated according to the dose-volume histogram information and Gamma index analysis. Results: For one primary target, the ratio of V100 in the plans with 0.5, 1.0, and 2.0 mm MLC PEs to those in the clinical plans was 98.8 Â± 2.2%, 97.9 Â± 2.1%, 90.1 Â± 9.0% for HN cases and 99.5 Â± 3.2%, 98.9 Â± 1.0%, 97.0 Â± 2.5% for prostate cases. For all OARs, the relative difference of Dmean in all plans was less than 1.5%. With 2mm/2% criteria for Gamma analysis, the passing rates were 99.0 Â± 1.5% for HN cases and 99.7 Â± 0.3% for prostate cases between the planar doses from the original plans and the plans with 1.0 mm MLC errors. The corresponding Gamma passing rates dropped to 88.9 Â± 5.3% for HN cases and 83.4 Â± 3.2% for prostate cases when comparing planar doses from the original plans and the plans with 2.0 mm MLC errors. Conclusion: For VMAT plans, systematic MLC PEs up to 1.0 mm did not affect the plan quality in term of target coverage, OAR sparing, and Gamma analysis with 2mm/2% criteria.
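The Gamma analysis used above can be sketched in one dimension (a Low-type gamma with the same 2 mm / 2% global criteria; the dose profile and shifts below are synthetic, not the patient plans). Consistent with the abstract, a 1 mm systematic shift passes nearly everywhere while a 2-3 mm shift fails in the gradient regions:

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta=2.0, dd=0.02):
    """1D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dx/dta)^2 + (dose_diff/(dd*global_max))^2).
    gamma <= 1 counts as passing."""
    d_max = d_ref.max()
    g = np.empty_like(d_ref)
    for i in range(len(x_ref)):
        dist = (x_eval - x_ref[i]) / dta
        diff = (d_eval - d_ref[i]) / (dd * d_max)
        g[i] = np.sqrt(dist**2 + diff**2).min()
    return g

x = np.linspace(0.0, 100.0, 501)                 # positions [mm], 0.2 mm grid
dose = np.exp(-(((x - 50.0) / 20.0) ** 2))       # synthetic dose profile
plan_1mm = np.interp(x - 1.0, x, dose)           # 1 mm systematic shift
plan_3mm = np.interp(x - 3.0, x, dose)           # 3 mm systematic shift

pass_1mm = 100.0 * np.mean(gamma_1d(x, dose, x, plan_1mm) <= 1.0)
pass_3mm = 100.0 * np.mean(gamma_1d(x, dose, x, plan_3mm) <= 1.0)
```

A shift within the distance-to-agreement criterion is absorbed by the spatial search, so only shifts exceeding it (or steep-gradient dose differences) drive the passing rate down.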
Hussain, Zahra; Svensson, Carl-Magnus; Besle, Julien; Webb, Ben S.; Barrett, Brendan T.; McGraw, Paul V.
2015-01-01
We describe a method for deriving the linear cortical magnification factor from positional error across the visual field. We compared magnification obtained from this method between normally sighted individuals and amblyopic individuals, who receive atypical visual input during development. The cortical magnification factor was derived for each subject from positional error at 32 locations in the visual field, using an established model of conformal mapping between retinal and cortical coordinates. Magnification of the normally sighted group matched estimates from previous physiological and neuroimaging studies in humans, confirming the validity of the approach. The estimate of magnification for the amblyopic group was significantly lower than the normal group: by 4.4 mm deg^-1 at 1° eccentricity, assuming a constant scaling factor for both groups. These estimates, if correct, suggest a role for early visual experience in establishing retinotopic mapping in cortex. We discuss the implications of altered cortical magnification for cortical size, and consider other neural changes that may account for the amblyopic results. PMID:25761341
Estimating Random Errors Due to Shot Noise in Backscatter Lidar Observations
NASA Technical Reports Server (NTRS)
Liu, Zhaoyan; Hunt, William; Vaughan, Mark A.; Hostetler, Chris A.; McGill, Matthew J.; Powell, Kathy; Winker, David M.; Hu, Yongxiang
2006-01-01
In this paper, we discuss the estimation of random errors due to shot noise in backscatter lidar observations that use either photomultiplier tube (PMT) or avalanche photodiode (APD) detectors. The statistical characteristics of photodetection are reviewed, and photon count distributions of solar background signals and laser backscatter signals are examined using airborne lidar observations at 532 nm using a photon-counting mode APD. Both distributions appear to be Poisson, indicating that the arrival at the photodetector of photons for these signals is a Poisson stochastic process. For Poisson-distributed signals, a proportional, one-to-one relationship is known to exist between the mean of a distribution and its variance. Although the multiplied photocurrent no longer follows a strict Poisson distribution in analog-mode APD and PMT detectors, the proportionality still exists between the mean and the variance of the multiplied photocurrent. We make use of this relationship by introducing the noise scale factor (NSF), which quantifies the constant of proportionality that exists between the root-mean-square of the random noise in a measurement and the square root of the mean signal. Using the NSF to estimate random errors in lidar measurements due to shot noise provides a significant advantage over the conventional error estimation techniques, in that with the NSF uncertainties can be reliably calculated from/for a single data sample. Methods for evaluating the NSF are presented. Algorithms to compute the NSF are developed for the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar and tested using data from the Lidar In-space Technology Experiment (LITE).
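The NSF idea can be demonstrated with simulated detector counts. The sketch below assumes a pure Poisson process scaled by a fixed gain (gain and flux values are hypothetical), in which case the NSF equals the square root of the gain:

```python
import numpy as np

rng = np.random.default_rng(5)
gain, mean_counts, n = 3.0, 40.0, 200_000   # hypothetical gain and photon flux

signal = gain * rng.poisson(mean_counts, n)  # digitizer output, arbitrary units

# NSF: constant of proportionality between the RMS of the random noise
# and the square root of the mean signal
nsf = signal.std() / np.sqrt(signal.mean())
# For a gain-scaled Poisson process: std = gain*sqrt(mean_counts),
# sqrt(mean signal) = sqrt(gain*mean_counts), so nsf = sqrt(gain).
```

Once the NSF is calibrated, the shot-noise error of any single measurement S can be estimated as nsf * sqrt(S), which is the single-sample advantage the abstract describes.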
NASA Astrophysics Data System (ADS)
Plotnikov, M. Yu.; Shkarupa, E. V.
2015-11-01
Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables for the calculation of macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computations and is applicable for any degree of probabilistic dependence of the sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested using the classical Fourier problem and the problem of supersonic flow of rarefied gas through a permeable obstacle.
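A standard way to handle such dependent samples is the method of batch means, sketched below on an AR(1) chain standing in for correlated DSMC samples (this is a generic illustration, not the paper's combined approach). The naive iid formula underestimates the standard error; batching recovers a realistic value:

```python
import numpy as np

def batch_means_se(x, n_batches=50):
    """Standard error of the mean for (possibly) correlated samples
    via the method of batch means."""
    m = len(x) // n_batches
    means = x[:m * n_batches].reshape(n_batches, m).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(n_batches)

rng = np.random.default_rng(6)
n, phi = 100_000, 0.8
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):            # AR(1) chain mimicking correlated samples
    x[t] = phi * x[t - 1] + eps[t]

se_naive = x.std(ddof=1) / np.sqrt(n)   # iid formula: too optimistic
se_batch = batch_means_se(x)
```

Batches much longer than the correlation length are approximately independent, so their means give a valid variance estimate regardless of the dependence within a batch.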
Strömberg, Sten; Nistor, Mihaela; Liu, Jing
2014-11-01
The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world. PMID:25151444
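The temperature and pressure effects the authors simulate reduce to ideal-gas normalization of the measured volume. A minimal sketch (the conditions below are illustrative; water-vapour correction is included as a simple partial-pressure subtraction):

```python
def gas_volume_stp(v_measured_ml, t_ambient_c, p_ambient_kpa,
                   p_water_kpa=0.0):
    """Convert a measured (wet) gas volume to dry volume at standard
    conditions (0 degC, 101.325 kPa) using the ideal gas law."""
    t0_k, p0_kpa = 273.15, 101.325
    p_dry = p_ambient_kpa - p_water_kpa          # remove water vapour
    return v_measured_ml * (p_dry / p0_kpa) * (t0_k / (t_ambient_c + 273.15))

# e.g. a high-altitude lab at 80 kPa and 25 degC: 100 mL measured
# corresponds to roughly 72 mL at standard conditions
v_std = gas_volume_stp(100.0, 25.0, 80.0)
```

This is why uncorrected volumetric readings at high altitude overstate the apparent gas production, matching the abstract's finding that low ambient pressure had the largest effect.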
Errors in Expected Human Losses Due to Incorrect Seismic Hazard Estimates
NASA Astrophysics Data System (ADS)
Wyss, M.; Nekrasova, A.; Kossobokov, V. G.
2011-12-01
The probability of strong ground motion is presented in seismic hazard maps, in which peak ground accelerations (PGA) with 10% probability of exceedance in 50 years are shown by color codes. It has become evident that these maps do not correctly give the seismic hazard. On the seismic hazard map of Japan, the epicenters of the recent large earthquakes are located in regions of relatively low hazard. The errors of the GSHAP maps have been measured by the difference between observed and expected intensities due to large earthquakes. Here, we estimate how the errors in seismic hazard estimates propagate into errors in estimating the potential fatalities and affected population. We calculated the numbers of fatalities that would have to be expected in the regions of the nine earthquakes with more than 1,000 fatalities during the last 10 years with relatively reliable estimates of fatalities, assuming the magnitude that generates, as maximum intensity, the value given by the GSHAP maps. This value is the number of fatalities to be exceeded with probability of 10% during 50 years. In most regions of devastating earthquakes, there are no instruments to measure ground accelerations. Therefore, we converted the PGA expected as a likely maximum based on the GSHAP maps to intensity. The magnitude of the earthquake that would cause the intensity expected by GSHAP as a likely maximum was calculated by M(GSHAP) = (I0 + 1.5)/1.5. The numbers of fatalities expected based on earthquakes with M(GSHAP) were calculated using the loss estimating program QLARM. We calibrated this tool for each case by calculating the theoretical damage and numbers of fatalities (Festim) for the disastrous test earthquakes, generating a match with the observed numbers of fatalities (Fobs = Festim) by adjusting the attenuation relationship within the bounds of commonly observed laws.
Calculating the numbers of fatalities expected for the earthquakes with M(GSHAP) will thus yield results that are comparable with the observations. The difference between FGSHAP and Festim is used here as a quantitative measure of the error in expected risk to humans, resulting from the GSHAP hazard estimates. We find that the expected fatalities and number of injured are underestimated by GSHAP by a factor of 200 (median) and 700 (average) for earthquakes M ≥ 6.9. FGSHAP can be considered approximately correct for the two smallest earthquakes (Bam, M6.8, 2003; Yogyakarta, M6.3, 2006), where the factor of underestimation is two. As a second measure of the inadequacy of GSHAP hazard estimates, we use the difference between the number of people expected to be affected, NGSHAP, and the number estimated for the events that occurred, Nestim. The ratio Nestim/NGSHAP equals 13 (median) and 340 (average) for the large events. Thus, we conclude that the earthquake risk to humans estimated based on GSHAP maps of PGA was underestimated at the locations of recent large disastrous earthquakes by more than two orders of magnitude.
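The intensity-to-magnitude relation quoted in the abstract is simple enough to state as code (the example intensities are illustrative):

```python
def magnitude_from_intensity(i0):
    """Magnitude producing maximum intensity i0, per the abstract's
    relation M(GSHAP) = (I0 + 1.5) / 1.5."""
    return (i0 + 1.5) / 1.5

# e.g. a GSHAP-implied maximum intensity of IX (9) corresponds to M 7.0
m_gshap = magnitude_from_intensity(9.0)
```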
NASA Astrophysics Data System (ADS)
Schwatke, Christian; Dettmering, Denise; Boergens, Eva
2015-04-01
Originally designed for open ocean applications, satellite radar altimetry can also contribute promising results over inland waters. Its measurements help to understand the water cycle of the Earth system and make altimetry a very useful instrument for hydrology. In this paper, we present our methodology for estimating water level time series over lakes, rivers, reservoirs, and wetlands. Furthermore, the error estimation of the resulting water level time series is demonstrated. For computing the water level time series, multi-mission satellite altimetry data are used. The estimation is based on altimeter data from Topex, Jason-1, Jason-2, Geosat, IceSAT, GFO, ERS-2, Envisat, Cryosat, HY-2A, and Saral/Altika, depending on the location of the water body. Depending on the extent of the investigated water body, 1 Hz, high-frequency, or retracked altimeter measurements can be used. Classification methods such as Support Vector Machine (SVM) and Support Vector Regression (SVR) are applied for the classification of altimeter waveforms and for rejecting outliers. For estimating the water levels we use a Kalman filter approach applied to the grid nodes of a hexagonal grid covering the water body of interest. After applying an error limit on the resulting water level heights of each grid node, a weighted average water level per point of time is derived referring to one reference location. For the estimation of water level height accuracies, at first, the formal errors are computed applying a full error propagation within the Kalman filtering. Hereby, the precision of the input measurements is introduced by using the standard deviation of the water level height along the altimeter track. In addition to the resulting formal errors of water level heights, uncertainties of the applied geophysical corrections (e.g. wet troposphere, ionosphere, etc.) and systematic error effects are taken into account to achieve more realistic error estimates.
For validation of the time series, we compare our results with gauges and external inland altimeter databases (e.g. Hydroweb). We obtain very high correlations between absolute water level height time series from altimetry and gauges. Moreover, the comparisons of water level heights are also used for the validation of the error assessment. More than 200 water level time series have already been computed and made publicly available via the "Database for Hydrological Time Series of Inland Waters" (DAHITI), available at http://dahiti.dgfi.tum.de.
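The Kalman filtering with formal error propagation described above can be reduced to a scalar sketch: a random-walk water level updated by noisy height observations, with the filter's covariance giving the formal error. All parameter values are hypothetical, and the real method works per grid node of a hexagonal grid rather than on a single scalar:

```python
import numpy as np

def kalman_level(obs, obs_var, q=1e-4, x0=0.0, p0=1e3):
    """Random-walk Kalman filter for a water-level series: state = level,
    process noise q, per-epoch observation variance obs_var."""
    x, p = x0, p0
    levels, variances = [], []
    for z, r in zip(obs, obs_var):
        p = p + q                      # predict (random-walk dynamics)
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with altimeter height z
        p = (1.0 - k) * p
        levels.append(x)
        variances.append(p)            # formal error from the filter
    return np.array(levels), np.array(variances)

rng = np.random.default_rng(7)
z = rng.normal(5.0, 0.5, 500)          # noisy heights around a 5 m level
levels, variances = kalman_level(z, np.full(500, 0.25))
```

The formal variance shrinks from the diffuse prior toward a steady-state value set by the balance of process and observation noise; in the paper, correction and systematic-error uncertainties are then added on top of this formal error.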
NASA Astrophysics Data System (ADS)
Traivivatana, S.; Phongthanapanich, S.; Dechaumphai, P.
2015-09-01
A posteriori error estimation for the nodeless variable finite element method is presented. A nodeless variable finite element method using a flux-based formulation is developed and combined with an adaptive meshing technique to analyze two-dimensional thermal-structural problems. The continuous flux and stresses are determined by using the flux-based formulation, while the standard linear element interpolation functions are used to determine the discontinuous flux and stresses. To measure the global error, the L2 norm error is selected to find the root-mean-square error over the entire domain. The finite element formulation and its detailed finite element matrices are presented. Accuracy of the estimated error is measured by the percentage relative error. An adaptive meshing technique, which can automatically generate meshes corresponding to solution behavior, is implemented to further improve the solution accuracy. Several examples are presented to evaluate the performance and accuracy of the combined method.
Optimum data weighting and error calibration for estimation of gravitational parameters
NASA Technical Reports Server (NTRS)
Lerch, Francis J.
1991-01-01
A new approach has been developed for determining consistent satellite-tracking data weights in solutions for satellite-only gravitational models. The method employs subset least-squares solutions of the satellite data contained within the complete solution and requires, by adjusting the data weights, that the differences between the parameters of the subset solutions and the complete solution be in agreement with their error estimates. The GEM-T2 model was recently computed and adjusted through a direct application of this method. The estimated data weights are markedly smaller than the weights implied by the formal uncertainties of the measurements. Orbital arc tests as well as surface gravity comparisons show significant improvements for solutions when more realistic data weighting is achieved.
Berryman, J G
2005-01-03
A detailed analytical model of random polycrystals of porous laminates has been developed. This approach permits detailed calculations of poromechanics constants as well as transport coefficients. The resulting earth reservoir model allows studies of both geomechanics and fluid permeability to proceed semi-analytically. Rigorous bounds of the Hashin-Shtrikman type provide estimates of overall bulk and shear moduli, and thereby also provide rigorous error estimates for geomechanical constants obtained from up-scaling based on a self-consistent effective medium method. The influence of hidden or unknown microstructure on the final results can then be evaluated quantitatively. Descriptions of the use of the model and some examples of typical results on the poromechanics of such a heterogeneous reservoir are presented.
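The Hashin-Shtrikman bounds invoked above have a closed form for a two-phase composite's bulk modulus, which can be sketched directly. The moduli below are generic water-like and quartz-like values (in GPa), not the paper's polycrystal-of-laminates microstructure:

```python
def hs_bulk_bounds(k1, mu1, k2, mu2, f1):
    """Hashin-Shtrikman bounds on the effective bulk modulus of a
    two-phase composite; phase 1 is the softer phase, f1 its volume
    fraction. Requires k1 < k2 and mu1 <= mu2."""
    f2 = 1.0 - f1
    lower = k1 + f2 / (1.0 / (k2 - k1) + 3.0 * f1 / (3.0 * k1 + 4.0 * mu1))
    upper = k2 + f1 / (1.0 / (k1 - k2) + 3.0 * f2 / (3.0 * k2 + 4.0 * mu2))
    return lower, upper

# 20% water-like fluid (K=2.2, mu=0) in a quartz-like solid (K=37, mu=44)
lower, upper = hs_bulk_bounds(2.2, 0.0, 37.0, 44.0, 0.2)
```

The gap between the bounds is the rigorous error bar on the up-scaled modulus, i.e. the quantitative influence of the hidden microstructure; note that with a zero-shear-modulus fluid phase the lower bound collapses to the Reuss average, a known special case.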
Geomechanical Analysis with Rigorous Error Estimates for a Double-Porosity Reservoir Model
Berryman, J G
2005-04-11
A model of random polycrystals of porous laminates is introduced to provide a means for studying geomechanical properties of double-porosity reservoirs. Calculations on the resulting earth reservoir model can proceed semi-analytically for studies of either the poroelastic or transport coefficients. Rigorous bounds of the Hashin-Shtrikman type provide estimates of overall bulk and shear moduli, and thereby also provide rigorous error estimates for geomechanical constants obtained from up-scaling based on a self-consistent effective medium method. The influence of hidden (or presumed unknown) microstructure on the final results can then be evaluated quantitatively. Detailed descriptions of the use of the model and some numerical examples showing typical results for the double-porosity poroelastic coefficients of a heterogeneous reservoir are presented.
Complex phase error and motion estimation in synthetic aperture radar imaging
NASA Astrophysics Data System (ADS)
Soumekh, M.; Yang, H.
1991-06-01
Attention is given to a SAR wave equation-based system model that accurately represents the interaction of the impinging radar signal with the target to be imaged. The model is used to estimate the complex phase error across the synthesized aperture from the measured corrupted SAR data by combining the two wave equation models governing the collected SAR data at two temporal frequencies of the radar signal. The SAR system model shows that the motion of an object in a static scene results in coupled Doppler shifts in both the temporal frequency domain and the spatial frequency domain of the synthetic aperture. The velocity of the moving object is estimated through these two Doppler shifts. It is shown that once the dynamic target's velocity is known, its reconstruction can be formulated via a squint-mode SAR geometry with parameters that depend upon the dynamic target's velocity.
Prediction and standard error estimation for a finite universe total when a stratum is not sampled
Wright, T.
1994-01-01
In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when part of the universe is sampled and the remainder is not. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample, with stratification used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented, with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
Gilbert, E.S.; Fix, J.J.
1996-08-01
This report addresses laboratory measurement error in estimates of external doses obtained from personnel dosimeters, and investigates the effects of these errors on linear dose-response analyses of data from epidemiologic studies of nuclear workers. These errors have the distinguishing feature that they are independent across time and across workers. Although the calculations made for this report were based on Hanford data, the overall conclusions are likely to be relevant for other epidemiologic studies of workers exposed to external radiation.
NASA Technical Reports Server (NTRS)
Lakshminarayanan, M. Y.; Gunst, R. F.
1984-01-01
Maximum likelihood estimation of parameters in linear structural relationships under normality assumptions requires knowledge of one or more of the model parameters if no replication is available. The most common assumption added to the model definition is that the ratio of the error variances of the response and predictor variates is known. This paper investigates the use of asymptotic formulae for variances and mean squared errors as a function of sample size and the assumed value for the error variance ratio.
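The model with a known error-variance ratio admits a closed-form ML slope (Deming regression). The sketch below shows that textbook closed form only to make the setup concrete; variable names are ours, and the paper's asymptotic variance formulae are not reproduced here.

```python
import numpy as np

def ml_slope(x, y, lam):
    """ML (Deming) slope of a linear structural relationship when the ratio
    lam = (error variance of y) / (error variance of x) is assumed known."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    sxx = float(np.var(x))                                  # moment of x
    syy = float(np.var(y))                                  # moment of y
    sxy = float(np.mean((x - x.mean()) * (y - y.mean())))   # cross moment
    d = syy - lam * sxx
    return (d + np.sqrt(d * d + 4.0 * lam * sxy * sxy)) / (2.0 * sxy)
```

The sensitivity studied in the paper corresponds to evaluating such estimates with a misspecified `lam` and comparing the resulting mean squared errors.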
NASA Astrophysics Data System (ADS)
Lee, Y.; Keehm, Y.
2011-12-01
Estimating the degree of weathering in stone cultural heritage, such as pagodas and statues, is very important for planning conservation and restoration. Ultrasonic measurement is one of the most commonly used techniques for evaluating the weathering index of stone cultural properties, since it is easy to use and non-destructive. Typically a portable ultrasonic device, PUNDIT, is used with exponential sensors. However, many factors can cause errors in the measurements, such as the operator, the sensor layout, or the measurement direction. In this study, we carried out a variety of measurements with different operators (male and female), different sensor layouts (direct and indirect), and different sensor directions (anisotropy). For operator bias, we found no significant differences attributable to the operator's sex, while the pressure an operator exerts can create larger errors in the measurements; calibrating against a standard sample for each operator is therefore essential. For the sensor layout, we found that the indirect measurement (commonly used for cultural properties, since direct measurement is difficult in most cases) gives a lower velocity than the true one. The correction coefficient differs slightly between rock types: 1.50 for granite and sandstone and 1.46 for marble. From the sensor directions, we found that many rocks show slight anisotropy in ultrasonic velocity, even though they are considered isotropic at the macroscopic scale. Averaging measurements from four different directions (0°, 45°, 90°, 135°) therefore gives much smaller errors (the variance is 2-3 times smaller). In conclusion, we quantified the errors in ultrasonic measurements of stone cultural properties from various sources and suggested the amount of correction and the procedures needed to calibrate the measurements.
Acknowledgement: This study, which forms part of a larger project, was achieved with the support of a national R&D project hosted by the National Research Institute of Cultural Heritage, Cultural Heritage Administration (No. NRICH-1107-B01F).
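The two corrections reported in the abstract are simple to apply in post-processing. The sketch below uses the correction coefficients given above (1.50 for granite and sandstone, 1.46 for marble); the function names are ours, not the authors'.

```python
# Correction coefficients for indirect measurements, as reported in the abstract.
CORRECTION = {"granite": 1.50, "sandstone": 1.50, "marble": 1.46}

def corrected_velocity(indirect_velocity, rock_type):
    """Scale an indirect (surface) ultrasonic velocity toward the direct value."""
    return CORRECTION[rock_type] * indirect_velocity

def directional_average(v0, v45, v90, v135):
    """Average the four measurement directions (0, 45, 90, 135 degrees)
    to suppress the slight anisotropy of the rock."""
    return (v0 + v45 + v90 + v135) / 4.0
```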
Practical error estimates for Reynolds' lubrication approximation and its higher order corrections
Wilkening, Jon
2008-12-10
The Reynolds lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of the Reynolds equation may be thought of as the zeroth-order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(−m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ−1) ∂_x^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k+2. We validate our estimates by comparing with finite element solutions and present numerical evidence suggesting that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.
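The geometric quantities through which h(x) enters the bound (its inverse moments and the scaled derivative max norms) are easy to evaluate numerically for a candidate gap profile. The sketch below is illustrative only: the paper derives the bound analytically, and the uniform-grid quadrature and names are our assumptions.

```python
import numpy as np
from math import factorial

def bound_ingredients(h, dh, order=1, n=2001):
    """Evaluate the quantities through which h(x) enters the error bound:
    the inverse moments of h over [0, 1] and the scaled derivative norms.
    `dh(x, l)` must return the l-th derivative of h at the points x."""
    x = np.linspace(0.0, 1.0, n)
    hv = h(x)
    # Inverse moments: integral of h(x)^(-m) over [0, 1] for m = 1, 3
    # (the mean over a uniform grid approximates the unit-interval integral).
    moments = {m: float(np.mean(hv ** (-m))) for m in (1, 3)}
    # Max norms of (1/l!) h^(l-1) d^l h / dx^l for 1 <= l <= 2k + 2.
    norms = {l: float(np.max(np.abs(hv ** (l - 1) * dh(x, l))) / factorial(l))
             for l in range(1, 2 * order + 3)}
    return moments, norms
```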
Measurement error affects risk estimates for recruitment to the Hudson River stock of striped bass.
Dunning, Dennis J; Ross, Quentin E; Munch, Stephan B; Ginzburg, Lev R
2002-06-01
We examined the consequences of ignoring the distinction between measurement error and natural variability in an assessment of risk to the Hudson River stock of striped bass posed by entrainment at the Bowline Point, Indian Point, and Roseton power plants. Risk was defined as the probability that recruitment of age-1+ striped bass would decline by 80% or more, relative to the equilibrium value, at least once during the time periods examined (1, 5, 10, and 15 years). Measurement error, estimated using two abundance indices from independent beach seine surveys conducted on the Hudson River, accounted for 50% of the variability in one index and 56% of the variability in the other. If a measurement error of 50% was ignored and all of the variability in abundance was attributed to natural causes, the risk that recruitment of age-1+ striped bass would decline by 80% or more after 15 years was 0.308 at the current level of entrainment mortality (11%). However, the risk decreased almost tenfold (0.032) if a measurement error of 50% was considered. The change in risk attributable to decreasing the entrainment mortality rate from 11 to 0% was very small (0.009) and similar in magnitude to the change in risk associated with an action proposed in Amendment #5 to the Interstate Fishery Management Plan for Atlantic striped bass (0.006)--an increase in the instantaneous fishing mortality rate from 0.33 to 0.4. The proposed increase in fishing mortality was not considered an adverse environmental impact, which suggests that potentially costly efforts to reduce entrainment mortality on the Hudson River stock of striped bass are not warranted. PMID:12805897
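The qualitative effect of mistaking measurement error for natural variability can be demonstrated with a toy Monte Carlo model. The random-walk recruitment model and all parameter values below are our stand-ins, not the paper's stochastic model; only the direction of the effect is meant to carry over.

```python
import numpy as np

def decline_risk(sigma2, years=15, threshold=0.2, n=20000, seed=1):
    """Toy stand-in for the risk measure above: Monte Carlo probability that
    a log-scale random walk with step variance sigma2 falls below `threshold`
    times its starting level at least once within `years` steps."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, np.sqrt(sigma2), size=(n, years))
    paths = np.cumsum(steps, axis=1)           # log-abundance trajectories
    return float(np.mean(np.min(paths, axis=1) < np.log(threshold)))

# Attributing all index variability to nature doubles the variance driving
# the projection; recognizing half of it as measurement error lowers the
# projected decline risk, qualitatively as in the abstract.
risk_naive = decline_risk(0.5)       # all variability treated as natural
risk_adjusted = decline_risk(0.25)   # 50% recognized as measurement error
```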
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
Policicchio, Alfonso; Maccallini, Enrico; Kalantzopoulos, Georgios N.; Cataldi, Ugo; Abate, Salvatore; Desiderio, Giovanni; DeltaE s.r.l., c/o Università della Calabria, Via Pietro Bucci cubo 31D, 87036 Arcavacata di Rende, Italy and CNR-IPCF LiCryL, c/o Università della Calabria, Via Ponte P. Bucci, Cubo 31C, 87036 Arcavacata di Rende
2013-10-15
The development of a volumetric apparatus (also known as a Sieverts' apparatus) for accurate and reliable hydrogen adsorption measurement is shown. The instrument minimizes the sources of systematic errors, which are mainly due to inner volume calibration, stability and uniformity of the temperatures, precise evaluation of the skeletal volume of the measured samples, and the thermodynamic properties of the gas species. A series of hardware and software solutions were designed and introduced in the apparatus, which we will indicate as f-PcT, in order to deal with these aspects. The results are presented in terms of an accurate evaluation of the equilibrium and dynamical characteristics of molecular hydrogen adsorption on two well-known porous media. The contribution of each experimental solution to the error propagation of the adsorbed moles is assessed. The developed volumetric apparatus for gas storage capacity measurements allows an accurate evaluation over a four-order-of-magnitude pressure range (from 1 kPa to 8 MPa) and at temperatures ranging between 77 K and 470 K. The acquired results are in good agreement with values reported in the literature.
Finding systematic errors in tomographic data: Characterising ion-trap quantum computers
NASA Astrophysics Data System (ADS)
Monz, Thomas
2013-03-01
Quantum state tomography has become a standard tool in quantum information processing to extract information about an unknown state. Several recipes exist to post-process the data and obtain a density matrix, for instance using maximum-likelihood estimation. These evaluations, and all conclusions drawn from the density matrices, however, rely on valid data - meaning data that agrees both with the measurement model and a quantum model within statistical uncertainties. Given the wide span of possible discrepancies between the laboratory and the theory model, data ought to be tested for its validity prior to any subsequent evaluation. This talk will provide an overview of such tests, which are easily implemented. These will then be applied to tomographic data from an ion-trap quantum computer.
Rozo, Eduardo; Wu, Hao-Yi; Schmidt, Fabian (Caltech)
2011-11-04
When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ~2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ~3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.
Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes
Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri
2009-01-01
We present a new method of metric recovery for minimization of L^p-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.
ERIC Educational Resources Information Center
Paek, Insu; Cai, Li
2014-01-01
The present study was motivated by the recognition that standard errors (SEs) of item response theory (IRT) model parameters are often of immediate interest to practitioners and that there is currently a lack of comparative research on different SE (or error variance-covariance matrix) estimation procedures. The present study investigated item…
Application of parameter estimation to aircraft stability and control: The output-error approach
NASA Technical Reports Server (NTRS)
Maine, Richard E.; Iliff, Kenneth W.
1986-01-01
The practical application of parameter estimation methodology to the problem of estimating aircraft stability and control derivatives from flight test data is examined. The primary purpose of the document is to present a comprehensive and unified picture of the entire parameter estimation process and its integration into a flight test program. The document concentrates on the output-error method to provide a focus for detailed examination and to allow us to give specific examples of situations that have arisen. The document first derives the aircraft equations of motion in a form suitable for application to estimation of stability and control derivatives. It then discusses the issues that arise in adapting the equations to the limitations of analysis programs, using a specific program for an example. The roles and issues relating to mass distribution data, preflight predictions, maneuver design, flight scheduling, instrumentation sensors, data acquisition systems, and data processing are then addressed. Finally, the document discusses evaluation and the use of the analysis results.
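The core of the output-error method is to choose the parameters that minimize the mismatch between measured and model-predicted outputs. The sketch below fits a deliberately tiny first-order model with a Gauss-Newton loop; the model, variable names, and solver details are our illustrative assumptions, not the flight-test software described in the document.

```python
import numpy as np

def simulate(theta, u, dt):
    """Toy first-order model x' = a*x + b*u with output y = x (forward Euler).
    A stand-in for the aircraft equations of motion; theta = (a, b)."""
    a, b = theta
    x, ys = 0.0, []
    for uk in u:
        x += dt * (a * x + b * uk)
        ys.append(x)
    return np.array(ys)

def output_error_fit(y_meas, u, dt, theta0, iters=30, eps=1e-6):
    """Output-error estimate: Gauss-Newton minimization of the squared
    difference between measured and model-predicted outputs."""
    th = np.array(theta0, dtype=float)
    for _ in range(iters):
        base = simulate(th, u, dt)
        r = base - y_meas                          # output residuals
        J = np.column_stack([                      # finite-difference Jacobian
            (simulate(th + eps * e, u, dt) - base) / eps
            for e in np.eye(len(th))])
        th -= np.linalg.lstsq(J, r, rcond=None)[0]
    return th
```

In practice the residuals are weighted by the measurement noise covariance and the "model" is the full set of linearized aircraft equations; the loop structure is the same.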
NASA Astrophysics Data System (ADS)
Xue, Haile; Shen, Xueshun; Chou, Jifan
2015-10-01
Errors inevitably exist in numerical weather prediction (NWP) due to imperfect numerical schemes and physical parameterizations. To eliminate these errors, by considering NWP as an inverse problem, an unknown term in the prediction equations can be estimated inversely by using past data, which are presumed to represent the imperfection of the NWP model (the model error, denoted ME). In this first paper of a two-part series, an iteration method for obtaining the MEs in past intervals is presented, and the results from testing its convergence in idealized experiments are reported. Moreover, two batches of iteration tests were applied in the global forecast system of the Global and Regional Assimilation and Prediction System (GRAPES-GFS) for July-August 2009 and January-February 2010. The datasets for the initial conditions and sea surface temperature (SST) were both based on NCEP (National Centers for Environmental Prediction) FNL (final) data. The results showed that the 6-h forecast errors were reduced to 10% of their original value after a 20-step iteration. Off-line forecast error corrections were then estimated linearly based on the 2-month mean MEs and compared with the forecast errors. The estimated corrections agreed well with the forecast errors, but the linear growth rate of the estimation was steeper than that of the forecast errors. The advantage of this iteration method is that the MEs can provide the foundation for online correction. A larger proportion of the forecast errors can be expected to be canceled out by properly introducing the model error correction into GRAPES-GFS.
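The iteration can be illustrated with a scalar toy model: repeatedly forecast from past "truth" states with the current ME estimate and nudge the estimate by the mean remaining forecast error. This is our one-variable caricature of the scheme; the real method operates on full GRAPES-GFS model states.

```python
import numpy as np

def estimate_model_error(model_step, truth, n_iter=20):
    """Iteratively estimate a constant model-error (ME) tendency so that
    one-step forecasts launched from past truth states match the next
    truth state. Scalar toy version of the inverse-problem iteration."""
    me = 0.0
    for _ in range(n_iter):
        forecasts = np.array([model_step(x) + me for x in truth[:-1]])
        me += float(np.mean(truth[1:] - forecasts))  # drive mean error to zero
    return me
```

Once converged, the estimated ME can be added as an online correction term during forecasting, which is the use case motivated in the abstract.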
Error estimates for (semi-)empirical dispersion terms and large biomacromolecules.
Korth, Martin
2013-10-14
The first-principles modeling of biomaterials has made tremendous advances over the last few years with the ongoing growth of computing power and impressive developments in the application of density functional theory (DFT) codes to large systems. One important step forward was the development of dispersion corrections for DFT methods, which account for the otherwise neglected dispersive van der Waals (vdW) interactions. Approaches at different levels of theory exist, with the most often used (semi-)empirical ones based on pairwise interatomic C6·R^(−6) terms. Similar terms are now also used in connection with semiempirical QM (SQM) methods and density functional tight binding methods (SCC-DFTB). Their basic structure equals the attractive term in Lennard-Jones potentials, common to most force field approaches, but they usually use some type of cutoff function to make the mixing of the (long-range) dispersion term with the already existing (short-range) dispersion and exchange-repulsion effects from the electronic structure theory methods possible. All these dispersion approximations were found to perform accurately for smaller systems, but error estimates for larger systems are very rare and completely missing for really large biomolecules. We derive such estimates for the dispersion terms of DFT, SQM and MM methods using error statistics for smaller systems and dispersion contribution estimates for the PDBbind database of protein-ligand interactions. We find that dispersion terms will usually not be a limiting factor for reaching chemical accuracy, though some force fields and large ligand sizes are problematic. PMID:23963227
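The generic structure of these corrections (pairwise −C6/R^6 terms switched off at short range by a damping function) can be sketched directly. The damping form and all parameter values (s6, d, r0) below are illustrative placeholders, not any published parameterization.

```python
import math

def dispersion_energy(coords, c6, s6=1.0, d=20.0, r0=3.0):
    """Pairwise -C6/R^6 dispersion correction with a Fermi-type damping
    function, the basic structure of the (semi-)empirical terms discussed
    above. Units and parameter values are illustrative only."""
    e = 0.0
    for i in range(len(coords)):
        for j in range(i + 1, len(coords)):
            r = math.dist(coords[i], coords[j])
            # Damping goes to 0 for r << r0 and to 1 for r >> r0, so the
            # term only acts at long range where the QM method misses vdW.
            f_damp = 1.0 / (1.0 + math.exp(-d * (r / r0 - 1.0)))
            e -= s6 * f_damp * math.sqrt(c6[i] * c6[j]) / r ** 6
    return e
```

The error estimates in the paper concern how uncertainties in such terms accumulate when the pair sum runs over thousands of atoms in a protein-ligand complex.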